Saving cycles… O’rly? – STUPID Series


Last week, I got my “To end with manual testing” article out, which is part of the STUPID series I plan to create. This week, I will focus on code optimization. Old code is often hard to refactor because of its complexity, and part of this complexity often comes from preemptive optimization. In my line of work as an application development expert, I often see programmers use STUPID programming practices. With these articles, I hope those practices will become easier to understand and avoid at all costs.

STUPID development practices:

1. Singleton
2. Tight coupling
3. Untestable code
4. Preemptive optimization (This article)
5. Indescriptive naming
6. Duplicate code

Preemptive optimization

Preemptive optimization comes from predicting bottlenecks and trying to fix them in advance by adopting coding strategies that often don’t comply with best-practice standards. At other times, it goes further, obscuring code by adding useless variables or producing hard-to-read code through indescriptive naming. All tied together, preemptive optimization creates hard-to-follow code because you try to save cycles where you don’t need to.

What techniques should I use to optimize my code then?

First and foremost, your main task is achieving a result. With experience, you should be able to anticipate where serious bottlenecks are going to happen and fix those ahead of time. What I mean by that:

  • Repetitive calls to a function that executes heavy I/O, such as SQL queries
  • Repetitive calls to functions that execute long processes, such as costly loops or web-service calls that could be cached but are not
  • Etc.
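To make the first two bullets concrete, here is a minimal sketch of caching repeated data access within a request; the class and method names are hypothetical, and the SQL query is simulated by a counter instead of a real database call:

```php
<?php
// Sketch: memoize a costly lookup so repeated calls hit the
// "database" only once per request. Names are hypothetical.
class UserRepository
{
    private $cache = [];
    public $queryCount = 0;   // exposed only to illustrate the savings

    public function findName($id)
    {
        if (!array_key_exists($id, $this->cache)) {
            $this->cache[$id] = $this->queryName($id);
        }
        return $this->cache[$id];
    }

    private function queryName($id)
    {
        $this->queryCount++;  // stands in for a real SQL query
        return "user-" . $id;
    }
}

$repo = new UserRepository();
$repo->findName(42);
$repo->findName(42);          // served from the cache
$repo->findName(42);
echo $repo->queryCount, "\n"; // the "query" ran only once
```

The same idea scales up to a shared cache such as APCu or memcached when the data is reused across requests rather than within one.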

Never try to save a few cycles by optimizing a statistical calculation algorithm that would save you a few milliseconds. Trust me, there are far better places to invest your efforts.

The best thing you can do is have fully functional software and use a profiling approach.

Profiling techniques

There are several ways to profile your application, and several scopes to profile:

  • Input/output such as files, web-service calls, database calls, etc.
  • Processing time such as request processing in PHP, mean rendering time of your views, etc.
  • Front-end processing time such as connection, negotiation, download of content, rendering time to the browser viewport

All of these profiling scopes are important, and each can show you where it hurts with little effort. Let’s start with front-end processing since it’s the easiest to do without complex tools.

Profiling the front-end

In recent years, most developers have adopted the web developer tools provided by their favorite browser. Mine to date is definitely Chrome’s web developer tools, natively accessible through the F12 key on Windows and something similar on other operating systems. What most developers don’t do with them is profile their application…

There is a ton of information available in the “Network” panel just waiting for you to use it, all presented in a very simple fashion. Check the “Timeline” portion of the panel and you will see several indicators telling you how much time it took to load each resource, broken down into aspects such as connection time, waiting time and download time. Using this tool properly would deserve a complete article, which I don’t really see the use in writing considering how many people already have. If you want in-depth information on how to profile a web page, those existing guides have you covered.

Profiling the back-end and the I/O

A tool I recently started using is xhprof. It’s an extension that you can easily download and install from PECL. To install it, just run:

pecl config-set preferred_state beta
pecl install xhprof

Note that you might need to install php-common, gcc and make, because PECL compiles the extension locally on your machine.

Once you have activated the extension (by editing your php.ini or adding a file to your php.d folder), you can start profiling your code using a prepended header file. The data will be made available to you in memory; store it somewhere and then analyse it.
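As a sketch, such a prepended file could look like the following, assuming the xhprof extension is loaded; the flags are real xhprof constants, but the storage path is purely illustrative:

```php
<?php
// header.php — wired up via php.ini: auto_prepend_file=/path/to/header.php
// Guarded so the file is harmless where the extension is missing.
if (extension_loaded('xhprof')) {
    // Collect CPU and memory data on top of wall-clock timings.
    xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY);

    register_shutdown_function(function () {
        $data = xhprof_disable();   // raw profile, returned in memory
        // Persist it however you like; here, a timestamped file.
        file_put_contents(
            '/tmp/xhprof.' . uniqid() . '.json',
            json_encode($data)
        );
    });
}
```

Because it runs through `auto_prepend_file`, every request gets profiled without touching your application code; in production you would typically sample only a fraction of requests.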

Obviously, this method is a little crude, and that’s why I prefer using a GUI to analyse and scour the immense amount of data made available to you. One of the tools that works hand in hand with xhprof is xhGui.

There are many articles on getting xhGui working on your system.


Optimize where you need to optimize, not where you think you need to. Some preemptive optimization is fine: there are obvious places you can optimize as you code. Just don’t optimize code that you have never run and profiled unless you are absolutely sure it will be slow.

In the end, favor caching of values, as most bottlenecks come from accessing the same data multiple times; cache data in a local variable if you think you will reuse it. For the rest, limit the scope and complexity of your data access mechanisms. For example, SQL queries that take more than a few milliseconds have to be rethought, or your application won’t scale well.
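As an illustration of caching in a local variable, here is a small sketch that memoizes a slow lookup in a static local for the duration of a request; the web-service call is simulated by a counter, and the function name and values are hypothetical:

```php
<?php
// Sketch: a static local keeps the first result around so every
// caller in the same request reuses it. The remote call is simulated.
$fetches = 0;                       // counts simulated remote calls

function getExchangeRates()
{
    global $fetches;
    static $rates = null;
    if ($rates === null) {
        $fetches++;                 // stands in for a slow HTTP request
        $rates = ['USD' => 1.00, 'EUR' => 0.92];
    }
    return $rates;
}

getExchangeRates();
getExchangeRates();
getExchangeRates();
echo $fetches, "\n";                // the remote call ran only once
```

A static local is the lightest option when the value only needs to live for one request; anything longer-lived belongs in a proper cache with an expiry.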

Finally, opt for leaner front-ends: use optimized, standalone CSS and JavaScript files that can be cached so that re-downloading them is unnecessary. Remember, your web server is aware of file changes; it won’t resend a file’s content uselessly.
