Metrics Please!

measure twice and cut (code) once

Again it's time to get something off my chest...

Twice in the last week my project has been bitten by the same basic issue. The first problem was a memory leak in a small but important tool, caused by introducing faulty caching to a lookup. The second problem was someone trying to solve an out-of-memory error by getting a client to avoid duplicating data in a request. This was fine, except that the service receiving the request did the duplication anyway, so the problem remained.

Now these are both the sort of bugs/problems that you see all the time, so why am I wound up about it? It's because they both made it into production and really, really shouldn't have.

The first change shouldn't have been made at all. The lookup in question is rarely performed, and the key is likely to be unique when it is - i.e. any cache would almost never be hit anyway.

Optimization Rule Number 1 - Always take suitable metrics BEFORE making any optimization. Don't optimize on gut feel. If your metrics show that a particular piece of code or system is not a bottleneck or a performance problem, then don't waste time modifying the code at all!
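
To make rule 1 concrete for the caching case above, here's a minimal sketch (class and method names are mine, not from the actual tool): wrap the lookup so it counts hits and misses, and only keep the cache if the measured hit rate justifies it.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Function;

// Hypothetical instrumented cache: counts hits and misses so the hit rate
// can be measured before deciding whether caching this lookup is worthwhile.
// Assumes the underlying lookup never returns null.
public class InstrumentedCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final AtomicLong hits = new AtomicLong();
    private final AtomicLong misses = new AtomicLong();
    private final Function<K, V> lookup;

    public InstrumentedCache(Function<K, V> lookup) {
        this.lookup = lookup;
    }

    public V get(K key) {
        V value = cache.get(key);
        if (value != null) {
            hits.incrementAndGet();
            return value;
        }
        misses.incrementAndGet();
        value = lookup.apply(key);
        cache.put(key, value);
        return value;
    }

    // If this stays near zero under a realistic workload, the cache is pure
    // cost (and, unbounded as it is here, a potential memory leak).
    public double hitRate() {
        long total = hits.get() + misses.get();
        return total == 0 ? 0.0 : (double) hits.get() / total;
    }
}
```

For the lookup described above - rare calls with effectively unique keys - the measured hit rate would sit at zero, which is exactly the number that should have killed the change before it shipped.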

The second change was certainly needed (a complex calculation for a report was never completing). The problem was correctly identified but the solution wasn't complete.

Optimization Rule Number 2 - Always take metrics AFTER making any optimization. These should be compared with the metrics taken before so you know if the changes have had a significant effect.
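
For rule 2, even a crude before/after harness shows whether a change actually moved the numbers. The sketch below is illustrative only: the workload names are placeholders, and heap deltas measured this way are noisy, so a profiler or production metrics are better for anything serious.

```java
// Rough before/after comparison harness. Run the same workload against the
// old and new code paths and compare; an "optimization" that doesn't move
// the numbers hasn't actually fixed anything.
public class BeforeAfterMetrics {

    static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    static void run(String label, Runnable workload) {
        long heapBefore = usedHeapBytes();
        long start = System.nanoTime();
        workload.run();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        long heapDeltaKb = (usedHeapBytes() - heapBefore) / 1024;
        System.out.printf("%s: %d ms, heap delta %d KB%n", label, elapsedMs, heapDeltaKb);
    }

    public static void main(String[] args) {
        run("report (before change)", BeforeAfterMetrics::oldReportCalculation);
        run("report (after change)", BeforeAfterMetrics::newReportCalculation);
    }

    // Placeholders for the real workloads under test.
    static void oldReportCalculation() { /* ... */ }
    static void newReportCalculation() { /* ... */ }
}
```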

Measure twice and cut the code once!



Re: Metrics Please!

If you're optimising you should definitely take metrics! At the very least it results in a story that helps evangelise the value of good code and process: see Code Metrics.

I thought these were the rules of optimisation, though! ;)

Re: Metrics Please!

Also, I would always write some basic unit tests for any cache pool because they can have a major impact on stability. Just a simple test to check you hit and miss when you should hit and miss :)
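
Something like the following (JUnit 5, using the hypothetical InstrumentedCache sketched earlier) would cover the basic hit/miss behaviour the comment describes: miss on the first lookup, hit on a repeat, miss again on a new key.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.concurrent.atomic.AtomicInteger;
import org.junit.jupiter.api.Test;

// Sketch of the kind of basic cache test suggested above: count how often the
// backing lookup is called to prove the cache hits and misses when it should.
class CacheHitMissTest {

    @Test
    void hitsAndMissesWhenExpected() {
        AtomicInteger backendCalls = new AtomicInteger();
        InstrumentedCache<String, String> cache =
                new InstrumentedCache<>(key -> {
                    backendCalls.incrementAndGet();
                    return "value-for-" + key;
                });

        cache.get("a");                       // miss: goes to the backend
        assertEquals(1, backendCalls.get());

        cache.get("a");                       // hit: backend not called again
        assertEquals(1, backendCalls.get());

        cache.get("b");                       // miss: new key, backend called
        assertEquals(2, backendCalls.get());
    }
}
```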

Re: Metrics Please!

People always underestimate just how hard it is to write a good cache. What seems like a simple problem quickly turns into a can of worms.

Re: Metrics Please!

Rules of Optimisation (from M.A. Jackson):

* Rule 1: Don't do it.
* Rule 2 (for experts only): Don't do it yet.

Re: Metrics Please!

In many respects they are true, and I particularly like this one as well: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."

I do see one problem, though: developers can become lazy and write poorly performing code. I guess there is a balance between writing efficient code and prematurely optimizing...

This article is interesting as well: http://java.sys-con.com/read/464426.htm
