Again it's time to get something off my chest...
Twice in the last week my project has been bitten by the same basic issue. The first problem was a memory leak in a small but important tool, caused by introducing faulty caching to a lookup. The second problem was someone trying to solve an out-of-memory error by getting a client to avoid duplicating data in a request. This was fine, except that the service that received the request did the duplication anyway, so the problem remained.
Now, these are both the sort of bugs you see all the time, so why am I wound up about them? Because they both made it into production, and they really, really shouldn't have.
The first change should never have been made. The lookup in question is rarely performed, and the key is almost always unique when it is - i.e. the cache would practically never be hit anyway.
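To make the failure mode concrete, here's a minimal sketch (the names and the lookup itself are made up, not the actual tool): an unbounded cache in front of a lookup with almost-always-unique keys never pays for itself, it just grows.

```python
def expensive_lookup(key):
    # Stand-in for the real lookup (a database or service call, say).
    return key.upper()

_cache = {}  # unbounded: every entry stays here forever

def lookup_cached(key):
    # With almost-always-unique keys this branch is taken every time,
    # so the cache never gets a hit - it only accumulates entries.
    if key not in _cache:
        _cache[key] = expensive_lookup(key)
    return _cache[key]
```

Even if the lookup had been worth caching, something bounded like functools.lru_cache(maxsize=...) would at least have capped the damage.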
Optimization Rule Number 1 - Always take suitable metrics BEFORE making any optimization. Never optimize on gut feel. If your metrics show that a particular piece of code or system is not a bottleneck or a performance problem, then don't waste time modifying it at all!
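For example (a hypothetical harness, not the code from my project), Python's built-in profiler is enough to tell you whether the suspect code is actually where the time goes:

```python
import cProfile
import pstats

def workload():
    # Stand-in for the code path you suspect is the problem.
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# The top entries show where the time actually goes - which may
# well not be where your gut said it would.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```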
The second change was certainly needed (a complex calculation for a report was never completing). The problem was correctly identified, but the solution wasn't complete.
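Roughly the shape of what happened, reconstructed with made-up names (this is a guess at the mechanism, not the actual code): the client-side fix looked right in isolation, but the service undid it on receipt.

```python
def build_request(rows):
    # The client-side fix: send each row once instead of repeating it.
    return {"rows": list(dict.fromkeys(rows))}

def handle_request(request):
    # The service, unchanged: it still duplicates the rows into several
    # working copies, so peak memory use is much the same as before.
    working = list(request["rows"])
    audit = list(request["rows"])
    return working, audit
```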
Optimization Rule Number 2 - Always take metrics AFTER making any optimization. Compare them with the metrics taken before, so you know whether the changes have had a significant effect.
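Again as a sketch (hypothetical harness, made-up functions), the point is simply to run the same measurement either side of the change and compare:

```python
import timeit

def measure(fn, repeat=5, number=1_000):
    # Best-of-N timing; use the same harness before and after the change.
    return min(timeit.repeat(fn, repeat=repeat, number=number))

def calculation_before():
    return sum(i * i for i in range(10_000))

def calculation_after():
    return sum(map(lambda i: i * i, range(10_000)))

before = measure(calculation_before)
after = measure(calculation_after)
print(f"before={before:.4f}s after={after:.4f}s "
      f"({(before - after) / before:+.1%} change)")
```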
Measure twice and cut the code once!
If you're optimising, you should definitely take metrics! At the very least it results in a story that helps evangelise the value of good code and process: see Code Metrics.
I thought these were the rules of optimisation, though! ;)