Quantitative results are rewarding

Recently I spent some time profiling and optimizing a certain program, and tripled its performance. My reaction was predictable. I am awesome, I thought. I made this three times as good as it was before. I'm three times as good a programmer as whoever wrote that slow code. (Even though “whoever wrote that slow code” was me.)

The results of optimization are easy to quantify: not just faster, but three times as fast. I suspect this accounts for some of the lure of optimization. Comparing performance numbers provides an obvious indication of improvement, and exaggerates its value, so even a modest optimization feels like a triumph. Adding a new feature usually has much less emotional reward, especially if it's not technically impressive or immediately useful. Fixing a bug may bring feelings of disgust at the bug, and guilt (if I created it) or contempt (if someone else did), but it seldom feels like much of an accomplishment. Optimization, though, feels good, because it's measurable and therefore obviously valuable (even when it's not).

I also enjoy cleaning up code, for the same reason: if I make something 50 lines shorter, that tells me how much I've accomplished, so I have something to feel good about. Predictably, I feel better about making code shorter than about merely making it more readable, because readability doesn't come with numerical confirmation.

If this effect is real, it would be valuable to have a similar quantification of bugs and features, to make improvements emotionally rewarding. An automated test framework can provide this by conspicuously reporting the number of tests passed, and perhaps the change from previous versions. (It's probably best to report the number passed, not the number failed, so writing more tests makes the number better rather than worse.) If you've used a test framework with this feature, have you noticed any effect on your motivation to fix bugs?
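
To make the idea concrete, here's a minimal sketch of such a runner in Python. It isn't any existing framework's feature; the run_and_report helper and the .passed_count file are invented for illustration. It runs a unittest suite, prints the number of tests passed, and shows the change since the previous run:

    # Minimal sketch: report the number of tests passed (not failed),
    # plus the change from the previous run. The ".passed_count" file
    # is a made-up place to remember the last run's score.
    import unittest
    from pathlib import Path

    COUNT_FILE = Path(".passed_count")

    def run_and_report(suite: unittest.TestSuite) -> None:
        result = unittest.TestResult()
        suite.run(result)
        # Anything that neither failed nor errored counts as passed here.
        passed = result.testsRun - len(result.failures) - len(result.errors)

        previous = int(COUNT_FILE.read_text()) if COUNT_FILE.exists() else 0
        COUNT_FILE.write_text(str(passed))

        delta = passed - previous
        print(f"{passed} tests passed ({'+' if delta >= 0 else ''}{delta} since last run)")

    if __name__ == "__main__":
        run_and_report(unittest.defaultTestLoader.discover("."))

Reporting the passed count rather than the failed count means that writing a new test moves the number up instead of down, which is the point.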

4 comments:

  1. I use such a tool for testing Python code. I haven't noticed it having any effect on my motivation to fix bugs. But this is probably because a "bug" (as the concept manifests in my own thinking) is always something that is reported by a user, that our tests did not catch -- because we did not expect it.

    I suppose it increases my motivation to write tests for my code somewhat, but that's almost circular -- I could spend all my time writing tests for all kinds of cases, likely or not, just to get more passing dots! But I don't, partly because it doesn't feel like it's going to help catch any real bugs (you know, conceptual shortcomings, race conditions, etc.), partly because it feels like the tests are largely there to compensate for the lack of static analysis in the language itself, and partly just because writing tests is so boring.

  2. I decided a long time ago I was only going to be impressed by order-of-magnitude speedups.

  3. I got an order-of-magnitude speedup earlier. Additional improvements were harder to find (and much needed), so 3x still felt like an accomplishment.

  4. It's less impressive if you know how horrible the original implementation was, and that neither the 10x improvement nor the 3x involved any great cleverness. I just figured out what was making it slow and applied mundane solutions. I can't help thinking that a better programmer could have figured it out faster (and implemented parts of it in a less bug-prone way), or that it could be another order of magnitude faster if only I were a little cleverer.

    But the real point here isn't that I felt good because I did something impressive. It's that I felt good even without doing anything impressive, simply because I had a number telling me I had accomplished something.


It's OK to comment on old posts.