Friday, January 4, 2013

Decisions About Overhead

Premature optimization is all about the decisions we programmers make about overhead before that overhead is actually witnessed in the running system. You might call it potential overhead: cycles spent on valuable resources while other code is competing for those same resources. Decisions about how best to minimize overhead in software are made while it's being designed. While writing code, staring at a particular function and the structures within, we notice that something should be altered. Surely all this initialization work doesn't need to happen before the real objective of the function even starts. And so the decision to optimize, right then and there, is made. In the most straightforward cases, yes, it's worth the five or ten minutes of refactoring and testing it takes for the obvious improvements. But even the simple cases amount to a lot of decision making without the relevant data.
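To make that concrete, here is a minimal sketch (my own illustration, not anything from a particular codebase) of what the five-or-ten-minute refactor usually looks like: deferring an expensive setup step so it runs at most once, and only when actually needed. The `expensive_table` helper is a hypothetical stand-in for whatever initialization caught our eye.

```python
import time

def expensive_table():
    """Hypothetical stand-in for costly setup (loading a cache, parsing config)."""
    time.sleep(0.1)  # simulate the initialization we noticed while reading the function
    return {i: i * i for i in range(1000)}

# Before: the setup cost is paid on every call, needed or not.
def lookup_eager(key):
    table = expensive_table()
    return table.get(key)

# After the refactor: the same cost is deferred and paid at most once.
_table = None

def lookup_lazy(key):
    global _table
    if _table is None:
        _table = expensive_table()
    return _table.get(key)
```

Note what's missing from the decision: we never measured how often the table is actually unused, or whether the 100 milliseconds matter to anyone. That is the "relevant data" we don't have.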

The question then becomes: can the system itself make the best call as to what counts as relevant overhead, and which actors can absorb the impact best? It sounds far-fetched, indeed, even like meta-overhead. It's as if we're taking premature optimization and routinizing it just by considering such a question. But as impractical as embedding such a monitor and decision maker inside our software might be, at the conceptual level it may be worth considering the dimensions of the overhead optimization decisions we make. What would these look like if automated?
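A toy version of that embedded decision maker is at least easy to imagine. The sketch below is pure invention for illustration: it times interchangeable implementations and routes subsequent calls to whichever has proven cheapest, while the bookkeeping it carries around is exactly the meta-overhead the question raises.

```python
import time

class AdaptiveChoice:
    """Toy embedded decision maker: time interchangeable implementations,
    then route calls to whichever has been cheapest on average so far."""

    def __init__(self, *impls):
        self.impls = list(impls)
        self.totals = [0.0] * len(self.impls)
        self.calls = [0] * len(self.impls)

    def __call__(self, *args, **kwargs):
        # Try each implementation once, then exploit the cheapest.
        untried = [i for i, c in enumerate(self.calls) if c == 0]
        if untried:
            idx = untried[0]
        else:
            idx = min(range(len(self.impls)),
                      key=lambda i: self.totals[i] / self.calls[i])
        start = time.perf_counter()
        result = self.impls[idx](*args, **kwargs)
        self.totals[idx] += time.perf_counter() - start
        self.calls[idx] += 1
        return result

# Two interchangeable ways to compute the same value; the monitor decides.
sum_squares = AdaptiveChoice(
    lambda n: sum([i * i for i in range(n)]),  # builds a throwaway list
    lambda n: sum(i * i for i in range(n)),    # streams the values
)
for _ in range(5):
    sum_squares(100_000)
```

Even in this toy, the dimensions of the decision are visible: what gets measured (wall-clock time), who pays for the measuring (every caller), and when the system commits to a choice.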

Think about brushing your teeth and letting the water run. Wasting water is a necessary overhead of brushing your teeth. Or perhaps we can save the water by turning the faucet off and on again, shifting the overhead to the wear-and-tear on the faucet and the time taken to perform the on/off action. Brushing your teeth requires some overhead, and a decision as to who incurs that overhead.

These are the types of questions that programmers think about at the molecular level of their code. We cannot help it, despite the fact that we cannot know ahead of time who will feel the impact of these overhead decisions made at code-writing time. Will we provide a seamless experience for the majority while a handful experience unacceptable latency? And what about other systems running alongside ours? Do we even take them into consideration, or is that a kernel problem? All we can say for certain is that a time will come when, at the application level, some consideration of the overhead generated by our code surfaces in the form of self-monitoring: a dynamic decision about the running system, handled either by the operating system or by the application internally, in terms of how it requests resources.
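If that self-monitoring ever does surface at the application level, one plausible shape is a component that watches the cost of its own resource requests and adjusts how it makes them. The sketch below is speculative, with invented names throughout: a batching writer that, when flushes to a contended resource get slow, holds more in memory and asks for the resource less often, then shrinks back when flushes are cheap.

```python
import time

class AdaptiveBatcher:
    """Speculative sketch: an application that watches its own resource
    requests. Slow flushes mean the resource is contended, so grow the
    batch (fewer, larger requests, more memory held); cheap flushes mean
    we can shrink back and hold less."""

    def __init__(self, flush, batch_size=32, slow_seconds=0.05):
        self.flush = flush            # callable that hands a batch to the resource
        self.batch_size = batch_size
        self.slow_seconds = slow_seconds
        self.buffer = []

    def write(self, item):
        self.buffer.append(item)
        if len(self.buffer) < self.batch_size:
            return
        start = time.perf_counter()
        self.flush(self.buffer)
        elapsed = time.perf_counter() - start
        self.buffer = []
        # The dynamic decision: who absorbs the overhead, and when.
        if elapsed > self.slow_seconds:
            self.batch_size = min(self.batch_size * 2, 4096)
        else:
            self.batch_size = max(self.batch_size // 2, 32)
```

Which is just the faucet again: the overhead doesn't disappear, it gets shifted onto whichever actor the running system decides can best absorb it.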
