Consumers of the cloud come in only a few varieties. Maybe even just two. There are those who really enjoy configuring the low-level platform aspects of what they're deploying. To these consumers, CPU topology matters, as do the specifics of the network card and whether or not ACPI features are available. Platform-level concerns are of the utmost importance in any environment — cloud or no cloud. But if we treat the supporting platform as the only critical piece of the system, we won't succeed in our mission. Which brings me to my second class of cloud consumer — the application developer. The easy solution, and one that I think we're relying on a little too much today, is to not treat the application as an actor in the cloud landscape at all. Instead, the established approach is to take the applications that actually fulfil business requirements and put them into a platform container. The container is a cloud-friendly wrapper that encapsulates the application to the point where it doesn't know it's virtual.
Maybe this arrangement isn't a problem. Maybe it's the safe approach to designing applications, since they don't need to be designed any differently to accommodate their environment. To the application, it may as well be running on a physical box — and that's the whole premise behind virtualization. Furthermore, the focus on the platform layer enables legacy applications to be targeted for deployment inside a cloud service provider. If I'm an administrator looking to deploy some dusty old system with really finicky hardware requirements, I can bend the platform to that system's needs. This is essentially the status quo for the cloud consumer actor, and we can argue that it's actually going well, because we can do some pretty interesting things within cloud environments that would otherwise be really, really hard. But I like the bleeding edge of risky technology — I can't help myself. What else can we give the application to treat it like a true cloud participant?
Does this mean that we're viewing the platform wrapper that acts as a protective coating around the web servers, the virtual private networks, the you-name-it, as the application itself? In a way, I would argue that we're doing exactly that. We've zoomed in 40X on all the potential problems — and potential gains — at the virtual resource level. How can we take this virtualized platform, equip it with dynamic hardware resources, and open up data channels that serve a buffet of information? Because we've looked so closely at these issues, we're only now starting to get really good at solving these problems: using cloud platform technology properly to get more for less, at least from a virtual resource standpoint. But if we were to step away from the microscope and briefly glance in both directions, we'd see that black-box virtual machines aren't all-encompassing. They help, and they take us a long way, but I think we need to open up the encapsulated boundary that protects the applications we're writing for today's problems, and let some of the cloud technology seep through the cracks.
Google App Engine is an example of the opposite extreme from the virtualization approach. It's entirely web application focused, providing the APIs required to scale up operations. Here, we're not concerned with allocating enough storage, memory, CPU, and so on. That happens as a side effect of how the application is designed, which APIs it uses, and the demand for it. It's easy to meter resource consumption based on both API usage and raw data consumption. The reason you don't worry about typical virtual machine resource allocation inside App Engine is that you're not thinking about it that way. You're writing an application that uses the cloud, as opposed to an application that gets placed into the cloud and left to its own devices. PiCloud is another example of this approach.
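To make the metering idea concrete, here's a minimal sketch of billing by API usage and data consumption instead of by provisioned resources. The `MeteredClient` class and its method names are invented for illustration — App Engine's real billing model is far richer — but the principle is the same: the platform charges per call and per byte, not per allocated VM.

```python
# Hypothetical sketch: metering by API usage rather than allocated resources.
# MeteredClient is invented for illustration; it is not a real provider API.
from collections import Counter


class MeteredClient:
    def __init__(self):
        self.api_calls = Counter()   # how many times each API was invoked
        self.bytes_stored = 0        # raw data consumption

    def put(self, key, value):
        # Every datastore write is counted, and its payload size is tallied.
        self.api_calls["datastore.put"] += 1
        self.bytes_stored += len(value)

    def fetch(self, url):
        # Outbound fetches are metered per call.
        self.api_calls["urlfetch.fetch"] += 1
        return f"response from {url}"  # placeholder response


client = MeteredClient()
client.put("greeting", b"hello")
client.fetch("http://example.com")
print(dict(client.api_calls))  # {'datastore.put': 1, 'urlfetch.fetch': 1}
print(client.bytes_stored)     # 5
```

The point is that the usage ledger falls out of the API calls the application already makes; no one sizes a virtual machine up front.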
The downside to the App Engine approach is that we lose the beneficial ignorance applications exhibit when placed inside virtual machines. With the cloud API approach, we're introducing one more dependency to the application, which ties it to a particular environment. That is, of course, unless cloud providers took a different approach to what is exposed to the customer. Can there be an environment that offers the fine granularity of virtual machines and yet provides the applications running inside with mechanisms to better control their environment?
I think a big reason applications get deployed to the cloud in the first place is that their creators don't know ahead of time what resources they'll need. This is really a problem at the platform level — the application looks to the operating system for resources, and the operating system looks to the virtual machine housing it. One thing that App Engine has done, for me anyway, is introduce the idea that cloud technology is not just about raw resource allocation. It's also about functionality at the application level. We can use APIs offered by cloud providers to perform big map-reduce jobs or set up a task queue. These are all really valuable features at the application level. One problem with this philosophy is that it's really difficult to standardize on anything at the application level. In contrast, it's much more straightforward to allocate more memory or to clone a virtual disk using the same API across vendors. So to help enhance cloud APIs targeted at applications, I think providers need to look at the standard usage scenarios and offer something that doesn't break code if it's running somewhere else.
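One way to get "something that doesn't break code if it's running somewhere else" is to standardize the interface rather than the provider. The sketch below is purely illustrative — `TaskQueue`, `InMemoryTaskQueue`, and `process_signups` are invented names, not any vendor's API — but it shows the shape of the idea: application code depends only on an abstract task-queue contract, and each provider ships its own backend behind it.

```python
# Hypothetical sketch of a provider-neutral application-level cloud API.
# The application codes against TaskQueue; swapping providers means swapping
# the concrete backend, not rewriting the application.
from abc import ABC, abstractmethod
from collections import deque


class TaskQueue(ABC):
    """The contract an application codes against, regardless of provider."""

    @abstractmethod
    def add(self, task):
        """Enqueue a unit of work."""

    @abstractmethod
    def pop(self):
        """Fetch the next unit of work, or None if the queue is empty."""


class InMemoryTaskQueue(TaskQueue):
    """A stand-in backend; a real provider would ship its own implementation."""

    def __init__(self):
        self._tasks = deque()

    def add(self, task):
        self._tasks.append(task)

    def pop(self):
        return self._tasks.popleft() if self._tasks else None


def process_signups(queue: TaskQueue, emails):
    # Application logic sees only the TaskQueue interface, never the backend.
    for email in emails:
        queue.add(("send_welcome", email))


queue = InMemoryTaskQueue()
process_signups(queue, ["a@example.com", "b@example.com"])
print(queue.pop())  # -> ('send_welcome', 'a@example.com')
```

Whether such a contract could ever be agreed on across vendors is exactly the standardization problem described above, but the cost of portability here is one thin interface rather than a full virtual machine wrapper.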