ESCAPE/19: How Microservices Enable Multi-Cloud at the Expense of Developers

Published on Dec 19, 2019

I recently had the pleasure of speaking about Kelda at ESCAPE/19 in New York City. It was a fantastic event packed full of sharp folks with interesting perspectives. The talk, How Microservices Enable Multi-Cloud at the Expense of Developers, describes how microservices and CI/CD led to the development of Kubernetes which itself promises to make multi-cloud viable for the first time. However, these advantages are not without costs, particularly for developer productivity. The talk is summarized below.

Cloud Providers Like Vendor Lock-in

While cloud providers typically aren’t explicit about this, multi-cloud isn’t in their interest. The harder a provider makes it for customers to switch to a competitor, the easier those customers are to retain and grow, and the more leverage the provider has in pricing negotiations.

Cloud providers historically have encouraged vendor lock-in by diverging their APIs. Everything from trivial differences in the semantics of the commands required to create a VM, to major differences in features and functionality all serve to make it difficult for users to shift cloud providers once they’ve made their initial choice. And while there have been efforts with varying success to standardize the cloud API, true portability has never been achieved.

Microservices & CI/CD Lead to Standardization

Then, seemingly out of nowhere, two massive trends collided: microservices, the idea that large single-process applications (monoliths) should be broken down into smaller independent pieces, and CI/CD, the idea that those small pieces should be deployed automatically and frequently.

As a result, we now deploy more things to production more frequently, and while it wasn’t necessarily evident at the time, this placed massive pressure on deployment artifacts and processes to standardize. A broken, non-standard deployment process on a monolith is an annoyance. On a microservice application with 30 parts, each changing five times a day, it’s a disaster.

 

Standardization of the Cloud API

As an almost inevitable result, containers emerged to standardize the packaging and life cycle of individual microservices. And then, soon after (and with a little help from Google), Kubernetes emerged to standardize the orchestration of containers. While this wasn’t the original motivating intention of these trends, for the first time since the beginning of the cloud, all cloud providers support a single completely standard, completely portable interface for booting and maintaining software – the Kubernetes API. Having a standard API finally makes it possible to move applications from one cloud to another without significant engineering effort, removing what I believe is one of the last hurdles to the realization of multi-cloud.
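To make the portability claim concrete, here is a minimal, hypothetical Kubernetes Deployment manifest (the names and image are illustrative, not from the talk); the same spec can be applied unchanged to a managed cluster on any provider:

```yaml
# Hypothetical example: this exact manifest deploys unchanged on EKS,
# GKE, AKS, or a self-managed cluster, because all of them expose the
# same Kubernetes API.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # illustrative service name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0  # any OCI image; the packaging is standard too
          ports:
            - containerPort: 8080
```

In principle, pointing `kubectl` at a different cluster (for example via `kubectl config use-context`) is all that changes when moving from one cloud to another.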

Microservices Hurt Developers

Despite its numerous advantages, this new status quo introduces significant complexity to the developer experience. There are many more moving parts, operating on a complex substrate, that developers have to understand to do basic testing. For those interested in DevOps, this may not seem like a heavy lift, but for those of us who specialize in other areas, these issues can add up into a real productivity drag.

At its core, the issue comes down to how dependencies are handled. With a monolith, dependencies are simply software libraries that are compiled into the binary or linked in at runtime. Individual microservices also depend on software libraries, of course, but in addition they depend on other microservices. This vastly complicates local testing for several reasons:

  • The tools required to manage these dependencies (Minikube/Docker Compose) are difficult for many developers to understand.
  • The tools used in local testing are often wildly different from those used in production. This reduces confidence that code which passes tests locally will also behave correctly in production.
  • The CPU/Memory requirements of running even relatively small numbers of microservices on a developer laptop can be prohibitive.
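The dependency problem above can be sketched with a small, hypothetical Docker Compose file for local testing (service names and images are invented for illustration). Testing one service means booting its dependencies, and their dependencies in turn:

```yaml
# Hypothetical local-testing setup: to test `api` alone, a developer
# must also boot `auth` and `db`, since `api` depends on them at runtime.
version: "3.8"
services:
  api:
    image: example/api:dev
    depends_on: [auth, db]
    ports: ["8080:8080"]
  auth:
    image: example/auth:dev
    depends_on: [db]
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: dev-only   # for local testing only
```

With 30 services instead of three, both the configuration a developer must understand and the CPU/memory footprint on their laptop grow accordingly.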

This Problem is Solvable

The good news is the problem is solvable. We’ve found that large teams who have been working with microservices for a long time tend to build a custom tool based on three core principles:

  • Fast & Efficient. The developer experience should be extremely fast. Every second a developer waits for their environment to boot is pure waste.
  • Approachable. Not all developers are DevOps experts, nor should they have to be. Developers of all skill levels should be productive quickly without a lot of hassle.
  • Similar to Production. Developers need confidence that if their code works in test, it will work in production. For this reason, developing in the cloud on an environment specifically designed to be as similar to production as possible is crucial.

With a proper developer experience, organizations can get all of the benefits that microservices promise, without the drag on developer productivity.

Resources

Watch our DockerCon 2020 talk on Docker Compose in the Cloud.

Learn more about how Blimp scales your heavy containers into the cloud.

Try an example to see how Blimp speeds up your developer productivity on Docker.

By: Ethan Jackson