Today, you’d expect the whole experience, from building your first Docker image to running it in a stable, fully managed environment, to be drastically better.

Let’s say that hasn’t improved enough. Yet.

Kubernetes, the go-to container orchestrator, is great but still complicated, even if you are just using (not operating) it. It can do so much, yet most of the time you still want the same few things:

  1. Build an app from source code in a repeatable fashion
  2. Run that app in a fully managed, abstracted (aka serverless) runtime environment
  3. Easily provision a database and inject the credentials into the app
  4. Expose the app’s API to the world
  5. Automatically scale that app
  6. Observe the app via logs, metrics, alerts and traces

SPOILER ALERT


Basically, you want something that works like Heroku. But due to company rules, the hope that running Kubernetes leads to a lower TCO, and/or the fear of lock-in, you just can’t buy Heroku, Elastic Beanstalk, App Engine, or any other proprietary PaaS.

We easily forget: This has all been solved, but our hands are tied.


That got sad pretty quickly. Let’s brighten up.


The next wave of “just run my app” services is coming, and it’s exciting.

It’s serverless, but with containers. It’s easy to use and it might, no, should be more interoperable than Serverless 1.0.

It might even give you the power to use more of those evil but delightfully managed Herokus, while staying portable. Aren’t you hyped for that?

Let’s scale 🧗 the hype ⛰️.

How can you achieve a Heroku-like workflow today?

1. Build some automation on top of Kubernetes

k8s is the Swiss Army knife for running workloads, with the tradeoff of being complex to operate and use. If you are experienced with k8s, you might disagree, but that’s exactly the point: you need a good deal of experience to not shoot yourself in the foot.

For developers, plain k8s exposes a bazillion API objects to deal with. But they just want to deploy a boring web app!

(What is Cloud Run? Read on to find out.)
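To make that concrete, here is a minimal sketch of what deploying a boring web app on plain k8s involves, using the official `kubernetes` Python client. All names and the image reference are placeholders:

```python
# Minimal sketch: the objects behind one boring web app on plain k8s.
# All names and the image reference are placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig, e.g. from minikube

labels = {"app": "boring-web-app"}

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="boring-web-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web",
                    image="registry.example.com/boring-web-app:1.0.0",
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="boring-web-app"),
    spec=client.V1ServiceSpec(
        selector=labels,
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.AppsV1Api().create_namespaced_deployment("default", deployment)
client.CoreV1Api().create_namespaced_service("default", service)
```

And that’s only a Deployment and a Service; an Ingress, TLS, and probably some RBAC are still missing.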

To improve the developer experience, you could build some automation around Git hooks and create some build job templates. Many companies did that. My team did it, too.
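As a sketch of what such automation might look like, a git post-receive hook could rebuild and roll out the app on every push. Everything here, from the paths to the deployment name, is hypothetical:

```python
#!/usr/bin/env python3
# Hypothetical post-receive hook: rebuild the image and roll the
# deployment on every push to master. All paths and names are placeholders.
import subprocess
import sys

GIT_DIR = "/srv/git/boring-web-app.git"
WORK_TREE = "/srv/builds/boring-web-app"
IMAGE = "registry.example.com/boring-web-app"

for line in sys.stdin:  # git feeds "<oldrev> <newrev> <ref>" per updated ref
    _oldrev, newrev, ref = line.split()
    if ref != "refs/heads/master":
        continue
    tag = f"{IMAGE}:{newrev[:7]}"
    subprocess.run(["git", f"--git-dir={GIT_DIR}", f"--work-tree={WORK_TREE}",
                    "checkout", "-f", newrev], check=True)
    subprocess.run(["docker", "build", "-t", tag, WORK_TREE], check=True)
    subprocess.run(["docker", "push", tag], check=True)
    subprocess.run(["kubectl", "set", "image",
                    "deployment/boring-web-app", f"web={tag}"], check=True)
```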

👍🏼 It should be portable as it runs on k8s.¹

👎🏼 You have to maintain that automation.

👎🏼 You need to deal with k8s complexities behind the scenes.

2. Run a serverless framework on top of Kubernetes

You can install frameworks on k8s that simplify the developer experience, and there are quite a few, e.g. Knative or OpenFaaS Cloud.
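With Knative installed, for example, a single Service object replaces the whole Deployment-plus-Service dance from above. A minimal sketch, again via the `kubernetes` Python client, with placeholder names:

```python
# Minimal sketch: one Knative Service gives you deployment,
# routing and autoscaling. Names and image are placeholders.
from kubernetes import client, config

config.load_kube_config()

knative_service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "boring-web-app", "namespace": "default"},
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"image": "registry.example.com/boring-web-app:1.0.0"}
                ]
            }
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.knative.dev",
    version="v1",
    namespace="default",
    plural="services",
    body=knative_service,
)
```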

👍🏼 It should be portable as it runs on k8s.¹

👎🏼 You need to operate the framework, which might be complex on its own.

👎🏼 You need to deal with k8s complexities behind the scenes.

3. Use a managed function product

You could drop k8s and pick up a serverless function offering from a cloud vendor, but that could lock you in. Maybe you can mitigate that with wrappers like the Serverless Framework, but you’d still have to figure out the local development story. (For containers, at least, local development keeps getting easier thanks to awesome tools like VS Code Remote Development.)

👍🏼 Easier to operate, high level of abstraction.

👎🏼 Need to figure out local development.

👎🏼 Potential lock-in.

Is it possible to get the best of both worlds?

What if

  • you could build serverless apps that just run on multiple self-hosted and cloud solutions
  • so that you can decide on the level of operational complexity / control you want to deal with
  • while enjoying the same delightful Heroku developer experience?

With Serverless 2.0, it is

The first generation of serverless cloud products is pretty much proprietary to each vendor. Here’s the interesting part: this will change, and it is already happening! Alex Ellis, founder of OpenFaaS, coined a term for that:

Serverless 2.0 Landscape

In a Serverless 2.0 world, you can build your functions with any of the template systems listed in the diagram, then run them on any of the serving platforms and installable or hosted infrastructure offerings shown.

The vision of Serverless 2.0 is to have

  • a common package format for apps (functions): Docker images
  • a common runtime contract: the Docker container exposes an HTTP endpoint
  • the ability to generate or even manage the provider-specific deployment configuration

In essence, Serverless 2.0 products bring all the neat features from Serverless 1.0 - auto-scaling, scale to zero, abstracted infrastructure, easy provisioning of public endpoints - but for standard Docker containers.
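That runtime contract is small enough to show in full. A minimal sketch of a conforming app in plain Python, using the `$PORT` environment variable convention that Knative and Cloud Run follow (the default port is an assumption):

```python
# Minimal app fulfilling the Serverless 2.0 runtime contract:
# listen for HTTP on the port the platform hands you via $PORT.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello, Serverless 2.0!\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))  # 8080 as a fallback
    HTTPServer(("", port), Handler).serve_forever()
```

Package this into a Docker image, and it can run anywhere that speaks the contract.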

Where do I get Serverless 2.0?

Technically, most serverless frameworks for k8s could be regarded as 2.0 because they schedule containers for your functions.

Interestingly though, the big clouds already have, or are working on, dedicated offerings that fit Serverless 2.0, Google Cloud Run being a prime example.

With the right abstraction, these cloud services can run containers in a portable, Serverless 2.0 fashion, with the added benefit of being fully managed by the vendor.

What abstraction?

There are two main options:

  1. There is a dedicated, standard interface for the Heroku use case. Behind it, multiple provider integrations are available. This is the approach OpenFaaS takes with faas-provider.
  2. A set of k8s API objects, e.g. those of Knative, becomes the de-facto standard. Provider integrations listen on these objects and act accordingly. In practice, you either run a minimal k8s cluster to interact with that API, or vendors implement fake k8s endpoints. The latter approach is used for Cloud Run, which reimplements a part of the Knative API (see the sketch below).
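To illustrate option 2: assuming the Knative Service manifest from the earlier sketch is saved as service.yaml, and assuming your gcloud version ships the `run services replace` command, the very same object can be pushed to fully managed Cloud Run:

```python
# Hedged sketch: deploy the same Knative manifest to Cloud Run.
# Assumes service.yaml holds the manifest from the Knative example
# above, and that your gcloud SDK supports `run services replace`.
import subprocess

subprocess.run(
    ["gcloud", "run", "services", "replace", "service.yaml",
     "--platform", "managed", "--region", "us-central1"],
    check=True,
)
```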

Personally, I favor option 1, because it doesn’t even prescribe k8s, which you might not need or want due to the aforementioned complexity.

The Vision

We’re at the peak of Mount Hype ⛰️ now, so what’s the view like?

In a Serverless 2.0 world, the developer can

  • build portable containers which can be run in a self-hosted or fully-managed, “serverless” environment
  • use the flexibility of containers to install any software needed
  • use simple, portable APIs for standard app deployments
  • forget about the underlying container service, whether it is k8s or something proprietary
  • benefit from cloud services like integrated API management, auto-scaling and container scanning, without sacrificing application portability

Scaling down

That was a nice trip so far, now let’s slowly descend from the hype. What are some of the open issues around Serverless 2.0?

What’s the value in building Docker images and running containers?

Docker (or OCI) images are flexible and portable, but building them “the hard way” (e.g. via a plain Dockerfile) might already be overkill for the simple use cases we are talking about. Most of the time, you just need the language runtime, e.g. Node.js or a JDK, and drop your code in. Oops, that sounds a lot like Serverless 1.0.

Thankfully, there are lots of tools out there that build the image with your code for you, e.g. jib for Java apps or Cloud Native Buildpacks (still in an early phase).
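As a sketch of the Buildpacks route: the pack CLI turns plain source into an image without a Dockerfile. The image name and builder below are placeholders, and Buildpacks were still early at the time of writing, so details may shift:

```python
# Hedged sketch: let Cloud Native Buildpacks turn source into an image.
# Image name and builder are placeholders.
import subprocess

subprocess.run(
    ["pack", "build", "registry.example.com/boring-web-app:1.0.0",
     "--builder", "heroku/buildpacks:18",  # example builder image
     "--path", "."],
    check=True,
)
```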

When we are talking about dependencies like language runtimes, an important point is the update strategy. CVEs pop up daily, so you need an (ideally automated) process to stay up-to-date, at least on minor versions. In general, serverless and PaaS products from the cloud are more likely to solve this for you. With managed Docker base images and the right CI triggers, you can recreate this behavior, but then it is work you have to put in.
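A sketch of such a trigger: compare the base image’s digest in the registry with the one recorded at the last build, and rebuild on change. skopeo is one tool that can inspect a remote image without pulling it; the image and the stored digest here are placeholders:

```python
# Hedged sketch: rebuild when the base image's digest changes upstream.
import json
import subprocess

BASE_IMAGE = "docker.io/library/node:12-slim"  # placeholder base image
LAST_BUILT_DIGEST = "sha256:..."  # persisted from the previous CI run


def remote_digest(image: str) -> str:
    # skopeo queries the registry without pulling the image
    out = subprocess.check_output(["skopeo", "inspect", f"docker://{image}"])
    return json.loads(out)["Digest"]


if remote_digest(BASE_IMAGE) != LAST_BUILT_DIGEST:
    print("Base image updated, triggering a rebuild...")
    # e.g. call your CI system's API here
```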

I think having a common artifact format and runtime environment is a good way to improve interoperability. Convenience features can be implemented on top of this; cloud providers could offer an option like “auto-update my images”.

The other 90% of the Heroku / Serverless 1.0 experience is not covered yet

This post is mostly about compute: pushing code to run an app. We didn’t cover all the other aspects of your application, e.g.

  • How do you easily provision backing services like databases and inject the credentials into your app?
  • How does observability, e.g. logging and tracing, work?
  • Is there an established standard for events?
  • Which baseline API gateway features should be included?

Sooner or later, Serverless 2.0 needs to be turned into a formal specification that defines the scope and details of compliant solutions.

Wrapping up at the Base Camp ⛺

In 2020 and beyond, you want to build and run your containerized apps the Serverless 2.0 way: On- and off-premise, you are looking for container-based serverless solutions. This opens up the potential for more portability between self-hosted and fully managed environments.

With the option to use the new Serverless 2.0 solutions from cloud providers, you can leverage increased development speed and efficiency, while assuring your CTO that you can keep the lock-in risk (migration effort) low.

Proprietary products that support the Serverless 2.0 interface will allow themselves to be abstracted away by other tools like OpenFaaS. To stay up-to-date on what Serverless 2.0 really means, you should keep an eye out for further developments in the Cloud Native Computing Foundation, especially the serverless working group, and individual projects that drive portability forward, like OpenFaaS.

At the Campfire

What are your thoughts on Serverless 2.0? Is it all another attempt to standardize what cannot be standardized without sacrificing too much utility? What trends do you see around containers and serverless?

How do you like this post format where I scale up and down the hype?

Let me know your thoughts on Twitter or LinkedIn.

Big thanks to

“East Oakland, California” by Sharon Hahn Darlin is licensed under CC BY 2.0.


  1. That is, if you believe that moving from Kubernetes A to Kubernetes B is actually a frictionless experience. As I have been reminded, vanilla k8s may not really exist in practice, as each distribution / hosting option comes with its own customizations. In addition, the pace of k8s feature releases can also impact your portability. Remember when RBAC became stable and you had to pick between RBAC and non-RBAC configurations all the time?