TLDR: Run Unikernels on Normal EC2 Instances Right Now.
Those who have not deployed Nanos unikernels to AWS or GCP might not know how the process works. Traditionally, we take your application, say a Go webserver, and create a disk image out of it. That disk image is turned into an AMI (Amazon Machine Image), and from there you can spin it up as an EC2 instance. What this means in practice is that all of the orchestration is done by Amazon - not by you, not by Kubernetes, not by anything like that. This is something a lot of our users and customers really like.
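For concreteness, here is the kind of application we mean - a minimal Go webserver. There is nothing unikernel-specific about the code; it is an ordinary binary of the sort OPS packages into a disk image and uploads as an AMI.

```go
// main.go - a minimal Go webserver of the kind you might hand to OPS.
// Nothing here is unikernel-specific; it is just a normal program that
// gets packaged into a bootable disk image.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a unikernel")
	})
	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```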
However, it is very common to walk into a larger engineering organization - say one with a few hundred engineers - where, even though many people are fully on board and completely understand the benefits of unikernels, they are unable to adopt them because of infrastructure gravity.
Infrastructure software is extremely sticky, and it can be very hard to move from one paradigm to another. I'm sure many of our readers have been involved in projects, or conversations about projects, that proposed moving from one cloud to another, such as AWS to GCP or Azure to AWS. Technically there aren't a lot of challenges involved in doing that; at scale, however, it becomes increasingly harder. Organizational politics can come into play, with disparate teams wishing to provision software in certain ways. Similarly, some people might have special pet software they do not wish to touch or migrate, and that becomes objection fodder. So sometimes the barrier to adopting unikernels has nothing to do with the technology or a lack of benefits - it's just organizational inertia.
Introducing NanoVMs Inception
Today we are taking another step to address this, and it is something many people have requested over the years. NanoVMs Inception allows you to deploy unikernels on top of existing, normal EC2 instances rather than deploying them as EC2 instances. If that is confusing, you probably have not run OPS before, and we'd encourage you to go boot a hello world on your preferred cloud first and then come back to this. We aren't talking about metal instances here or special nested-virtualization instances - these are straight-up, plain, ordinary t2- and t3-style instances. This is not something you could do in the past without taking a massive performance hit: you could not use hardware virtualization and instead had to rely on a bunch of emulation, which is too slow to be usable in production. Remember, one of the selling points of unikernels is that we can actually run many payloads faster than Linux, so anything that kills the performance benefit kills a main reason people like these things to begin with.
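To see why this was historically a problem, you can check for hardware virtualization on a stock instance. The quick sketch below is only an illustration of the constraint, not part of any NanoVMs tooling: on ordinary, non-metal EC2 instances the vmx/svm CPU flags are not exposed and /dev/kvm is typically absent, which is exactly why running VMs there used to mean falling back to emulation.

```go
// check_virt.go - quick illustration: does this machine expose hardware
// virtualization? On plain (non-metal) EC2 instances the answer has
// historically been no, which forced slow emulation for running VMs.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	if _, err := os.Stat("/dev/kvm"); err == nil {
		fmt.Println("/dev/kvm present: KVM acceleration available")
	} else {
		fmt.Println("/dev/kvm missing: no hardware-accelerated KVM here")
	}

	info, err := os.ReadFile("/proc/cpuinfo")
	if err != nil {
		fmt.Println("could not read /proc/cpuinfo:", err)
		return
	}
	if strings.Contains(string(info), "vmx") || strings.Contains(string(info), "svm") {
		fmt.Println("CPU advertises vmx/svm virtualization extensions")
	} else {
		fmt.Println("no vmx/svm flags: nested virtualization is not exposed")
	}
}
```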
Performance Benefits of Unikernels
In some of our recent testing we are seeing upwards of 4X performance increases for applications running under NanoVMs Inception compared to the exact same instance type - a t3.medium vs. a t3.medium - with no emulation and no nested virtualization. You also get to keep your security guarantees as-is. Unlike other unikernel projects out there, we're not here to "take security seriously" - we are here to effect change.
Cost Savings of Unikernels
Not having to rely on metal instance types translates into huge cost savings. While you can get an a1.metal going for $0.40/hr, the cheapest x86 metal instance is a whopping $3.80/hr - at those prices you might as well start racking servers yourself (not an option for many companies that need or want on-demand compute).
But wait - there's more!
One of the benefits of unikernels several people have started to notice is that their boot times can be incredibly short - measured in tens of milliseconds, depending on the application. While a lot of servers will be 'always on', those who wish to build platforms on top of unikernels might want to use this transient functionality. Being able to pause and resume workloads through Firecracker running unikernel guests is a very interesting benefit, especially if you want to, say, spin up Google Chrome to run AI agent workloads in a fast, secure, transient manner for many tenants.
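To make the pause/resume idea concrete, here is a rough sketch of driving a running Firecracker microVM over its API socket from Go: pause the guest, write a full snapshot, and resume it. The socket path and output file names are assumptions for the example; check the Firecracker API documentation for the exact behavior of your version.

```go
// fc_snapshot.go - a rough sketch of pausing a Firecracker microVM and
// taking a full snapshot over its API socket. The socket and file paths
// below are assumptions for the example.
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"net/http"
	"strings"
)

const apiSock = "/tmp/firecracker.socket" // assumed --api-sock path

func fcClient() *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			// Firecracker listens on a unix socket, so dial that instead of TCP.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", apiSock)
			},
		},
	}
}

func call(c *http.Client, method, path, body string) error {
	req, err := http.NewRequest(method, "http://localhost"+path, strings.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := c.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("%s %s: unexpected status %s", method, path, resp.Status)
	}
	return nil
}

func main() {
	c := fcClient()

	// Pause the running microVM (required before snapshotting).
	if err := call(c, http.MethodPatch, "/vm", `{"state": "Paused"}`); err != nil {
		log.Fatal(err)
	}

	// Take a full snapshot of guest state and memory.
	snap := `{"snapshot_type": "Full", "snapshot_path": "/srv/vm.snap", "mem_file_path": "/srv/vm.mem"}`
	if err := call(c, http.MethodPut, "/snapshot/create", snap); err != nil {
		log.Fatal(err)
	}

	// Resume the guest so it keeps serving.
	if err := call(c, http.MethodPatch, "/vm", `{"state": "Resumed"}`); err != nil {
		log.Fatal(err)
	}
	log.Println("snapshot written, guest resumed")
}
```

Loading that snapshot later into a fresh Firecracker process (via `PUT /snapshot/load`) restores the guest where it left off, which is the basis for the no-cold-start behavior discussed below.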
In the past, running unikernels on AWS or GCP typically meant giving up the no-cold-start benefit. Now that you can run unikernels on top of normal EC2 instances, you can have your cake and eat it too.
Traditionally, if you wanted to use something like Firecracker for virtio-vsock or virtio-vhost operations with unikernels, you needed your own servers. Now, with NanoVMs Inception, you can simply turn on an ordinary instance and immediately start running Firecracker-based workloads with very little performance degradation, all while getting extreme isolation and no cold starts.
It's Time To Build Your Own Unikernel Cloud
I really can't overstate how big of a deal NanoVMs Inception is. I encourage you all to try it out immediately - especially if you were one of those that specifically needed this functionality in the past.
NanoVMs is the fastest, safest way to run production workloads, and NanoVMs Inception opens the door to a whole new universe: building your own cloud in the cloud.
Note: If you are on GCP and would like to try it there, please get in touch with us.
Ad Nubibus!