To Meet New Computing At The Edge Requirements, Try Unikernels

10/21/18

Daniel Dern

Supporting your organization's computing requirements "at the edge" -- away from the corporate office or data center -- has always faced challenges. "Big data" generated by IoT devices and other sources can often overwhelm even broadband connections to distant cloud centers -- and many "edge sites" have little or no connectivity. At the same time, trying to analyze big data locally can overwhelm typical edge devices. That's a problem wherever timely alerts, decisions and responses depend on quick data crunching.

One technology that can turn this around is the "unikernel" -- a small, secure, purpose-built virtual machine that can host legacy and cloud-oriented VM, container and microservice applications in a fraction of a traditional VM's or container's footprint.

Here's a quick look at how and why unikernel software can help with your edge computing deployments.

WHO'S COMPUTING AT THE EDGE

"Edge computing" refers to processing being done remotely, rather than (or before) being done centrally (at your data centers or third-party clouds).

This can include computing done on servers, on computers as small as a Raspberry Pi running a full operating system, on IoT devices, and even on remote routers (e.g., ones running Cisco's IOx).

Where traditional remote/field locations might have had a few, or a few dozen, devices -- Point-of-Sale systems, remote cameras and other security peripherals, and the like -- today's edge environments may span dozens, hundreds, even thousands of locations, each with hundreds or thousands of data-generating IoT devices. Collectively, simple sensor readings, video, and other instrumentation may be generating gigabytes per minute -- including data that requires near-instant analysis to drive similarly near-instant responses.

For example, stores selling food or other items that need refrigeration or freezing want to know about failures before food goes bad and must be thrown out -- which, in a large chain, can all too easily add up to $10 million or more per year.

For a single local store, a wireless sensor that can alert your phone might do the trick. But a nationwide chain may have hundreds or thousands of stores, each with dozens to hundreds of IoT-enabled coolers.
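As a rough illustration of how small and single-purpose such a workload can be, here is a minimal Go sketch of a cooler monitor that polls a temperature sensor and fires an alert when a threshold is crossed. The sensor endpoint, threshold, and alert webhook are hypothetical placeholders, not details of any product mentioned in this article.

```go
// coolermon: a single-purpose temperature monitor of the kind that could be
// packaged as a unikernel. The sensor endpoint, threshold, and alert webhook
// below are illustrative placeholders, not real services.
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"time"
)

const (
	sensorURL  = "http://192.168.1.50/temp"  // hypothetical IoT cooler sensor
	alertURL   = "https://example.com/alert" // hypothetical alerting webhook
	maxTempC   = 4.0                         // alert above this temperature (Celsius)
	pollPeriod = 30 * time.Second
)

type reading struct {
	Celsius float64 `json:"celsius"`
}

func main() {
	for {
		checkOnce()
		time.Sleep(pollPeriod)
	}
}

func checkOnce() {
	resp, err := http.Get(sensorURL)
	if err != nil {
		log.Printf("sensor poll failed: %v", err)
		return
	}
	defer resp.Body.Close()

	var r reading
	if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
		log.Printf("bad sensor reading: %v", err)
		return
	}
	if r.Celsius <= maxTempC {
		return // cooler is fine; nothing to do
	}

	// Fire an alert; a real deployment would add retries and authentication.
	body, _ := json.Marshal(map[string]float64{"celsius": r.Celsius})
	if alertResp, err := http.Post(alertURL, "application/json", bytes.NewReader(body)); err == nil {
		alertResp.Body.Close()
	} else {
		log.Printf("alert failed: %v", err)
	}
}
```

Multiply a loop like this by thousands of coolers per region and the appeal of running each monitor in the smallest, cheapest possible footprint becomes obvious.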

Educational institutions are another increasingly sensor- and data-rich environment. Today's schools are wired up inside and out with environmental and security sensors, cameras, and locks. A small town may have only a few schools, but a major metropolitan city or a county-wide school district has hundreds of schools to monitor. Machine learning applications for video can, for example, distinguish between a water pistol and a real gun -- but aggregating all the feeds to the cloud may inadvertently de-prioritize the one view that needs an instant response.

Edge computing may also be critical for a single site.

For example, an offshore oil rig can generate several terabytes of data each day and relies on timely analysis of that data to spot threshold events, issue alerts and trigger other quick responses. Because Internet connectivity at sea is often inadequate or non-existent, the crunching must be done locally.

Similarly, a self-driving car can't wait even ten milliseconds for the cloud to analyze its surroundings; it has to decide quickly, on board.

All these edge scenarios share common computing problems and concerns: running on size- and power-constrained hardware, crunching data quickly enough to produce actionable results, and staying secure against external hacking threats.

UNIKERNELS TO THE EDGE COMPUTING RESCUE

One software approach that addresses the performance, security and administrative requirements of these scenarios is unikernel-based software architecture.

Traditional operating systems like Microsoft Windows and most Linux distros include tens of gigabytes of drivers and libraries. (For example, the Windows directory on the desktop PC I'm writing this article on is 25 gigabytes and contains over 125,000 files.) Adding more bulk, OSs are bundled with hundreds of utilities and user applications, from text editors, SSH (secure shell), and web servers to games like Nethack, Solitaire and Flight Simulator. This means they use up a lot of storage and RAM, take a while to boot up, and are full of potential 'attack surfaces.'

That's like using a Swiss Army Knife when all you need is a can opener... or an easily hot-wireable deli truck instead of a small drone to deliver a pre-made sandwich.

According to Russell Pavlicek's Unikernels report from O'Reilly, "Virtual machines loaded with full operating systems and thousands of utilities don't make sense in the cloud. They waste resources and provide a wide attack surface with a target-rich environment, as demonstrated by massive data breaches in the past few years."

If the application's needs and target hardware environment are known, most of that traditional OS's "stuff" isn't needed. For example, says Ian Eyberg, CEO of NanoVMs, "You don't need support for 30 different Ethernet adapters that won't be there, or 28 audio drivers you won't listen to. You don't need USB. Your application only needs specific capabilities. You need a clock, you need to talk to the network, to the file system, et cetera."

An executable image consisting of just the needed OS libraries, the application proper, and any configuration data is called a "unikernel." (Possibly as in, "a kernel with a single purpose.")
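To make the idea concrete, here is a minimal sketch: an ordinary single-purpose Go web service of the sort a unikernel build tool would combine with only the library-OS pieces it needs into a bootable image. The build step mentioned in the comments refers to NanoVMs' OPS tool, but the exact command is illustrative rather than a verified recipe.

```go
// hello: a one-purpose web service of the sort typically packaged as a unikernel.
// Build it to a binary, then hand that binary to a unikernel builder such as
// NanoVMs' OPS (roughly: `ops run ./hello`); the exact tooling and flags vary
// by project and are shown here only as an illustration.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from the edge")
	})
	// The resulting unikernel image holds this program, the library-OS pieces
	// it actually uses (network stack, clock, file system), and its
	// configuration -- and nothing else.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The point is that nothing else rides along: no shell, no SSH daemon, no unused drivers -- just the application plus the library-OS functions it actually calls.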

According to Unikernel.org, "Unikernels are specialized, single-address-space machine images constructed by using library operating systems." Wikipedia refers to unikernels as "sealed, fixed-purpose images."

Functionally, says NanoVMs' Ian Eyberg, "A unikernel is simply an application that has been boiled down to a small, secure, lightweight virtual machine."

This means that a unikernel version of an application will be significantly smaller than a traditional VM version of that application.

"Unikernels use only the OS resources necessary make their applications work," states Russell Pavlicek's in his Unikernels report. "Because these single-address-space machine images introduce low-level OS operations at compile time, they typically measure just kilobytes in size, with tiny attack surfaces."

The first such systems were Exokernel and Nemesis in the late 1990s. Today, there are roughly a dozen unikernel initiatives, including NanoVMs.

For edge computing, unikernels solve many of the challenges and constraints of traditional VMs. They run on much smaller hardware -- or you can run many more instances and other unikernels on the same hardware. And they're much more secure, because they don't include SSH access or other common attack vectors.

Unikernels also boot faster than a regular VM. NEC, for example, has reported boot times as low as five milliseconds for its LightVM unikernels, versus 200 to 20,000 milliseconds for Docker containers and conventional VMs (see http://sysml.neclab.eu/projects/lightvm).

Unsurprisingly, unikernel applications also run faster, says NanoVMs' Eyberg. "The reason is that the unikernel is a single process system. So there is less context switching between both kernel and user."

Even a small Amazon EC2 (Elastic Compute Cloud) instance on Amazon Web Services already has 100+ programs running in it as soon as it boots -- before a user installs any of their actual applications, Eyberg points out. "It's an illusion that the operating system is running all of these programs concurrently. It has to constantly switch back and forth, which adds time. With one program there is none of that context switching, so the unikernel version runs much, much faster."

As you can see, unikernels can address the security, performance and cost requirements of these increasingly big and important edge computing tasks.

(Daniel P. Dern is a freelance technology and business writer.)
