Ruby and Python are both popular interpreted languages. For many years, developers who wished to deploy Django, Rails, and other applications written in these frameworks have routinely relied on front-end proxies such as Nginx and HAProxy, usually paired with a load balancer such as an ELB.
While we've been able to run both Ruby and Python as unikernels for some time now, you needed to keep the proxy/load balancer on a separate instance and each app server on its own instance, because Nanos is deliberately single-process.
In the normal setup you might have Nginx in front and three or four Ruby interpreters behind it, all on the same instance. I've seen people do this even on a t2.small, which has just one vCPU, meaning it's actually wasteful.
People often wonder why languages like this are 'slow'. The reason is that their default implementations are almost universally single-process and effectively single-threaded: CPython has its GIL and MRI Ruby its GVL, so only one thread runs interpreter bytecode at a time. Keep in mind both Python and Ruby date from the early 90s, and we didn't have commodity SMP-enabled machines back then. Both have alternatives such as JRuby and Jython and a half-dozen other choices, but we're talking about the most commonly available default install here.
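You can see the GIL's effect directly: splitting a CPU-bound loop across two threads buys you essentially nothing in stock CPython, because only one thread executes bytecode at a time. A quick sketch (timings will vary by machine):

```python
import threading
import time

N = 2_000_000

def count(n):
    # Pure-Python busy loop. Under the GIL, only one thread
    # can execute this bytecode at any given moment.
    while n > 0:
        n -= 1

# Run the work twice sequentially.
start = time.perf_counter()
count(N)
count(N)
sequential = time.perf_counter() - start

# Run the same total work in two threads.
start = time.perf_counter()
t1 = threading.Thread(target=count, args=(N,))
t2 = threading.Thread(target=count, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.perf_counter() - start

# Despite two threads, the threaded run is not ~2x faster;
# it's typically about the same (or worse, from contention).
print(f"sequential: {sequential:.2f}s  threaded: {threaded:.2f}s")
```

This is why scaling these runtimes has traditionally meant more processes, not more threads.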
So what does this have to do with unikernels? Unikernels are inherently single-process systems with no support for running other programs. This gets rid of all sorts of performance and security issues that come with running many programs on the same instance, but if you aren't running something like the JVM, Go, Rust, or another language with actual threading, then you have to scale out horizontally a lot sooner than normal.
Well, not anymore.
There's a talented hacker we ran into who made a handful of repos that embed these interpreters into Nginx itself, for languages like PHP, Python, and Ruby. We initially packaged up the PHP repo so we could start running payloads such as WordPress in an easier format, but then we did a little bit of work and upgraded the Python and Ruby packages, which were stuck on their 2.x versions, to their 3.x equivalents.
For those of you who know of the company Kong or the framework OpenResty, this is a very similar approach, except you don't have to stop at plugins - you can load your entire codebase this way. I think this is really cool and would like to see more of it in the future. This practice gives these languages a lot more breathing room and will probably reduce your cloud bill.
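To give a feel for the shape of this, here's an OpenResty-style sketch of what an embedded handler looks like. The directive names below are illustrative only - they follow OpenResty's `content_by_lua_block` convention, and the actual ngx_ruby/ngx_python modules define their own equivalents:

```nginx
# Illustrative config sketch -- not the modules' literal syntax.
http {
    server {
        listen 8080;

        location /content_by_ruby {
            # The interpreter runs inside the nginx worker itself:
            # no FastCGI socket, no separate app-server process.
            content_by_ruby_block {
                # your application code goes here
            }
        }
    }
}
```

The key point is that the request never leaves the worker process to reach your application code.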
These aren't just faster than the language's built-in webserver; they're also faster than the traditional front-end approach, since you skip a hop in between. Want to try it out?
Run Ruby:
➜ r ops pkg load ngx_ruby_0.0.1 -p 8080
booting /Users/eyberg/.ops/images/nginx ...
en1: assigned 10.0.2.15
➜ ~ curl -XGET http://127.0.0.1:8080/content_by_ruby
Hello, Ngx_ruby. Current Time : 2021-04-14 22:47:39.825399269 +0000%
➜ ~ curl -XGET http://127.0.0.1:8080/content_by_ruby
Hello, Ngx_ruby. Current Time : 2021-04-14 22:47:42.827897918 +0000%
Run Python:
➜ r ops pkg load ngx_python_0.0.1 -p 8080
booting /Users/eyberg/.ops/images/nginx ...
en1: assigned 10.0.2.15
➜ ~ curl -XGET http://127.0.0.1:8080/content_by_python
Hello, Ngx_Python at Wed Apr 14 22:48:58 2021
[1, 2, 3, 4, 5]
%
➜ ~ curl -XGET http://127.0.0.1:8080/content_by_python
Hello, Ngx_Python at Wed Apr 14 22:48:59 2021
[1, 2, 3, 4, 5]
%
Give it a spin and let us know how things go. We are definitely interested in accelerating this design.
What's in your nginx?