The first example that came to mind for me is a network service. Traditionally, the kernel logic handling sending/receiving packets and the application run in different security contexts, and there is a cost to hopping back and forth. A large amount of work over the years has gone into optimizing the data exchange for this use case, but a unikernel eliminates the problem entirely (while also being more minimal than any stripped-down conventional OS could hope to be).
You can also come up with more benefits by thinking along these lines. For example: the kernel doesn't need to keep track of which packets go to which process, because there is only one process (the kernel itself); that alone could be a performance benefit.
> more minimal
Which carries other benefits: boot time, memory overhead, etc. You could probably treat IncludeOS VMs like containers.
It’s mainly for use in stronger isolation (i.e., VMs instead of just containers). In a container, the kernel is already up and the application just has to start. In a VM, that’s not the case. By making the application “the kernel”, very fast startup times are possible.
And how much of that is from the OS? It seems ridiculous to optimize something that is already so optimized rather than just optimizing your services. Maybe you don't need a dozen VMs and containers to do the job of one server. Maybe you should use efficient algorithms and tools.
If you're using runtime languages on the backend (node, python), then you're already failing the environment. An efficient compiled language can perform the same functions much more efficiently.
> If you're using runtime languages on the backend (node, python) then you're already failing the environment.
Not necessarily. If you make a living running a typical CRUD software-as-a-service offering, your code may really just be glue mapping HTTP requests to database requests. There, your choice of language doesn't count for much; your code isn't doing heavy lifting, the DBMS is.
It's quite possible your hardware requirements could be lower by writing in Python and tuning your database, rather than spending the same time writing non-performance-critical code in C++.
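To illustrate the "glue" point, here's a minimal sketch of what such a CRUD handler often amounts to (the handler and table names are hypothetical, and an in-memory SQLite database stands in for the real DBMS):

```python
import sqlite3

def handle_get_user(db, user_id):
    """Map an HTTP-style request parameter to a database query.

    The heavy lifting is the SELECT the DBMS executes; the Python
    layer just shuttles parameters in and rows back out.
    """
    row = db.execute(
        "SELECT id, name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    if row is None:
        return {"status": 404}
    return {"status": 200, "body": {"id": row[0], "name": row[1]}}

# In-memory database standing in for the real DBMS.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users VALUES (1, 'alice')")

print(handle_get_user(db, 1))  # {'status': 200, 'body': {'id': 1, 'name': 'alice'}}
print(handle_get_user(db, 2))  # {'status': 404}
```

Rewriting that handler in C++ wouldn't make the SELECT any faster; the time goes to the database, not the glue.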
Frameworks like Tornado, for Python, are able to handle highly concurrent workloads quite efficiently, despite using a single-threaded, interpreted language.
For computationally intensive code, sure, a language like Python will need far more hardware horsepower to get the job done than C++ would.
>If you're using runtime languages on the backend (node, python) then you're already failing the environment. An efficient compiled language can perform the same functions much more efficiently.
This post is about an optimized environment for C++ services though - not node or python services.
As the scale of data infrastructure rapidly grows, the carbon footprint of that infrastructure grows as well. It is already non-trivial. Consequently, the widespread use of excessively wasteful software implementations that require several times the hardware infrastructure of a more efficient design is becoming a material contributor to total carbon emissions, more so than many of the things we focus on for the sake of climate change.
Unlike some other methods for reducing carbon emissions, which require subsidies to be competitive, massively reducing server infrastructure footprint often improves the absolute economics through radical reductions in OpEx/CapEx for data intensive businesses.
I'm skeptical that you couldn't get similar reductions in carbon footprint through investing the money you gain by delivering early on renewable energy infrastructure and carbon sequestering charities. Of course, how that cost-benefit analysis works out depends entirely on required engineering hours and the growth of your business, which are both notoriously difficult to predict.
That said, if your goal is reducing expenditures, reducing your carbon footprint is a great cherry on top.
I think people confuse two important things.
1- Energy consumption of the lower stack versus total of a single implementation (could possibly be very low)
2- Energy consumption of the lower stack multiplied by everywhere it is deployed
1- Yes, optimizing to lower this consumption can definitely be an ill-advised endeavor economically for an app developer (you're saving 0.1-10% of your cost... by adding a huge investment of time, possibly larger than the application development itself)
2- Optimizing across all deployments can definitely move the needle in a very significant way relative to the effort required, since you're automatically deployed in 1,000-1,000,000 places.
It's the same reason library developers (especially of system, language, and heavily-used libraries) have a very good economic reason (and therefore an ecological one as well) to optimize. Competition here is a huge benefit to everyone, including the environment.
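The multiplication in (2) is easy to make concrete with back-of-the-envelope numbers (all figures below are made up purely for illustration):

```python
# Hypothetical figures, chosen only to show the scaling effect.
watts_saved_per_instance = 2.0   # a tiny per-deployment optimization
deployments = 1_000_000          # a widely deployed library/runtime
hours_per_year = 24 * 365

kwh_saved_per_year = (
    watts_saved_per_instance * deployments * hours_per_year / 1000
)
print(f"{kwh_saved_per_year:,.0f} kWh/year")  # 17,520,000 kWh/year
```

A saving too small to justify effort in any single app becomes millions of kilowatt-hours when it lands in every deployment.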
> IncludeOS is a minimal unikernel operating system for C++ services running in the cloud and on real hardware.