Why we needed a custom container engine and how easy it is to build one

By the end of 2015 I started coding the TreeScale networking technology from a plain idea sketch. At that time I didn't know that much about Docker or containers, but Docker itself was part of the idea that TreeScale was built for.

TreeScale's core technology is built to provide a scalable event system by default, for executing distributed applications using real-time events and messaging: https://github.com/treescale/treescale

After having the core networking technology implemented, to close the application execution cycle we obviously needed something like a container, which would execute a provided micro-function in an isolated environment. But after trying Docker integrations, especially after implementing a free private Docker Registry, we saw how uncomfortable it is to work with the Docker daemon, which by Docker's design is the single place to start, stop, or get information about containers.

TreeScale networking engine principle

Problems with Docker Daemon

By design Docker runs everything through the daemon server, and any client has to interact with it. That principle works just fine if you are working with non-real-time systems, or you don't have performance-critical applications, but in all other cases having one point of failure that can kill all your containers seems very scary!

All containers are attached to the Docker daemon, and even getting their output or statistical information means interacting with the daemon, which adds significant overhead once you have 10+ containers running under the same daemon.
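To illustrate that round-trip (this is my own sketch, not TreeScale code), even a simple "list containers" query has to travel through the daemon's unix socket via the Docker Engine API. Here is a minimal, stdlib-only Go example; the socket path is the daemon's default:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

// listContainers asks the Docker daemon for its container list over
// /var/run/docker.sock. Every query -- logs, stats, inspect -- has to
// go through this single socket, which is exactly the bottleneck.
func listContainers() string {
	client := &http.Client{Transport: &http.Transport{
		// Dial the daemon's unix socket instead of a TCP address.
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return net.Dial("unix", "/var/run/docker.sock")
		},
	}}
	resp, err := client.Get("http://docker/containers/json")
	if err != nil {
		// No daemon means no answer at all -- the single point of failure.
		return "cannot reach Docker daemon: " + err.Error()
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	fmt.Println(listContainers())
}
```

If the daemon is down, every one of your containers becomes invisible to this query at once.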

So for our case we needed independently running containers on each TreeScale node connected to our tree network. And because we needed that to work programmatically, Docker was not an option to go with.

Libcontainer and Open Container Project

The Open Container Project is some sort of regulation over Docker's monopoly :) which is good! As an open source community they provide tools and libraries for working with containers without even thinking about using Docker CE.

Libcontainer is a great option for integrating containers inside an application programmatically; it was initially developed by Docker's open source community and is now part of the runc tool.

It is super easy to get started with your own container engine using the libcontainer library: you only need a root filesystem directory and an executable file to run a container out of it. That's the simplest way to understand how containers work.
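To make that concrete, here is a stdlib-only Go sketch (my illustration of the underlying mechanics, not libcontainer's API) of what "a root directory plus an executable" means: chroot the process into the rootfs and give it fresh namespaces. The paths are hypothetical, and actually starting the command needs root privileges and a populated rootfs:

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

// containedCmd builds (but does not start) a command that would run
// chrooted into rootfs with fresh UTS, PID and mount namespaces --
// roughly the isolation that libcontainer automates on top of this.
func containedCmd(rootfs string, args ...string) *exec.Cmd {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Chroot: rootfs, // the "root directory"
		Cloneflags: syscall.CLONE_NEWUTS |
			syscall.CLONE_NEWPID |
			syscall.CLONE_NEWNS,
	}
	return cmd
}

func main() {
	// Running this for real requires root and a rootfs at the path below.
	cmd := containedCmd("/var/lib/container/rootfs", "/bin/sh")
	_ = cmd // cmd.Run() would launch the isolated shell
}
```

Libcontainer adds the rest on top of this kernel plumbing: cgroups, capabilities, mount setup, and a proper init process.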

Making basic container engine

In most cases building a container engine like Docker CE comes down to easy-to-use API implementations. The core of it is libcontainer and libnetwork, so after running this basic example you will see how easy it is to build a custom container engine, which can give you better application integrations if you need them.

package main

import (
	"log"
	"os"
	"runtime"

	"github.com/opencontainers/runc/libcontainer"
	"github.com/opencontainers/runc/libcontainer/configs"
	_ "github.com/opencontainers/runc/libcontainer/nsenter"
)

func init() {
	// When re-executed as the container's init process, hand control to libcontainer.
	if len(os.Args) > 1 && os.Args[1] == "init" {
		runtime.GOMAXPROCS(1)
		runtime.LockOSThread()
		factory, _ := libcontainer.New("")
		if err := factory.StartInitialization(); err != nil {
			log.Fatal(err)
		}
	}
}

func main() {
	factory, err := libcontainer.New("/var/lib/container",
		libcontainer.Cgroupfs, libcontainer.InitArgs(os.Args[0], "init"))
	if err != nil {
		log.Fatal(err)
	}
	// The full configs.Config (rootfs, namespaces, mounts, capabilities)
	// is omitted for brevity; see the libcontainer README for a complete one.
	config := &configs.Config{ /* ... */ }
	container, err := factory.Create("demo", config)
	if err != nil {
		log.Fatal(err)
	}
	process := &libcontainer.Process{
		Args:   []string{"/bin/bash"},
		Env:    []string{"PATH=/bin"},
		User:   "daemon",
		Stdin:  os.Stdin,
		Stdout: os.Stdout,
		Stderr: os.Stderr,
	}
	if err := container.Run(process); err != nil {
		container.Destroy()
		log.Fatal(err)
	}
	// wait for the process to finish.
	if _, err := process.Wait(); err != nil {
		log.Fatal(err)
	}
	// destroy the container.
	container.Destroy()
}
This code will start a container under the /var/lib/container directory and open a /bin/bash shell using the current stdin, stdout and stderr.

Using this type of implementation we've built a fully distributed container engine with TreeScale networking on top of it.

The TreeScale Container Engine is going to be open source in the near future, but for now we are solving real-time application issues for large enterprises who don't feel comfortable with Docker's non-scalable engine design. And don't get me wrong: Kubernetes, Docker Swarm, Rancher and other tools build container clusters and do their job really well. But the TreeScale Container Engine is not competing with them; our main focus is not orchestration but real-time execution, very similar to AWS Lambda, but more efficient and with less latency.