Function as a Service (FaaS) and serverless computing have gained increasing traction over the last few years. Especially in combined edge and cloud scenarios, a serverless approach can be attractive since it can ensure low latency and high computing efficiency. Contrary to what the name suggests, serverless computing does use actual servers. In contrast to the traditional paradigm, however, the developer no longer needs to care about the servers themselves, because the serverless platform takes care of all the underlying infrastructure: deployment, scaling, providing an interface, and so on. The developer only needs to provide the actual code for their function.
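To make this concrete, the following is a minimal sketch of everything a developer would supply to such a platform. The handler name and signature follow the OpenFaaS Python template convention (`def handle(req)`); other platforms use similar but not identical entry points, so the details here are illustrative assumptions.

```python
import json

# Minimal FaaS handler sketch: the developer writes only this function.
# The platform containerizes it, deploys it, and routes requests to it.
def handle(req: str) -> str:
    """Return a greeting for the given JSON payload."""
    data = json.loads(req) if req else {}
    name = data.get("name", "world")
    return json.dumps({"message": f"Hello, {name}!"})
```

Everything else in the paragraph above (containerization, deployment, scaling, exposing an endpoint) is handled by the serverless platform.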
Current serverless computing relies on container runtime engines. Docker, containerd, CRI-O and others enable the concept of containerization: a light-weight and fast way of virtualization. Applications are packaged into containers with all their dependencies and libraries and can be deployed anywhere. In contrast to the virtual machine paradigm, containers run on top of a runtime engine rather than a hypervisor. They do not contain a guest operating system but share the host's kernel instead. This makes containers comparably more light-weight and faster to deploy.
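The packaging step described above can be sketched as a container image definition. This is a generic example, not taken from any particular ICOS component; the application file and dependency list are hypothetical:

```dockerfile
# Hypothetical image: bundle a Python application with its dependencies
# so the same artifact runs anywhere a container runtime is available.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
CMD ["python", "app.py"]
```

Because the image carries the application together with its libraries, only the host kernel and a runtime engine are required on the target machine.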
Most serverless platforms rely on a container orchestrator, such as Kubernetes, on top of the container runtime engine. Container orchestrators are used to manage large clusters of containers, including their life-cycles, storage and networking. These clusters are equipped with a control plane that manages the deployment process, the scaling of the containers, and the networking. The orchestrator also collects insights and metrics from the cluster and takes actions based on them to achieve or maintain the desired state. The nodes execute the containers scheduled to them by the control plane. Kubernetes is the most prominent container orchestrator; it was originally developed and open-sourced by Google and, in its base form, is aimed at homogeneous cloud server environments. To meet the requirements of less powerful and more heterogeneous devices, light-weight distributions have been derived, such as k3s, k0s, MicroShift and KubeEdge. These distributions provide the same core functionality as Kubernetes, but drop resource-heavy, cloud-specific components, such as certain API connectors.
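The desired-state model mentioned above can be illustrated with a minimal Kubernetes Deployment manifest: the developer declares how many replicas should exist, and the control plane continuously reconciles the cluster towards that declaration. The names and the image reference here are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-function          # hypothetical workload name
spec:
  replicas: 3                     # desired state: keep three pods running
  selector:
    matchLabels:
      app: example-function
  template:
    metadata:
      labels:
        app: example-function
    spec:
      containers:
      - name: function
        image: registry.example.com/function:1.0   # hypothetical image
```

If a node fails and a pod disappears, the control plane detects the deviation from the declared replica count and schedules a replacement on another node.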
FaaS and BaaS (Backend as a Service) are subsets of the serverless computing model. FaaS allows the developer to focus entirely on writing their own code: the functions and nothing else. With BaaS, developers consume ready-made third-party services, for example those offered by a cloud provider, without having to implement or operate that backend logic themselves. Serverless functions should be stateless and single-purpose, and are supposed to be simple and run for short periods of time. To build more complex applications, multiple functions can be chained together, either through client-side chaining, which leaves full control to the developer, or through server-side chaining, which reduces the latency overhead introduced by the additional invocations. To run a function, it needs to be containerized and deployed, which is done by the serverless platform. The serverless platform also takes care of execution: a function is deployed and executed when it is triggered and de-allocated as soon as it terminates.
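Client-side chaining of stateless functions can be sketched as follows. In a real deployment each invocation would be an HTTP request to the platform's gateway; here the invocations are simulated with plain callables so the example stays self-contained, and the function names and payload fields are hypothetical:

```python
import json
from typing import Callable

def resize_image(req: str) -> str:
    """First stateless, single-purpose function: annotate the payload
    as if an image had been resized."""
    data = json.loads(req)
    data["resized"] = True
    return json.dumps(data)

def add_watermark(req: str) -> str:
    """Second stateless function: annotate the payload as watermarked."""
    data = json.loads(req)
    data["watermarked"] = True
    return json.dumps(data)

def invoke_chain(payload: str, chain: list[Callable[[str], str]]) -> str:
    """Client-side chaining: the client calls each function in turn and
    forwards every response as the next request."""
    for fn in chain:
        payload = fn(payload)
    return payload

result = invoke_chain('{"image": "photo.jpg"}', [resize_image, add_watermark])
```

The trade-off stated above is visible here: the client controls the whole sequence, but pays one round trip per function, which server-side chaining would avoid.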
The advantage of the serverless platform is that it can scale up and down quickly to match the actual incoming traffic, even to zero if the application is not triggered at all. This reduces costs for customers, who are only charged when the application is triggered and pay nothing during idle time. The disadvantage of serverless functions is the so-called cold-start problem. Since functions are usually de-allocated right after execution finishes, new containers need to be deployed before the actual functionality can start executing, which can introduce lags. With that in mind, FaaS provides advantages for unknown and heterogeneous incoming traffic, but where the traffic is constant and known, conventional deployments of virtual machines or containers can be more advantageous. We expect that the FaaS approach is likely to complement the traditional container deployments in the ICOS project, and beyond.
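The cold-start effect can be captured with a toy latency model: a request served by an idle warm container pays only the execution time, while a request that must first trigger a container deployment pays the startup cost on top. The millisecond figures are illustrative assumptions, not measurements:

```python
# Toy cold-start model with assumed, illustrative timings.
COLD_START_MS = 800   # assumed time to deploy and boot a new container
EXEC_MS = 50          # assumed function execution time

def request_latency(warm_containers: int) -> int:
    """Latency of one request given the number of idle warm containers."""
    if warm_containers > 0:
        return EXEC_MS                  # warm start: execute immediately
    return COLD_START_MS + EXEC_MS      # cold start: deploy first, then run
```

Under these assumptions a cold start is more than an order of magnitude slower than a warm one, which is why steady, predictable traffic often favours an always-on deployment instead.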
This project has received funding from the European Union’s HORIZON research and innovation programme under grant agreement No 101070177.