Load Balancer "as a Service"

Ensuring the greatest possible availability of an online service requires measures at various levels. Redundancy – a built-in "plan B", so to speak – plays a key role here. Instead of engineering everything yourself, you can use our new load balancer service with immediate effect to build a sophisticated setup that keeps your online service continuously available by means of redundancy.

From fail-over to load balancing

We have always endeavored to maximize the availability of our infrastructure and thus to ensure interruption-free operation of your virtual servers. Failures are, however, still possible, and there are also planned interruptions, e.g. when you update the software you use. You already have the option of Floating IPs as a mechanism for keeping your service available from your users' perspective: the IP address that users connect to can be moved, either automatically or manually, from one virtual server to another, thus ensuring that requests can still be processed while the original server is offline.

Our new load balancer offering goes even further. Unlike a plain Floating IP, it does not merely divert incoming traffic from one server to another: the load balancer continuously distributes incoming connections – and thus the computing load – across two or more virtual servers. Health checks regularly verify the state of the virtual servers, and if one of them does not respond as expected, it is taken out of rotation and incoming traffic is distributed among the remaining correctly functioning servers. Also unlike a Floating IP, it is possible to configure separate sets of servers for processing requests on different TCP ports.
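The core behavior described above – round-robin distribution combined with health checks that take failed servers out of rotation – can be sketched in a few lines. This is a conceptual illustration of what the load balancer does internally, not our actual implementation; the backend names are made up.

```python
class RoundRobinBalancer:
    """Conceptual sketch: distribute connections round-robin across
    backends and skip any backend whose health check has failed."""

    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(self.backends)
        self._index = 0

    def mark_unhealthy(self, backend):
        # A failed health check takes the server out of rotation.
        self.healthy.discard(backend)

    def mark_healthy(self, backend):
        # A recovered server is put back into rotation.
        self.healthy.add(backend)

    def next_backend(self):
        # Advance round-robin, skipping servers that are out of rotation.
        for _ in range(len(self.backends)):
            backend = self.backends[self._index % len(self.backends)]
            self._index += 1
            if backend in self.healthy:
                return backend
        raise RuntimeError("no healthy backends available")


lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
lb.mark_unhealthy("app-3")  # e.g. health check timed out
```

After `app-3` is marked unhealthy, successive calls to `next_backend()` alternate between `app-1` and `app-2` only, exactly as traffic is redistributed among the remaining servers.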

Redundancy within the load balancer itself

To ensure that the load balancer itself does not become a single point of failure, it is actually a pair of load balancers that run on separate hardware. The "virtual IP address", which is visible from the outside, is allocated – in a similar way to a Floating IP – to one of the two load balancers, and switches to the other load balancer if a problem is detected with the first one. While it is already possible to build a setup of this kind on your own with two additional virtual servers and a Floating IP, our load balancer service significantly reduces the effort required. Once it has been configured, the load balancer carries out its work without your having to worry about scripting checks and fail-overs or about maintaining additional servers.
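The fail-over logic for the virtual IP address can be summarized as a simple decision rule: the VIP stays with its current holder while that load balancer is healthy, and moves to the peer when a problem is detected. The sketch below is a simplified model of that rule, not the actual mechanism we run.

```python
def elect_vip_holder(primary_ok, secondary_ok, current_holder="primary"):
    """Simplified model of VIP fail-over between a pair of load
    balancers on separate hardware: no move while the current holder
    is healthy, switch to the peer as soon as it is not."""
    if current_holder == "primary" and primary_ok:
        return "primary"
    if current_holder == "secondary" and secondary_ok:
        return "secondary"
    # Current holder has a problem: fail over to whichever peer is healthy.
    if primary_ok:
        return "primary"
    if secondary_ok:
        return "secondary"
    raise RuntimeError("both load balancers are down")
```

Note that the rule deliberately avoids "failing back" while the current holder is healthy, so the VIP does not flap between the two machines.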

Please note that the virtual IP address (VIP) is linked to a specific load balancer and will be deleted if the load balancer is deleted. So that you can offer your users a service with an IP address that remains the same, we recommend that you use a Floating IP in combination with load balancers. Floating IPs (but not Floating Networks) can also be moved between virtual servers and load balancers, which means that you can seamlessly replace an individual server with a load balancer setup.
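Because a Floating IP can point at either a virtual server or a load balancer, replacing a single server with a load balancer setup amounts to changing one target reference. The sketch below builds such a reassignment request; the endpoint path and field names are illustrative assumptions, not the documented API schema.

```python
import json

def reassign_floating_ip(floating_ip, target_uuid, target_type="load_balancer"):
    """Hypothetical sketch: build the request that moves a Floating IP
    from a virtual server to a load balancer (or back). Only the
    target field changes; the IP your users see stays the same."""
    if target_type not in ("server", "load_balancer"):
        raise ValueError("target must be a server or a load balancer")
    return {
        "method": "PATCH",
        "path": f"/v1/floating-ips/{floating_ip}",  # assumed path
        "body": json.dumps({target_type: target_uuid}),
    }
```

A usage example: `reassign_floating_ip("192.0.2.10", "<lb-uuid>")` yields a request whose body points the Floating IP at the load balancer, completing the seamless switch-over described above.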


A few tips

Creating and configuring load balancers is currently only possible via the API. The extensions to our Go SDK, Terraform provider and Ansible collection, which build on our API, will be published over the next few days. Existing load balancers will also be displayed in our web-based cloud control panel; configuration via the control panel is planned for a later date. It goes without saying that the API calls required to use our load balancer service are described in detail in our API documentation, where you will also find sample requests and responses for every supported call. Please note that the API specification is currently still designated as "beta", and we reserve the right to make further adjustments that are not fully compatible with the current state.
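Until the SDK and Terraform/Ansible extensions land, the API can be driven with any HTTP client. The sketch below uses only Python's standard library; the base URL, endpoint path, payload fields and bearer-token header are assumptions for illustration – the authoritative details are in the API documentation.

```python
import json
import urllib.request

API_BASE = "https://api.example.com/v1"  # placeholder, not the real base URL

def build_create_lb_request(name, token):
    """Hypothetical sketch: assemble the POST request that would create
    a new load balancer. Field names are illustrative assumptions."""
    body = json.dumps({"name": name}).encode()
    return urllib.request.Request(
        f"{API_BASE}/load-balancers",  # assumed endpoint
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
    )

req = build_create_lb_request("web-lb", "YOUR-API-TOKEN")
# urllib.request.urlopen(req) would then submit the request.
```

Since the specification is still in beta, treat any such client code as subject to change and check it against the current API documentation before relying on it.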

The virtual servers among which incoming connections are to be distributed need to be accessible from the load balancer via a managed private network. Two different use cases are supported for the load balancer itself: it can be created with a public IP address (VIP) and thus accept requests from the Internet, or the VIP can itself be located in a private network, which means that the load balancer can be used e.g. for services within a Kubernetes cluster.
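The two VIP placements differ only in where the address lives. The helper below models that choice as two payload variants; the field names are made-up placeholders, not the documented request schema.

```python
def vip_config(public=True, subnet_uuid=None):
    """Hypothetical sketch of the two supported VIP placements;
    field names are illustrative assumptions."""
    if public:
        # Public VIP: the load balancer accepts requests from the Internet.
        return {"vip": "public"}
    # Private VIP: placed in one of your private subnets, e.g. for
    # services reachable only inside a Kubernetes cluster.
    if subnet_uuid is None:
        raise ValueError("a private VIP needs a subnet")
    return {"vip": {"subnet": subnet_uuid}}
```

Either way, the backend servers are reached over the private network; only the client-facing side of the load balancer changes.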

With load balancers "as a service" you can rely on a tried-and-tested concept with immediate effect without having to worry about the individual components yourself. As incoming traffic is always diverted completely automatically to a functioning system, it is simple for you to optimize the availability of your online services for your users. Reliability – as a service.

Our engineering for your VIP.
Your team
