These Components Make Up an "LBaaS"
cloudscale's load balancers are a well thought-out solution: they help you operate highly available setups and take a lot of tedious work off your shoulders; at the same time, they are so flexible that you can use them – with the right settings – in very different scenarios. This article looks at the various logical components of a load balancer and their options – so you can get the most out of them for your specific case.
The load balancer "as a whole"
In the background, a cloudscale load balancer consists of a redundant pair of virtual servers that we manage for you. Externally, they share an IP address (the so-called VIP, short for "virtual IP address"), which is active on one of the two systems and – similar to a Floating IP – is automatically and almost seamlessly moved to the other system if a problem is detected. In this way, we prevent the load balancer itself from becoming the single point of failure, while you save yourself the effort of having to build and maintain such a setup yourself.
From a logical perspective, the load balancer (i.e. the load balancer object in the API) acts as a bracket that groups the components described below behind the VIP mentioned above.
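As a rough sketch, creating such a load balancer object comes down to a small JSON request body. The field values below ("lb-standard", "rma1") are only illustrative; the authoritative parameters are in the API documentation:

```python
import json

# Illustrative request body for creating a load balancer object.
# The flavor and zone values are examples, not defaults.
payload = {
    "name": "web-lb1",
    "flavor": "lb-standard",  # the managed, redundant pair described above
    "zone": "rma1",
}

body = json.dumps(payload)
print(body)
```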
As always, you can find additional parameters as well as examples of API requests and responses for all of the objects mentioned in our comprehensive API documentation, so that you can try everything out in practice right away.
The listeners
A listener is the ear, so to speak, with which your load balancer listens for incoming connections. If you want to use your load balancer for HTTPS traffic, for example, you will typically set up a listener on TCP port 443. Particularly convenient: at this point you can already define which clients are allowed to establish a connection at all. If you enter one or more IP addresses or ranges in allowed_cidrs, then only these addresses, and no others, can connect to your listener.
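The semantics of allowed_cidrs can be sketched with Python's ipaddress module: a client may connect only if its address falls inside one of the configured ranges (the ranges below are examples, not defaults):

```python
import ipaddress

# Example ranges; clients outside of them are rejected by the listener.
allowed_cidrs = ["203.0.113.0/24", "2001:db8::/32"]

def is_allowed(client_ip: str) -> bool:
    """Return True if client_ip falls within any of the allowed ranges."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in allowed_cidrs)

print(is_allowed("203.0.113.42"))  # True: inside 203.0.113.0/24
print(is_allowed("198.51.100.7"))  # False: not in any listed range
```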
What happens with the traffic once it has been received is determined by the pool you configure. You will usually specify a separate pool for each listener, but it is also possible to have several listeners point to the same pool.
The pools and their pool members
Essentially, a pool collects all incoming connections that can be handled in the same way. First and foremost, this means distributing the connections to one or more backend servers – the so-called pool members – which then process the requests. Separately for each pool member, you can configure the IP address and port at which it is ready to accept the connections from this pool. In our example, an HTTPS server needs to be running, which does not, however, need to be configured on port 443, but can be configured on any port individually for each pool member.
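Sketched as data, a pool's members might look like this; the field names mirror the API but are meant as an illustration (note that each member can use a different port than the listener's 443):

```python
# Each pool member has its own address and port; the backend service does
# not need to listen on the listener's port. Names and values are examples.
pool_members = [
    {"name": "web1", "address": "10.0.0.11", "protocol_port": 8443},
    {"name": "web2", "address": "10.0.0.12", "protocol_port": 9443},
]

for member in pool_members:
    print(f'{member["name"]}: {member["address"]}:{member["protocol_port"]}')
```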
Directly for the pool itself, you configure the scheme according to which the connections are distributed between two or more pool members. Instead of a simple round_robin, you can use least_connections to route new connections to the pool member that currently has the fewest active connections, or source_ip to keep routing connections from a specific client to the same pool member, e.g. for persistent sessions on a website.
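The selection logic behind the three schemes can be sketched in a few lines. This is only an illustration: a real load balancer tracks live connection counts, whereas here they are passed in as a dict, and the hash used for source_ip is our own choice:

```python
import itertools
import hashlib

members = ["web1", "web2", "web3"]

# round_robin: cycle through the members in a fixed order
rr = itertools.cycle(members)

# least_connections: pick the member with the fewest active connections
def least_connections(active: dict) -> str:
    return min(members, key=lambda m: active.get(m, 0))

# source_ip: hash the client address so the same client always lands on
# the same member (useful for persistent sessions)
def source_ip(client_ip: str) -> str:
    digest = hashlib.sha256(client_ip.encode()).digest()
    return members[digest[0] % len(members)]

print(next(rr), next(rr), next(rr), next(rr))  # web1 web2 web3 web1
print(least_connections({"web1": 12, "web2": 3, "web3": 7}))  # web2
```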
Also select the protocol for your pool: With tcp, the pool members "see" or log the IP address of the load balancer as the supposed client, since the payload data is forwarded by the load balancer to the backend over a new TCP connection. You can work around this using proxy or proxyv2 if your server software supports it (such as nginx): With these protocols, the load balancer not only passes the payload data from the original connection on to the backend server, but also includes information about the original client IP.
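For intuition, this is what the extra information looks like in the human-readable PROXY protocol version 1 (proxyv2 uses a binary encoding instead): a single header line prepended to the backend connection, carrying the original client address and port:

```python
# Build a PROXY protocol v1 header line as defined by the PROXY protocol
# specification: "PROXY TCP4 <src> <dst> <srcport> <dstport>\r\n".
def proxy_v1_header(client_ip: str, client_port: int,
                    server_ip: str, server_port: int) -> bytes:
    return (f"PROXY TCP4 {client_ip} {server_ip} "
            f"{client_port} {server_port}\r\n").encode("ascii")

header = proxy_v1_header("198.51.100.7", 54321, "203.0.113.10", 443)
print(header)  # b'PROXY TCP4 198.51.100.7 203.0.113.10 54321 443\r\n'
```

A backend such as nginx parses this header and can then log 198.51.100.7 as the real client instead of the load balancer's address.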
The health monitors
You can optionally configure a health monitor for each pool. This allows you to define under which circumstances the pool members are considered "healthy" – for example, if they respond to pings or return the expected HTTP status code in response to a configurable HTTP request. Using the health monitor, the load balancer can periodically check the individual pool members and continuously adjust the balancing so that incoming connections are only forwarded to functioning backend servers.
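The health-monitor logic boils down to routinely probing each member and only routing to those that pass. In this sketch, check() stands in for a real probe (e.g. an HTTP request expecting a configured status code), and the simulated results are purely illustrative:

```python
# Keep only the pool members whose probe currently succeeds.
def healthy_members(members: list, check) -> list:
    return [m for m in members if check(m)]

# Simulated probe results for illustration
status = {"web1": True, "web2": False, "web3": True}
print(healthy_members(["web1", "web2", "web3"], status.get))
# ['web1', 'web3']
```

In the real setup, this filtering happens continuously, so a member that recovers is automatically put back into rotation.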
Last but not least, we would like to point out that a load balancer can either be accessible from the internet or only from one of your private networks, for example for services within a Kubernetes cluster. Publicly accessible load balancers can also be combined with Floating IPs. By the way, a single load balancer can be used for a large number of services/pools, each with its own set of pool members. All in all, our load balancer "as a service" is not only highly flexible, but also particularly affordable at CHF 1.50 per day – making it the ideal upgrade for setups where availability matters to you.
Servers: Healthy.
Your cloudscale team