Load Balancing SmartFoxServer 2X with HAProxy

In this article we are going to explore several ways to use the open source HAProxy load balancer in conjunction with SFS2X, to increase the scalability and availability of a multiplayer project.

We are going to show different configurations for TCP and Websocket connections and several ways to set up the system for common use cases.

To get the most out of this tutorial you should have a basic understanding of what a Load Balancer is and how it works, and some familiarity with the basics of networking and the OSI Model.

» Introducing HAProxy

In the words of its creators, HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications.

This software is particularly well known in the Linux community and it is a popular choice for many high-traffic websites. It can operate both at layer 4 (TCP) and layer 7 (HTTP) of the OSI Model, so it is perfectly suitable to load balance a multiplayer service such as SmartFoxServer 2X.

In the next sections we’re going to install and set up HAProxy on a dedicated Linux machine to act as a Load Balancer for multiple SFS2X instances, supporting both TCP and Websocket clients.

» Round robin Load Balancing

In our first setup we aim at running a number of SFS2X instances behind a single HAProxy and letting it balance the incoming client traffic. All users will point to the balancer’s IP address (or domain) and the balancer will take care of passing each connection to one of the SFS2X instances.

In particular this will be done using the simple Round Robin algorithm, which distributes the connections evenly across all instances.

To get started we launched a new Ubuntu machine in AWS EC2 and installed HAProxy with this command:

sudo apt install haproxy

After a few seconds the load balancer should be up and running. We can verify it with this:

service haproxy status
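
On systemd-based distributions (such as recent Ubuntu releases) the equivalent systemctl commands can also be used to check the status and make sure the service starts at boot:

sudo systemctl status haproxy
sudo systemctl enable haproxy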

Before we dive into the HAProxy configuration we also need to set up a couple of SmartFoxServer 2X instances. We’ll skip over this part as you should already be familiar with this simple process, and assume the instances are already up and running.

For our example we assume the two SFS2X servers have private IP addresses of 10.0.0.10 and 10.0.0.11.
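
Before touching the balancer configuration, it is worth checking that the HAProxy machine can actually reach both instances on the SFS2X port. Assuming the netcat utility is available, a quick test could be:

nc -zv 10.0.0.10 9933
nc -zv 10.0.0.11 9933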

» Configuring HAProxy

We can now proceed by editing HAProxy’s configuration file, found at /etc/haproxy/haproxy.cfg

For example:

sudo nano /etc/haproxy/haproxy.cfg
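
After any change to this file it is a good idea to validate the configuration and then reload the service; the -c flag only checks the file without starting the process:

sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo service haproxy reload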

HAProxy has a vast number of settings but it is relatively simple to configure, and the default haproxy.cfg file is pretty minimalistic. Essentially there are four main sections in the configuration, called:

  • global: process-wide settings for performance and security
  • defaults: default values that you may or may not override in the next sections
  • frontend: defines the interface with the clients, such as listening ports
  • backend: describes the servers that are available for HAProxy to load balance

While the global and defaults sections appear only once in the .cfg file, there can be multiple frontend and backend sections. For example we could bind multiple ports on the frontend for different services, such as TCP socket and Websocket. Similarly we can define multiple backend sections that map the two frontends to their respective servers.

Let’s take a look at the configuration we have used for this setup to clarify what we have described so far.

Global

global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon
        maxconn 50000

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

What you see here are almost entirely the standard settings from the default config file; we just added the maxconn parameter to allow a high number of incoming connections.

For the moment we can ignore the rest of the settings and move on to the next section.

Defaults

defaults
        log     global
        mode    tcp
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        timeout tunnel  180s

The mode tcp directive tells HAProxy that by default incoming connections should be treated as layer-4 TCP connections, rather than layer-7 HTTP ones. We’ll return to this topic in a moment.

The various timeout directives control the client-to-balancer and balancer-to-server timeouts and can be changed as preferred. The last one, timeout tunnel, is specific to tunneled connections such as Websocket, and once again regulates how long an idle connection is kept open.

Please note that unless otherwise specified, values are interpreted as milliseconds. Alternatively you can add an “s” for seconds, “m” for minutes and “h” for hours.
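
For example, the following two lines set the same 50 second client timeout, just expressed in different units:

        timeout client  50000
        timeout client  50s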

Frontend

In this section of the .cfg file we can define two different frontends: one for the default TCP port 9933 and one for the Websocket traffic, on port 8080.

frontend sfs-socket
        bind *:9933
        default_backend servers-socket

frontend sfs-websocket
        mode http
        bind *:8080
        default_backend servers-websocket

The first thing to note is that we do not define a mode in the sfs-socket frontend, as we have already declared mode tcp in the previous defaults section.

However we do declare mode http in the sfs-websocket frontend declaration to override the defaults.

Finally the default_backend directive refers to the backend blocks that we’re about to define.

Backend

Here we declare the two referenced backend blocks:

backend servers-socket
        balance roundrobin
        server sfs1 10.0.0.10:9933
        server sfs2 10.0.0.11:9933

backend servers-websocket
        mode http
        balance roundrobin
        option  forwardfor
        server sfs1 10.0.0.10:8080 check fall 3 rise 2
        server sfs2 10.0.0.11:8080 check fall 3 rise 2

The servers-socket block defines the targets of TCP load balancing, using the standard SFS2X TCP port value (9933).

Please Note: when defining the server endpoints we always use the private IP addresses of these machines, as the load balancer and its nodes should communicate over a fast, low-latency private network.

The servers-websocket block is where we define the Websocket endpoints, using the default 8080 port.

The option forwardfor directive tells HAProxy to pass the original client IP address to the endpoint by adding an X-Forwarded-For (XFF) entry to the HTTP headers. SmartFoxServer supports this header: it can be enabled via the AdminTool > Server Configurator > Web Server section.

Finally the check fall 3 rise 2 parameters in the server definitions activate the Load Balancer’s health check system, specifying how many checks are required to determine whether a server is active or not (a small tuning example follows the list below):

  • fall 3: means that after 3 consecutive failed checks the server is excluded from the load balancing pool
  • rise 2: indicates that after 2 consecutive successful checks the server is added back to the load balancing pool
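
Health checks run at regular intervals (2 seconds by default); the interval can be tuned with the inter parameter. A minimal sketch of the same server lines, using a hypothetical 5 second interval:

        server sfs1 10.0.0.10:8080 check inter 5s fall 3 rise 2
        server sfs2 10.0.0.11:8080 check inter 5s fall 3 rise 2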

» Caveats

Client IPs in TCP mode

When running in TCP mode there is no way for HAProxy to tell SmartFoxServer the original IP address of the client; therefore all users connecting via TCP will appear to have the IP address of the Load Balancer.

While HAProxy knows the original IP of the sender, it has no knowledge of the content of the TCP packets it is relaying, and therefore there is no way to inject that information into the data stream.

This is only possible when working in HTTP mode.

Other load balancing algorithms

Round-robin is just one of the many load balancing algorithms available in HAProxy. Depending on the mode in use (tcp vs http) it is possible to choose from a wide list of options.

You can learn more about what is available in the HAProxy documentation.
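
For instance, switching our TCP backend from round robin to leastconn (which routes each new connection to the server with the fewest active connections) or to source (which hashes the client IP so the same client tends to reach the same server) only requires changing the balance directive. A minimal sketch based on the backend defined earlier:

backend servers-socket
        balance leastconn
        server sfs1 10.0.0.10:9933
        server sfs2 10.0.0.11:9933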

SSL Certificates

If you need to activate HTTPS for Websocket or TCP encryption you will need to perform extra configuration steps. In particular you will need to deploy your SSL certificate on the Load Balancer, since it is the entry point for all of your clients.

It is beyond the scope of this tutorial to walk you through the HAProxy SSL setup process, but you can read all the details in the excellent documentation provided on the HAProxy website.
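
To give an idea of what the setup involves, SSL termination is typically enabled on the frontend bind line, pointing to a PEM file containing the certificate chain and private key. The snippet below is only a sketch: the 8443 port and the certificate path are hypothetical, and everything else would remain as shown earlier.

frontend sfs-websocket-ssl
        mode http
        bind *:8443 ssl crt /etc/haproxy/certs/mydomain.pem
        default_backend servers-websocket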

What about UDP?

HAProxy does not support UDP load balancing, so the solution presented here would not work with an SFS2X project that requires it.

Possible alternatives could be:

  • IPVS, a Linux-only, kernel-based load balancing tool
  • Nginx, another popular proxy/load balancer/web server (see the sketch below for an idea of a possible UDP setup)
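
For reference, a UDP proxy built with Nginx’s stream module could look roughly like the sketch below; the 9933 port and the private IPs simply mirror the examples used so far, and this should be taken as an untested illustration (the stream module must be available in your Nginx build):

stream {
    upstream sfs_udp {
        server 10.0.0.10:9933;
        server 10.0.0.11:9933;
    }
    server {
        listen 9933 udp;
        proxy_pass sfs_udp;
    }
}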

Single point of failure

As you have probably realized, if all the client traffic goes through one Load Balancer, it becomes a single point of failure. How can we prevent a service blackout if HAProxy becomes unavailable?

Typically we would need to run an active HAProxy and one or more passive replicas (i.e. with exactly the same configuration) that can take over whenever it is necessary.

These servers would also need to sit behind a Virtual IP address which can be pointed to a different machine when necessary, making the transition from a failed Load Balancer to an active one fully transparent.

If you’re interested in learning more about this technique you can find the details in this article by Oracle.
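
One common tool used to implement such a Virtual IP on Linux is keepalived, a VRRP daemon that moves a floating address between the active and passive balancers. It is not covered in this article, so the fragment below is only a rough sketch with hypothetical interface name and address, to be placed on the active node (the passive node would use state BACKUP and a lower priority):

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    virtual_ipaddress {
        10.0.0.100/24
    }
}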

» Backup servers

An interesting feature of HAProxy is that of backup servers: instances defined in the backend section of the config that are not added to the load balancing pool until all non-backup servers have become unavailable.

Let’s go back to our websocket setup for instance:

backend servers-websocket
        mode http
        balance roundrobin
        option  forwardfor
        server sfs1 10.0.0.10:8080 check fall 3 rise 2
        server sfs2 10.0.0.11:8080 backup check fall 3 rise 2

Here we have slightly modified the configuration by adding a backup keyword to the sfs2 instance (last line). What does it do?

By marking this server as backup, the Load Balancer will no longer distribute clients between the two instances but rather keep sending users to sfs1 until it becomes unavailable.

When this happens the sfs2 instance becomes active and remains so until one or more non-backup servers are added back to the pool, for example when sfs1 is restarted and becomes functional again.

This simple option can be useful in different scenarios, such as running a large Lobby server acting as the entry point for users, who can then be sent to other servers to play games. The single Lobby server could run behind HAProxy with a backup Lobby ready to take over when necessary.

Here’s a hypothetical architecture for a cluster that combines what we just described with the previous setup:

We now have a Lobby Balancer as the main entry point for our application: here users will log in, manage their profile, search for other users, create and manage their Buddy Lists and chat with them.

The Game Balancer instead will direct players to different game servers where they can be matched with other users and play.

Finally we have a main database, shared among all servers, that can be used to track active users, access their state, game statistics, leaderboards, buddy lists and more.
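
To make the idea more concrete, the backend sections of the two balancers could look roughly like this; the server names and private IP addresses are purely hypothetical, and each balancer runs on its own machine with its own haproxy.cfg:

# On the Lobby Balancer: one active Lobby plus a backup ready to take over
backend lobby-servers
        mode http
        balance roundrobin
        server lobby1 10.0.0.20:8080 check fall 3 rise 2
        server lobby2 10.0.0.21:8080 backup check fall 3 rise 2

# On the Game Balancer: round robin across the game servers
backend game-servers
        mode http
        balance roundrobin
        server game1 10.0.0.30:8080 check fall 3 rise 2
        server game2 10.0.0.31:8080 check fall 3 rise 2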

» Wrapping up

The topic of high availability and scalability in clusters is a huge subject and we’ve barely scratched the surface. We may return to it in the future with more articles but, for now, we hope to have provided enough concepts to get you started experimenting.

As usual, if you have any comments, questions or doubts let us know via our SmartFoxServer support forum.