haproxy.conf. Here is a sample config, tested on some not-so-small environments:
global
    log /var/run/syslogd.sock local0
    stats socket /usr/share/haproxy/haproxy-stats.sock

defaults
    mode tcp                # XMPP is plain TCP, not HTTP
    balance leastconn
    timeout connect 5s      # timeout during connect
    timeout client 24h      # timeout client->haproxy (frontend)
    timeout server 60m      # timeout haproxy->server (backend)

listen access_clients
    bind 18.104.22.168:5222
    server server1 10.0.0.1:5222 check fall 3 id 1005 inter 5000 rise 3 slowstart 120000 weight 50
    server server2 10.0.0.2:5222 check fall 3 id 1006 inter 5000 rise 3 slowstart 120000 weight 50
    server server3 10.0.0.3:5222 check fall 3 id 1007 inter 5000 rise 3 slowstart 120000 weight 50

listen access_clients_ssl
    bind 22.214.171.124:5223
    server server1 10.0.0.1:5223 check fall 3 id 1008 inter 5000 rise 3 slowstart 240000 weight 50
    server server2 10.0.0.2:5223 check fall 3 id 1009 inter 5000 rise 3 slowstart 240000 weight 50
    server server3 10.0.0.3:5223 check fall 3 id 1010 inter 5000 rise 3 slowstart 240000 weight 50

listen access_servers
    bind 126.96.36.199:5269
    server server1 10.0.0.1:5269 check fall 3 id 1011 inter 5000 rise 3 slowstart 60000 weight 50
    server server2 10.0.0.2:5269 check fall 3 id 1012 inter 5000 rise 3 slowstart 60000 weight 50
    server server3 10.0.0.3:5269 check fall 3 id 1013 inter 5000 rise 3 slowstart 60000 weight 50
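The `check fall 3 inter 5000 rise 3` part of each server line drives HAProxy's health checking: probe every 5000 ms, mark a server down after 3 consecutive failures and up again after 3 consecutive successes. As a rough illustration only (this is not HAProxy's actual code; the function and class names are mine), the logic amounts to a periodic TCP connect probe plus a pair of consecutive-result counters:

```python
import socket

def tcp_probe(host, port, timeout=5.0):
    """One TCP connect attempt, like HAProxy's basic 'check'
    (the 5 s limit mirrors 'timeout connect 5s')."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

class HealthState:
    """Tiny model of the fall/rise counters: a server goes DOWN after
    `fall` consecutive failed probes and UP again after `rise`
    consecutive successful ones (HAProxy probes every `inter` ms)."""
    def __init__(self, fall=3, rise=3):
        self.fall, self.rise = fall, rise
        self.up = True
        self.streak = 0  # consecutive results contradicting current state

    def record(self, ok):
        if ok == self.up:
            self.streak = 0
        else:
            self.streak += 1
            if self.up and self.streak >= self.fall:
                self.up, self.streak = False, 0
            elif not self.up and self.streak >= self.rise:
                self.up, self.streak = True, 0
        return self.up
```

The point of requiring several consecutive results in a row is to avoid flapping: one lost probe does not take a server out of rotation, and one lucky probe does not put a still-recovering server back in.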
I will not explain every single option, as that is covered in the excellent HAProxy documentation, but I will briefly describe what is going on here. As you can see, the config reflects the diagram introduced before: we have three frontend services (5222, 5223, 5269 - for client TLS, client SSL, and server-to-server) pointing to three backend servers. HAProxy in this example will spread the load equally across all backend servers for all services (leastconn + weight), and it will start accepting connections gradually after a failure (slowstart) to prevent a connection storm from hitting servers when they come back up. There are a couple of options you can fine-tune for your needs, like the timeouts and fail counts. This is a base for your own experiments with the LB topic. Go and play with it!
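To make the leastconn + weight + slowstart behaviour above concrete, here is a rough Python sketch. This is my own simplification, not HAProxy's actual scheduler: the class and function names are made up, and the linear weight ramp during slowstart is an approximation of the idea, not the exact algorithm.

```python
import time

class Server:
    """Minimal model of one backend 'server' line (illustration only)."""
    def __init__(self, name, weight, slowstart_ms):
        self.name = name
        self.weight = weight
        self.slowstart_ms = slowstart_ms
        self.active = 0                    # current open connections
        self.up_since = time.monotonic()   # when the server last came up

    def effective_weight(self, now):
        # During slowstart the usable weight ramps up from (almost) zero
        # to the configured weight over slowstart_ms milliseconds, so a
        # freshly recovered server is fed traffic gradually.
        elapsed_ms = (now - self.up_since) * 1000
        ramp = min(1.0, elapsed_ms / self.slowstart_ms)
        return max(1, int(self.weight * ramp))

def pick(servers, now=None):
    """leastconn + weight: choose the server with the lowest
    connections-per-effective-weight ratio."""
    now = time.monotonic() if now is None else now
    return min(servers, key=lambda s: s.active / s.effective_weight(now))
```

With equal weights of 50, as in the config above, this degenerates to plain least-connections; the weights only start to matter if you give some servers more capacity than others, and slowstart temporarily shrinks the weight of a server that has just come back.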
One of the main advantages of HAProxy is that it is extremely simple, fast to set up, highly reliable, and has a low hardware footprint too. Thanks to all these pros we can imagine a variety of uses, like geographically dispersed proxy servers for XMPP (a super interesting topic - I will write about it some day), or cross-proxying for better availability.
This is not my last post about LB and XMPP; the next stop is Amazon Elastic Load Balancing (ELB), which is a great solution for admins who host their servers on AWS. Stay tuned!