Virtual Server via Direct Routing

This page describes the working principle of the Direct Routing request dispatching technique and how to use it to construct server clusters.

Direct Routing request dispatching technique

This request dispatching approach is similar to the one implemented in IBM's NetDispatcher. The virtual IP address is shared by the real servers and the load balancer. The load balancer also has an interface configured with the virtual IP address, which it uses to accept request packets, and it routes the packets directly to the chosen servers. All the real servers have a non-ARP alias interface configured with the virtual IP address, or redirect packets destined for the virtual IP address to a local socket, so that they can process the packets locally. The load balancer and the real servers must have one of their interfaces physically linked by a hub/switch. The architecture of virtual server via direct routing is illustrated as follows:

When a user accesses a virtual service provided by the server cluster, a packet destined for the virtual IP address (the IP address for the virtual server) arrives. The load balancer (LinuxDirector) examines the packet's destination address and port. If they match a virtual service, a real server is chosen from the cluster by a scheduling algorithm, and the connection is added into the hash table which records connections. Then, the load balancer forwards the packet directly to the chosen server. When a later incoming packet belongs to this connection and the chosen server can be found in the hash table, the packet is again routed directly to that server. When the server receives the forwarded packet, it finds that the packet is for the address on its alias interface or for a local socket, so it processes the request and returns the result directly to the user. After a connection terminates or times out, the connection record is removed from the hash table.

The direct routing workflow is illustrated in the following figure:

The load balancer simply changes the MAC address of the data frame to that of the chosen server and retransmits it on the LAN. This is why the load balancer and each server must be directly connected to one another by a single uninterrupted segment of a LAN. If you run into ARP problems with the cluster, see the ARP problem page for more information.
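
You can observe this MAC rewriting with a packet sniffer on the shared segment. A minimal sketch, assuming tcpdump is installed on the real server, eth0 is its interface on that segment, and <VIP> stands for the virtual IP address:

# print link-level headers for traffic to the VIP; forwarded packets keep the
# VIP as the destination IP but carry the real server's MAC as the destination
tcpdump -e -n -i eth0 host <VIP>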

How to build the kernel

First, get a fresh copy of the Linux kernel source of the right version. Second, get the right version of the IP virtual server patch and apply it to the kernel. Third, make sure that the required kernel compile options are selected. Fourth, rebuild the kernel. Once you have your kernel properly built, update your system kernel and reboot.
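
The exact commands depend on the kernel version and on where the sources are unpacked; the following is only a rough sketch of these steps for a 2.2.x kernel, assuming the source tree is in /usr/src/linux and the matching IPVS patch has already been downloaded:

cd /usr/src/linux
# apply the IPVS patch that matches this kernel version
patch -p1 < /path/to/the-ipvs-patch
# select the options listed in the sections below
make menuconfig
# rebuild the kernel and its modules
make dep
make clean
make bzImage
make modules
make modules_install
# install the new kernel image, update the boot loader and reboot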

1. The VS patch for kernel 2.0.36

Kernel Compile Options:

Code maturity level options --->
    [*] Prompt for development and/or incomplete code/drivers

Networking options --->
    [*] Network firewalls
    ...
    [*] IP: forwarding/gatewaying
    ...
    [*] IP: firewalling
    ...
    [*] IP: masquerading
    ...
    [*] IP: ippfvs(LinuxDirector) masquerading (EXPERIMENTAL)
    Virtual server request dispatching technique---
    ( ) VS-NAT
    ( ) VS-Tunneling
    (X) VS-DRouting

You also have to choose one scheduling algorithm.

    Virtual server scheduling algorithm
    (X) WeightedRoundRobin
    ( ) LeastConnection
    ( ) WeightedLeastConnection

[ ] IP: enabling ippfvs with the local node feature

Finally, cd to the ippfvsadm source directory and type "make install" to install ippfvsadm into your system directory.

2. The IPVS patch for kernel 2.2.x

Kernel Compile Options:

Code maturity level options --->
    [*] Prompt for development and/or incomplete code/drivers

Networking options --->
    [*] Network firewalls
    ...
    [*] IP: forwarding/gatewaying
    ...
    [*] IP: firewalling
    ...
    [*] IP: masquerading
    ...
    [*] IP: masquerading virtual server support (EXPERIMENTAL)
    (12) IP masquerading table size (the Nth power of 2)
    <M> IPVS: round-robin scheduling
    <M> IPVS: weighted round-robin scheduling
    <M> IPVS: least-connection scheduling
    <M> IPVS: weighted least-connection scheduling
    <M> IPVS: locality-based least-connection scheduling
    <M> IPVS: locality-based least-connection with replication scheduling

Finally, cd to the ipvsadm source directory and type "make install" to install ipvsadm into your system directory, or install the ipvsadm RPM package.

3. The IPVS patch for kernel 2.4.x

Kernel Compile Options:

Code maturity level options --->
    [*] Prompt for development and/or incomplete code/drivers

Networking options --->
    [*] Network packet filtering (replaces ipchains)
    [ ]   Network packet filtering debugging
    ...
      IP: Netfilter Configuration  --->
      IP: Virtual Server Configuration  --->
	<M> virtual server support (EXPERIMENTAL)
	[*]   IP virtual server debugging
	(12)   IPVS connection table size (the Nth power of 2)
	--- IPVS scheduler
	<M>   round-robin scheduling
	<M>   weighted round-robin scheduling
	<M>   least-connection scheduling
	<M>   weighted least-connection scheduling
	<M>   locality-based least-connection scheduling
	<M>   locality-based least-connection with replication scheduling
	<M>   destination hashing scheduling
	<M>   source hashing scheduling
	--- IPVS application helper
	<M>   FTP protocol helper
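
If the IPVS core and the schedulers are built as modules, as selected above, they can be loaded before configuring any service (ipvsadm normally pulls them in on demand, but loading them by hand is a quick sanity check). A minimal sketch, assuming the standard 2.4 IPVS module names:

# load the IPVS core and the schedulers you plan to use
modprobe ip_vs
modprobe ip_vs_wrr
modprobe ip_vs_wlc
# the FTP helper is only needed for virtual FTP services
modprobe ip_vs_ftp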

My example for testing virtual server via direct routing

Here is my configuration example for testing virtual server via direct routing. I hope it can give you some clues. The load balancer has the address 172.26.20.111, and the real server has 172.26.20.112; 172.26.20.110 is the virtual IP address. In all the following examples, "telnet 172.26.20.110" will actually reach the real server.

1. For kernel 2.0.x

The load balancer (LinuxDirector), kernel 2.0.36

ifconfig eth0 172.26.20.111 netmask 255.255.255.0 broadcast 172.26.20.255 up
route add -net 172.26.20.0 netmask 255.255.255.0 dev eth0
ifconfig eth0:0 172.26.20.110 netmask 255.255.255.255 broadcast 172.26.20.110 up
route add -host 172.26.20.110 dev eth0:0
ippfvsadm -A -t 172.26.20.110:23 -R 172.26.20.112

The real server 1, kernel 2.0.36 (IP forwarding enabled)

ifconfig eth0 172.26.20.112 netmask 255.255.255.0 broadcast 172.26.20.255 up
route add -net 172.26.20.0 netmask 255.255.255.0 dev eth0
ifconfig lo:0 172.26.20.110 netmask 255.255.255.255 broadcast 172.26.20.110 up
route add -host 172.26.20.110 dev lo:0

When I am on other hosts, 'telnet 172.26.20.110' will actually connect to real server 1.

2. For kernel 2.2.x

The load balancer (LinuxDirector), kernel 2.2.14

ifconfig eth0 172.26.20.111 netmask 255.255.255.0 broadcast 172.26.20.255 up
ifconfig eth0:0 172.26.20.110 netmask 255.255.255.255 broadcast 172.26.20.110 up
echo 1 > /proc/sys/net/ipv4/ip_forward
ipvsadm -A -t 172.26.20.110:23 -s wlc
ipvsadm -a -t 172.26.20.110:23 -r 172.26.20.112 -g
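
You can check that the virtual service and the real server were registered with ipvsadm's list command (-n prints numeric addresses):

# list the current virtual service table
ipvsadm -L -n
# the real server 172.26.20.112 should appear under 172.26.20.110:23 with
# the forwarding method shown as "Route", which means direct routing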

The real server 1, kernel 2.0.36 (IP forwarding enabled)

ifconfig eth0 172.26.20.112 netmask 255.255.255.0 broadcast 172.26.20.255 up
route add -net 172.26.20.0 netmask 255.255.255.0 dev eth0
ifconfig lo:0 172.26.20.110 netmask 255.255.255.255 broadcast 172.26.20.110 up
route add -host 172.26.20.110 dev lo:0

More configuration examples

1. Real server running kernel 2.2.14 or later with hidden device

The load balancer (LinuxDirector), kernel 2.2.14

echo 1 > /proc/sys/net/ipv4/ip_forward
ipvsadm -A -t 172.26.20.110:23 -s wlc
ipvsadm -a -t 172.26.20.110:23 -r 172.26.20.112 -g

The real server 1, kernel 2.2.14

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/all/hidden
echo 1 > /proc/sys/net/ipv4/conf/lo/hidden
ifconfig lo:0 172.26.20.110 netmask 255.255.255.255 broadcast 172.26.20.110 up

You can configure the VIP on an alias of other devices, such as the dummy device, and hide it. Since these are alias interfaces, you can configure as many VIPs as you want. An example using the dummy device is as follows:

echo 1 > /proc/sys/net/ipv4/ip_forward
ifconfig dummy0 0.0.0.0 up
echo 1 > /proc/sys/net/ipv4/conf/all/hidden
echo 1 > /proc/sys/net/ipv4/conf/dummy0/hidden
ifconfig dummy0:0 172.26.20.110 up
ifconfig dummy0:1 <Another-VIP> up
...

2. Real servers running kernel 2.2.x with the redirect approach

The load balancer's configuration is the same as the example above. Real servers running kernel 2.2.x can be configured as follows:

echo 1 > /proc/sys/net/ipv4/ip_forward
ipchains -A input -j REDIRECT 23 -d 172.26.20.110 23 -p tcp
...

With this ipchains redirect command, packets destined for the address 172.26.20.110, port 23, TCP protocol will be redirected to a local socket. Note that the service daemon must listen on all addresses (0.0.0.0) or on the VIP address (172.26.20.110 here).
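
A quick way to confirm this on the real server, assuming the net-tools netstat is available, is to look at the listening TCP sockets:

# the telnet daemon (usually inetd) should be bound to 0.0.0.0:23
# or to the VIP 172.26.20.110:23
netstat -l -t -n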

3. Real servers having different network routes

With virtual server via direct routing, the real servers can follow different network routes to the clients (different Internet links), which is good for performance. The load balancer and the real servers use a private LAN to communicate. Here is a configuration example.

The load balancer (LinuxDirector), kernel 2.2.14

ifconfig eth0 <an IP address> ...
...
ifconfig eth0:0 <VIP> netmask 255.255.255.255 broadcast <VIP> up
ifconfig eth1 192.168.0.1 netmask 255.255.255.0 broadcast 192.168.0.255 up
ipvsadm -A -t <VIP>:23
ipvsadm -a -t <VIP>:23 -r 192.168.0.2 -g
...

The real server 1, kernel 2.0.36

ifconfig eth0 <a separate IP address> ...
# Follow the different network route
...
ifconfig eth1 192.168.0.2 netmask 255.255.255.0 broadcast 192.168.0.255 up
route add -net 192.168.0.0 netmask 255.255.255.0 dev eth1
ifconfig lo:0 <VIP> netmask 255.255.255.255 broadcast <VIP> up
route add -host <VIP> dev lo:0
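
Since the real servers reply to the clients directly, the routing step elided above must also point each real server's default route at its own Internet link rather than at the load balancer. A sketch, with <gateway-of-this-link> standing for the router on this server's own link:

# send reply traffic out through this real server's own Internet link
route add default gw <gateway-of-this-link>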