We need more information than this. For suggestions, look at the section on the information needed to solve problems.
The usual mistake is to have the default gw for the real-servers set incorrectly: for VS-NAT it's the director, and for VS-DR and VS-Tun it's _not_ the director. Setting up an LVS by hand is tedious; you can use the configure script, which will trap most errors in setup.
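As a sketch of what this means in practice (net-tools syntax; the addresses here are made up for illustration):

    # VS-NAT real-server: the default gw is the director's inside (DIP) address
    # (192.168.1.1 is an example address for the director)
    route add default gw 192.168.1.1

    # VS-DR or VS-Tun real-server: the default gw is the ordinary router
    # for the network, NOT the director
    # (192.168.1.254 is an example address for the router)
    route add default gw 192.168.1.254

The idea is that with VS-DR and VS-Tun the real-servers reply directly to the client, so their default route should bypass the director.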
Usually you have problems with authd/identd. The simplest fix is to stop your service from calling the identd server on the client (i.e. disconnect your service from identd).
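For example, if the service in question is sendmail, the usual way to stop it calling the client's identd is to set the ident timeout to zero (a sketch of the standard .mc setting; rebuild sendmail.cf after changing it):

    dnl in sendmail.mc: a timeout of 0s turns ident lookups off,
    dnl so sendmail never opens a connection to the client's identd
    define(`confTO_IDENT', `0s')dnl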
There isn't a simple answer.
The speed of the director is determined by the packet throughput from/to the clients, not by the number of real-servers. From the mailing list, 300-500MHz UMP (uniprocessor) directors running 2.2.x kernels with the ipvs patch can handle 100Mbps throughput. We don't know what is needed for 1Gbps throughput, but postings on the mailing list show that top-end UMP machines (e.g. 800MHz) can't handle it.
For the complicated answer, see the section on estimating director performance.
Yes. For LVS'ed services, the director handles ICMP redirects and MTU discovery, delivering the ICMP packets to the correct real-server. ICMP packets for non-LVS'ed services are delivered locally.
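One way to see MTU discovery working from the client side (a sketch using Linux iputils ping; replace <VIP> with your virtual IP) is to send full-sized packets with the don't-fragment bit set:

    # 1472 data bytes + 28 bytes of IP/ICMP headers = a 1500 byte packet;
    # -M do sets the don't-fragment bit, so any hop with a smaller MTU
    # must send back an ICMP "fragmentation needed" error
    ping -M do -s 1472 <VIP>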
This means that no service is listening for your client's requests: the machine at the other end is replying (with a TCP RST) that nothing is listening on that port. If the LVS is otherwise working, the director is forwarding packets to a real-server which doesn't have the service set up.
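You can confirm this from both ends (a sketch; port 80/http is just an example service):

    # on the director: is the virtual service defined, and do the
    # real-servers show up with non-zero weights?
    ipvsadm -L -n

    # on each real-server: is anything actually listening on the port?
    netstat -an | grep LISTEN | grep ':80'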
No, they all seem to work well enough. If you are going into production, you should test that yours works with a netpipe test (see the performance page).
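Netpipe runs between a pair of machines (a sketch assuming the NPtcp binary from the NetPIPE distribution; the hostname is an example):

    # on the receiving machine
    NPtcp

    # on the transmitting machine, pointing at the receiver
    NPtcp -h receiver.example.com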
LVS is kernel code, and in particular network code. The kernel's network code is only SMP in 2.4.x kernels, so to take advantage of SMP with LVS you must be running a 2.4.x kernel.
Michael Brown <michael_e_brown@dell.com> wrote on 26 Dec 2000:
I've seen significant improvements using dual and quad processors with 2.4; under 2.2 there are improvements, but not astonishing ones. Under 2.4.0test I've seen things like 90% saturation of a Gbit link using quad processors, 70% using dual processors, and 55% using a single processor.
I haven't had much of a chance to do a full comparison of 2.2 vs 2.4, but most of the evidence points to a >100% improvement for network-intensive tasks.