Private interconnect

Note: The Cluster Edition supports only the UDP network protocol for private interconnects; the TCP network protocol cannot be used.
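Because the interconnect carries only UDP traffic, it can help to confirm UDP reachability between nodes before configuring the cluster. The sketch below is not part of Adaptive Server; the peer address and port are assumptions chosen for illustration. Run it with the server argument on one node and with no arguments on the other.

    import socket
    import sys

    # Hypothetical values -- substitute the private-interconnect IP of
    # the peer node; the port is an arbitrary choice, not a Sybase default.
    PEER = ("192.168.10.2", 15100)
    BIND = ("0.0.0.0", 15100)

    def echo_server():
        """Run on the peer node: echo each UDP datagram back to its sender."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(BIND)
        while True:
            data, addr = sock.recvfrom(1024)
            sock.sendto(data, addr)

    def probe():
        """Send one datagram over the interconnect and wait for the echo."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(2.0)
        try:
            sock.sendto(b"interconnect-probe", PEER)
            data, addr = sock.recvfrom(1024)
            print(f"UDP reply from {addr[0]}: {data!r}")
        except socket.timeout:
            print("no reply: check cabling, switch ports, and firewalls",
                  file=sys.stderr)
        finally:
            sock.close()

    if __name__ == "__main__":
        echo_server() if "server" in sys.argv[1:] else probe()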

A private interconnect is an essential component of a shared-disk cluster installation. It is a physical connection that allows internode communication. A private interconnect can be as simple as an Ethernet crossover cable, or as complex as a proprietary interconnect with a specialized communications protocol. Configurations of more than two nodes typically require a switch, which enables high-speed communication between the nodes in the cluster.

The interconnect technology you use to connect the nodes must scale to handle the traffic the application creates through contention; the volume of traffic is directly proportional to the number of inter-instance updates and inter-instance transfers. Sybase recommends that you implement the highest-bandwidth, lowest-latency interconnect available.
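Because traffic grows with inter-instance activity, a back-of-the-envelope estimate shows whether a given link has headroom. The numbers below (page size, transfer rate, message overhead) are assumptions invented for this example, not Sybase figures; substitute measurements from your own workload.

    # Illustrative estimate of interconnect load from inter-instance traffic.
    # All workload figures are assumed values for the example.
    PAGE_SIZE_BYTES = 2048        # assumed 2K server page size
    TRANSFERS_PER_SEC = 50_000    # assumed inter-instance page transfers/sec
    OVERHEAD_BYTES = 100          # assumed per-transfer protocol overhead

    load_bits = TRANSFERS_PER_SEC * (PAGE_SIZE_BYTES + OVERHEAD_BYTES) * 8
    print(f"estimated load: {load_bits / 1e9:.2f} Gbits per second")
    # Prints 0.86 Gbits per second -- close to saturating a 1Gb link,
    # which is why generous bandwidth headroom matters.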

Sybase recommends that Linux environments use a 1Gb Ethernet interconnect.

Adaptive Server CE supports the current standards for interconnects. Sybase recommends that you research the available interconnects to find the one that works best for your site.

Table 1-7 compares interconnect technologies available for use with Adaptive Server CE.

Table 1-7: Comparing interconnect technologies

Technology              Bandwidth                     Full-duplex bandwidth            Maximum signal length
1Gb and 10Gb Ethernet   1 and 10 Gbits per second     2 and 20 Gbits per second        Kilometers
InfiniBand              2.5 and 10 Gbits per second   5, 20, and 60 Gbits per second   Kilometers
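To relate the figures in Table 1-7 to database traffic, the sketch below computes the wire serialization time for a single page at each link speed. The 2K page size is an assumption, and protocol overhead and switch latency are ignored.

    # Time to serialize one page onto the wire at each Table 1-7 link speed.
    PAGE_BITS = 2048 * 8  # assumed 2K page

    for name, gbits in [("1Gb Ethernet", 1), ("10Gb Ethernet", 10),
                        ("InfiniBand 2.5Gb", 2.5), ("InfiniBand 10Gb", 10)]:
        usec = PAGE_BITS / (gbits * 1e9) * 1e6
        print(f"{name:17s} {usec:5.2f} microseconds per page")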

The Cluster Edition supports InfiniBand: