The Cluster Edition requires:
Database devices in the Cluster Edition must support SCSI-3 persistent group reservations (SCSI PGRs). The Cluster Edition uses SCSI PGRs to guarantee data consistency during cluster membership changes. Sybase cannot guarantee data consistency on disk subsystems that do not support SCSI PGRs; such a configuration is supported only for test and development environments that can tolerate the possibility of data corruption.
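As an illustration only, on Linux one way to confirm that a shared device honors SCSI-3 persistent group reservations is to query its reservation capabilities with the sg_persist utility from the sg3_utils package (the device name below is a placeholder for your shared LUN):

    # Report the persistent reservation capabilities of a candidate device
    sg_persist --in --report-capabilities /dev/sdc

    # Read any registered reservation keys to confirm the device responds
    # to PERSISTENT RESERVE IN commands
    sg_persist --in --read-keys /dev/sdc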
Homogeneous hardware nodes. All nodes must run the same operating system version; however, the number of processors and the amount of memory can vary from node to node.
All database devices, including quorum devices, must be located on raw partitions. You cannot use the Network File System (NFS).
Raw partitions must be accessible from each node using the same access path. Sybase recommends storage area network (SAN) connected devices.
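As a sketch of one Linux-specific approach (device and partition names are examples only; other platforms use their own raw device conventions), each shared partition can be bound to the same raw device name on every node so that the access path is identical cluster-wide:

    # Run identically on every node so /dev/raw/raw1 and /dev/raw/raw2
    # refer to the same shared partitions on each node
    raw /dev/raw/raw1 /dev/sdc1     # quorum device
    raw /dev/raw/raw2 /dev/sdc2     # master database device
    raw -qa                         # verify the bindings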
Local user temporary databases do not require shared storage and can use local file systems created as private devices. Local system temporary databases, by contrast, do require shared storage.
For test environments, you can use a single node or machine to run multiple instances of the Cluster Edition in a cluster configuration. However, if you do so, you must use the local file system (not NFS) or SAN storage for the database devices.
All hardware nodes must use Network Time Protocol (NTP) or a similar mechanism to ensure their clocks are synchronized.
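For example, assuming the nodes run the standard ntpd daemon, you can confirm on each node that its clock is synchronized to a time source:

    # List the peers ntpd is using; an asterisk marks the peer the
    # local clock is currently synchronized to
    ntpq -p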
All Adaptive Server Enterprise software and configuration files (including the $SYBASE directory and the interfaces file) must be installed on a Network File System (NFS) or a clustered file system (CFS or GFS) that is accessible from each node using the same access path. Supported clustered file system versions are detailed in the next section.
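For instance, if the software is installed on an NFS export, each node might mount it at the same path through an /etc/fstab entry such as the following (the server name, export path, and mount point are placeholders):

    # Mount the shared $SYBASE directory at the same path on every node
    nfsserver:/export/sybase   /opt/sybase   nfs   rw,hard,intr   0 0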
A high-speed network interconnect (for example, Gigabit Ethernet or InfiniBand) providing a local network that connects all hardware nodes participating in the cluster.
Sybase recommends that each node in the cluster have three physically separate network interfaces:
A public network – for clients to connect.
A primary private network – for cluster interconnect traffic.
A secondary private network – for cluster interconnect traffic.
The private networks should be physically separated from the public network; they are needed for security, fault tolerance, and performance. For fault tolerance, the two private network cards should be on different fabrics so that the cluster can survive the failure of a single fabric.
The private interconnect fabrics should not contain links to any machines not participating in the cluster (that is, all cluster nodes should have their primary interconnect connected to the same switch, and that switch should not be connected to any other switches or routers).
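As a sketch of this layout (all host names and addresses are illustrative), each node's /etc/hosts might list one public address and two private interconnect addresses per node:

    # Public network (client connections)
    10.20.30.1    node1
    10.20.30.2    node2
    # Primary private interconnect
    192.168.1.1   node1-priv1
    192.168.1.2   node2-priv1
    # Secondary private interconnect
    192.168.2.1   node1-priv2
    192.168.2.2   node2-priv2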