The Cluster Edition requires:
Database devices in the Cluster Edition must support SCSI-3 persistent group reservations (SCSI PGRs). Cluster Edition uses SCSI PGRs to guarantee data consistency during cluster membership changes. Sybase cannot guarantee data consistency on disk subsystems that do not support SCSI PGRs (such a configuration is supported for test and development environments that can tolerate the possibility of data corruption).
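One way to verify that a disk subsystem reports SCSI-3 persistent reservation support is with the `sg_persist` utility from the Linux sg3_utils package. This is an illustrative sketch only, not part of the Cluster Edition; the device path is a placeholder.

```shell
# Hedged sketch: decide whether a disk reports SCSI-3 persistent
# reservation capability from `sg_persist` output. sg3_utils is an
# assumption here; /dev/sdc below is a placeholder device path.
reports_pgr_caps() {
  # On a device that implements PERSISTENT RESERVE IN, `sg_persist -c`
  # prints a "Report capabilities response" block; otherwise the
  # command fails and prints an error instead.
  grep -q 'Report capabilities response'
}
# Typical use (run as root against each shared database device):
#   sg_persist --in -c /dev/sdc | reports_pgr_caps \
#     && echo "PGR supported" || echo "PGR not supported"
```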
Homogeneous hardware nodes. All nodes must run the same operating system version; however, the number of processors and the amount of memory can vary from node to node.
The quorum must reside on its own device.
You must create a local system temporary database for each instance during the initial startup of the cluster, and again whenever you add an instance to the cluster. Create the local system temporary database on a shared device. You can create or drop a local system temporary database from any instance, but you can access it only from the owning instance.
Create local system temporary databases using the Adaptive Server plug-in or sybcluster. For more information, see “Setting up local system temporary databases”.
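Local system temporary databases can also be created from `isql` with the `create system temporary database` Transact-SQL command. The sketch below simply generates one such statement per instance; the instance name, device name, and size are placeholders, not values from this guide.

```shell
# Hedged sketch: emit the Transact-SQL to create a local system
# temporary database for one instance. Instance, device, and size
# values are placeholders.
gen_tempdb_sql() {
  inst="$1"; dev="$2"; size_mb="$3"
  printf 'create system temporary database %s_tempdb for instance %s on %s = %s\n' \
    "$inst" "$inst" "$dev" "$size_mb"
}
# Typical use, piping into isql once per instance in the cluster:
#   gen_tempdb_sql ase1 tempdbdev1 200 | isql -Usa -S MYCLUSTER
```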
All database devices, including quorum devices, must be located on raw partitions. You cannot use the Network File System (NFS).
WARNING! Avoiding the use of file system devices for clusters – The Cluster Edition is not designed to run on a file system; mounting a non-clustered file system on multiple nodes immediately causes corruption, leading to total loss of the cluster and all of its databases. For this reason, Sybase does not support file system devices when running on multiple nodes.
Raw partitions must be accessible from each node using the same access path. Sybase recommends storage area network (SAN) connected devices.
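A quick way to confirm that the same device path resolves on every node is to test it over ssh from one node. This is a sketch under stated assumptions: passwordless ssh between nodes, and placeholder node names and device path.

```shell
# Hedged sketch: confirm the same raw-device path is visible on every
# node. Raw partitions show up as character (or block) devices; the
# node names and device path below are placeholders.
is_device_node() { [ -c "$1" ] || [ -b "$1" ]; }
check_path_on_nodes() {
  path="$1"; shift
  for node in "$@"; do
    # assumes passwordless ssh between cluster nodes
    ssh "$node" "[ -c '$path' ] || [ -b '$path' ]" \
      || { echo "$path is not a device on $node"; return 1; }
  done
}
# Typical use: check_path_on_nodes /dev/raw/raw1 blade1 blade2
```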
Local user temporary databases do not require shared storage and can use local file systems created as private devices, unlike local system temporary databases, which do require shared storage.
For test environments, you can use a single node or machine to run multiple instances of the Cluster Edition in a cluster configuration. However, if you do so, you must use the local file system (not NFS) or SAN Storage for the database devices.
All hardware nodes must use Network Time Protocol (NTP) or a similar mechanism to ensure their clocks are synchronized.
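If the nodes use ntpd, you can confirm on each node that the daemon has actually selected a synchronization source. In `ntpq -pn` output, the peer line that begins with `*` is the source the daemon is currently synchronized to; this check is a sketch, not a Cluster Edition tool.

```shell
# Hedged sketch: report whether ntpd has selected a sync source, based
# on `ntpq -pn` peer output (the '*' tally code marks the system peer).
ntp_synced() { grep -q '^\*'; }
# Typical use on each node:
#   ntpq -pn | ntp_synced && echo "clock synchronized" || echo "NOT synchronized"
```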
All Adaptive Server Enterprise software and configuration files (including the $SYBASE directory and the interfaces file) must be installed on a Network File System (NFS) or a clustered file system (CFS or GFS) that is accessible from each node using the same access path. Supported clustered file system versions are detailed in the next section.
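As one illustration of the same-access-path requirement, an /etc/fstab entry mounting a shared $SYBASE directory over NFS might look like the fragment below, repeated identically on every node. The server name, export path, and mount options are placeholders, not values from this guide.

```
# /etc/fstab on every cluster node -- identical mount point for $SYBASE
# (server name, export path, and mount options are placeholders)
nfssrv1:/export/sybase  /opt/sybase  nfs  rw,hard,intr  0  0
```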
A high-speed network interconnection (for example, a gigabit Ethernet) providing a local network connecting all hardware nodes participating in the cluster.
Sybase recommends that each node in the cluster have two physically separate network interfaces:
A primary network – for cluster interconnect traffic.
A secondary network – a failover path for cluster interconnect traffic if the primary network fails.
The primary and secondary networks should be physically separated from each other; this separation is needed for security, fault tolerance, and performance. For fault tolerance, the two network cards should be on different fabrics so that the cluster survives the failure of one network.
The private interconnect fabrics should not contain links to any machines not participating in the cluster (that is, all cluster nodes should have their primary interconnect connected to the same switch, and that switch should not be connected to any other switches or routers).