A user-executed command that stops one or more instances at a specified time to initiate a planned failover, downtime, or other administrative task. An action changes the state of an instance. See Chapter 6, “Managing the Workload.”
An instance assigned to a logical cluster, on which that logical cluster normally runs. See Chapter 6, “Managing the Workload.”
The movement of an established client connection from one instance to another. The client is migrated from the old to the new instance without the client application being aware of the migration. Client migration is used for dynamic load distribution and for administrative actions such as logical cluster failback. See Chapter 2, “Client Applications and Client/Server Interaction,” for a complete description.
A collection of homogeneous nodes in a network that operate as a single system. Each node has its own CPU and memory. All nodes communicate with each other through private, high-speed communication pathways.
The server module that provides distributed locking services for the cluster. The CLM enables sharing of buffers, global objects, and metadata among the instances.
The migration of an established client connection to a different instance in an attempt to balance the workload within a logical cluster.
Adaptive Server provides a list of failover addresses to high-availability-aware clients when they connect. This allows multiple clients to fail over and eliminates the need for the “HAFAILOVER” entry in the directory services or interfaces file.
The ability to switch automatically to another instance upon the failure or abnormal termination of a previously active node.
A set of failover instances defined for a logical cluster. Failover groups let you specify preference and order for failover instances. See Chapter 6, “Managing the Workload.”
An instance on which a logical cluster can run if one or more of its base instances fail. See Chapter 6, “Managing the Workload.”
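The preference and ordering described above can be sketched as a simple selection over ordered failover groups. This is an illustrative model only, with assumed instance names; it is not the actual Adaptive Server workload manager logic.

```python
# Hedged sketch: choose the first available failover instance, honoring
# group order (group 1 preferred over group 2) and order within a group.
# Instance names and group layout are illustrative assumptions.
failover_groups = [["inst3", "inst4"], ["inst5"]]
online = {"inst4", "inst5"}  # instances currently able to accept the logical cluster

def pick_failover(groups, online_instances):
    """Return the first online instance in preference order, or None."""
    for group in groups:
        for instance in group:
            if instance in online_instances:
                return instance
    return None

print(pick_failover(failover_groups, online))  # inst3 is down, so inst4 is chosen
```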
A number that uniquely identifies a named instance in the Adaptive Server shared-disk cluster.
The state of an instance in a logical cluster as perceived by that logical cluster. Thus, an instance can be physically online, but offline to a given logical cluster. See Chapter 6, “Managing the Workload.”
A set of weighted metrics used to determine the relative workload on an instance in a logical cluster. You can create your own load profiles or use one of the profiles provided by Sybase. See Chapter 6, “Managing the Workload.”
A computed value of the overall load on an instance; a unitless number that can be used to compare relative workloads on different instances in a logical cluster, or on the same instance at different times. See Chapter 6, “Managing the Workload.”
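The idea of combining weighted metrics into a single unitless score can be sketched as a weighted average. The metric names, weights, and formula below are illustrative assumptions, not Sybase's actual load profile metrics or scoring formula.

```python
# Hypothetical load-score sketch: a weighted average of per-metric
# utilization values (0-100). Higher score means heavier relative load.
def load_score(metrics, weights):
    """Combine per-metric values into one unitless, comparable number."""
    total_weight = sum(weights.values())
    return sum(weights[name] * metrics[name] for name in weights) / total_weight

instance_a = {"cpu": 80, "run_queue": 40, "io": 20}
instance_b = {"cpu": 30, "run_queue": 10, "io": 60}
profile = {"cpu": 50, "run_queue": 30, "io": 20}  # assumed weights, summing to 100

print(load_score(instance_a, profile))  # 56.0 -> instance_a carries the heavier load
print(load_score(instance_b, profile))  # 30.0
```

Because the score is unitless, it is only meaningful for comparison: between instances in the same logical cluster, or for the same instance at different times.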
Space for temporary tables and worktables. Each instance in the cluster has a local system temporary database that it alone can access.
A method of abstracting the physical cluster so that multiple application services can be established. A logical cluster supports fine-tuned management of the workload within the cluster by enabling application- or user-specific service level agreements, resource assignments, and failover rules. Applications connect directly to a logical cluster. See Chapter 6, “Managing the Workload.”
The mechanism by which an instance can direct an incoming client connection to a different instance in the cluster. Login redirection is used to route inbound connections to instances in a logical cluster and for load balancing. See Chapter 2, “Client Applications and Client/Server Interaction,” for a complete description.
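A minimal sketch of the redirection idea, under assumed names and a toy load table; the real protocol between Adaptive Server and the client libraries is more involved than this.

```python
# Illustrative login-redirection flow (an assumption, not the real protocol):
# the instance that accepts the login consults workload state and may hand
# back the identity of a less-loaded instance for the client to reconnect to.
loads = {"inst1": 80.0, "inst2": 30.0}  # hypothetical load scores

def login(target):
    """Return ('accept', instance) or ('redirect', better_instance)."""
    best = min(loads, key=loads.get)
    if best != target:
        return ("redirect", best)  # client transparently reconnects to `best`
    return ("accept", target)

print(login("inst1"))  # redirected to the less-loaded inst2
print(login("inst2"))  # accepted in place
```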
A logical cluster that accepts connections that have no defined route. By default, the system logical cluster has the open property, but you can grant the open property to another logical cluster. Only one logical cluster can have the open property at a time. See Chapter 6, “Managing the Workload.”
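The open property can be pictured as a fallback in connection routing: connections with a defined route go to their logical cluster, and everything else goes to the single open cluster. The structures and names below are illustrative assumptions, not the actual implementation.

```python
# Hedged sketch of open-cluster routing. Application and cluster names
# are made up for illustration.
routes = {"sales_app": "sales_lc", "hr_app": "hr_lc"}  # application -> logical cluster
open_cluster = "system_lc"  # exactly one logical cluster holds the open property

def route_connection(application):
    """Connections with no defined route go to the open logical cluster."""
    return routes.get(application, open_cluster)

print(route_connection("sales_app"))   # routed to sales_lc
print(route_connection("adhoc_tool"))  # no route defined, so system_lc
```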
The shared-disk cluster, with a specific quorum disk, member instances, and interconnection information. All instances in the physical cluster have direct access to a single installation of the databases and are monitored and managed by the cluster membership service.
This device stores information that defines the cluster, including the cluster name, the names of its instances, and the number and names of its nodes. In addition, the quorum device holds state information about the instances in the cluster and defines cluster membership.
The practice of setting aside an instance for a specific logical cluster and only allowing clients routed to that logical cluster to connect to it. To practice resource reservation, you must assign the open property to a logical cluster other than the system logical cluster. See Chapter 6, “Managing the Workload.”
A cluster configuration where all instances have direct access to all data on all shared disks. In the Cluster Edition, all instances have direct access to database devices and jointly manage the single installation of the databases.
A system composed of multiple CPUs that share a single memory and run a single operating system. The CPUs symmetrically run all functionality of the operating system and applications. This is the non-clustered Adaptive Server environment.
A logical representation of the physical cluster. The system logical cluster is automatically created when the physical cluster is created, and it has the same name as the physical cluster. All background tasks run on the system logical cluster. See Chapter 6, “Managing the Workload.”