Start the Cluster Edition in a single instance until
the upgrade is complete.
Back up all old databases.
Start the earlier version of Adaptive Server:
Move to the old $SYBASE directory:
cd $SYBASE
Source SYBASE.sh (Bourne shell) or SYBASE.csh (C shell).
source SYBASE.csh
Execute the runserver file:
$SYBASE/$SYBASE_ASE/install/RUN_server_name
In another window, change to the new $SYBASE directory.
Source SYBASE.sh (Bourne shell) or SYBASE.csh (C shell) in the new $SYBASE directory:
source SYBASE.csh
Run the pre-upgrade test on the old server using the preupgrade utility, located at $SYBASE/$SYBASE_ASE/upgrade, where $SYBASE and $SYBASE_ASE are the values for the Cluster Edition.
Do not change the default packet size from 512 to 2048 until after the upgrade is complete.
If the default network packet size is set to 2048 during the pre-upgrade, you cannot log in to finish the pre-upgrade on a 12.5.x server, because there is no way to tell preupgrade to use a 2048-byte packet size.
Execute the following:
$SYBASE/$SYBASE_ASE/upgrade/preupgrade -Sserver_name -Ppassword
Where:
$SYBASE_ASE – is the Cluster Edition version of Adaptive Server
password – is the system administrator’s password
Correct all errors from the output of the pre-upgrade test. Re-run preupgrade until it succeeds without errors.
Restart the old Adaptive Server, if required.
Run the reserved word check on the old Adaptive Server:
Install the Cluster Edition version of installupgrade:
isql -Usa -Ppassword -Sserver_name -i$SYBASE/$SYBASE_ASE/scripts/installupgrade
Install the Cluster Edition version of usage.sql:
isql -Usa -Ppassword -Sserver_name -i$SYBASE/$SYBASE_ASE/upgrade/usage.sql
Log in to the old Adaptive Server and execute sp_checkreswords on all databases:
use sybsystemprocs
go
sp_checkreswords
go
Correct any errors the reserved word check reveals.
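Because sp_checkreswords must be run in every database, a small loop can build the isql batches for you. A minimal sketch follows; the database list is a placeholder (in practice, take it from "select name from master..sysdatabases"), and the batches are printed for review rather than executed:

```shell
# Placeholder database list; replace with the output of sysdatabases.
DATABASES="master model sybsystemdb sybsystemprocs"

# Build one "use <db> / sp_checkreswords" batch per database.
batches=""
for db in $DATABASES; do
    batches="${batches}use ${db}
go
sp_checkreswords
go
"
done

printf '%s' "$batches"
# When satisfied, pipe the batches into: isql -Usa -Ppassword -Sserver_name
```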
Shut down the old Adaptive Server.
Copy the old Adaptive Server configuration file mycluster.cfg from the old $SYBASE directory to the new $SYBASE directory.
Create the cluster input file, for example mycluster.inp:
#all input files must begin with a comment
[cluster]
name = mycluster
max instances = 2
master device = /dev/raw/raw101
config file = /sybase/server_name.cfg
interfaces path = /sybase/
traceflags =
primary protocol = udp
secondary protocol = udp
[management nodes]
hostname = blade1
hostname = blade2
[instance]
id = 1
name = server_name
node = blade1
primary address = blade1
primary port start = 38456
secondary address = blade1
secondary port start = 38466
errorlog = /sybase/install/server_name.log
config file = /sybase/server_name.cfg
interfaces path = /sybase/
traceflags =
additional run parameters =
[instance]
id = 2
name = server_name_ns2
node = blade2
primary address = blade2
primary port start = 38556
secondary address = blade2
secondary port start = 38566
errorlog = /sybase/install/server_name_ns2.log
config file = /sybase/server_name.cfg
interfaces path = /sybase/
traceflags =
additional run parameters =
For details on what this input file must contain, see “Creating the cluster input file.” The first instance’s server_name should be the name of the old server from which you are upgrading.
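Because this file packs several sections into a flat key = value layout, a quick sanity check can help before you hand it to dataserver. A minimal sketch, assuming the layout shown above, lists each [instance] section’s name, node, and primary port with awk (the heredoc here is a trimmed copy of the example file; point the awk command at your real mycluster.inp instead):

```shell
# Trimmed sample of the cluster input file for illustration only:
cat > mycluster.inp <<'EOF'
#all input files must begin with a comment
[cluster]
name = mycluster
max instances = 2
[instance]
id = 1
name = server_name
node = blade1
primary port start = 38456
[instance]
id = 2
name = server_name_ns2
node = blade2
primary port start = 38556
EOF

# List name, node, and primary port for every [instance] section.
instances=$(awk -F' *= *' '
    /^\[instance\]/ { in_inst = 1; next }
    /^\[/           { in_inst = 0 }
    in_inst && $1 == "name"               { name = $2 }
    in_inst && $1 == "node"               { node = $2 }
    in_inst && $1 == "primary port start" { print name, node, $2 }
' mycluster.inp)
echo "$instances"
```

Each output line pairs an instance with its node and starting port, which is exactly the information you need when adding interfaces file entries in the next step.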
Add an additional entry to the interfaces file for each of the instances in your cluster input file (described in Step 9). See “Configuring the interfaces file” for more information.
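As a sketch of what those additional entries look like, the following generates candidate entries for the two example instances into a scratch file. The tab-indented master/query line layout is the usual Unix interfaces file format, but verify it against the entries already in your file before merging anything:

```shell
# Emit one interfaces entry (server line plus tab-indented master/query
# lines) for an instance. Arguments: instance name, host, port.
add_entry () {
    printf '%s\n\tmaster tcp ether %s %s\n\tquery tcp ether %s %s\n\n' \
        "$1" "$2" "$3" "$2" "$3"
}

# Names, hosts, and ports come from the example cluster input file.
{
    add_entry server_name     blade1 38456
    add_entry server_name_ns2 blade2 38556
} > interfaces.new

cat interfaces.new   # review, then merge into the interfaces file in /sybase/
```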
Determine the raw device to use for the quorum device. The Cluster Edition requires a raw device on shared disks; do not use a file-system device.
Create the quorum device and start the new instance with the old master device:
$SYBASE/$SYBASE_ASE/bin/dataserver \
--instance=server_name \
--cluster_input=mycluster.inp \
--quorum_dev=/dev/raw/raw102 \
--buildquorum -M$SYBASE
The server_name you indicate with the --instance parameter must be the name of the server from which you are upgrading, and the interfaces file must contain an entry for this instance. Any additional options, such as -M, must be present in the RUN file, because dataserver does not read them from the quorum device. For complete dataserver documentation, see the Users Guide to Clusters.
Run the upgrade utility, where instance_name is the first instance in your cluster that has the same name as the server from which you are upgrading:
$SYBASE/$SYBASE_ASE/upgrade/upgrade -S instance_name -Ppassword
Log in to the instance. Create the local system temporary database devices and local system temporary databases for each of the instances in your cluster. The syntax is:
create system temporary database database_name for instance instance_name on device_name = size
See “Setting up local system temporary databases” for more detailed information.
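Since you need one such database per instance, it can help to generate the batches up front. This is a dry-run sketch: the database and device names below are placeholders (not from the manual), the size is for your layout to decide, and the output is printed for review rather than executed:

```shell
# Build one "create system temporary database" batch per instance.
# Instance names match the example cluster; db/device names are placeholders.
sql=""
while read -r inst db dev size; do
    sql="${sql}create system temporary database ${db}
    for instance ${inst} on ${dev} = ${size}
go
"
done <<'EOF'
server_name     mycluster_tempdb_1 tempdbdev1 100
server_name_ns2 mycluster_tempdb_2 tempdbdev2 100
EOF

printf '%s' "$sql"
# When the statements look right, pipe them into isql against the instance.
```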
Shut down the instance. Log in to the instance with isql and issue:
shutdown instance_name
Restart the cluster.
$SYBASE/$SYBASE_ASE/bin/dataserver \
--instance=server_name \
--quorum_dev=/dev/raw/raw102 \
-M$SYBASE
Log in to the Cluster Edition and execute sp_checkreswords on all databases. For example, log in to the instance and execute:
use sybsystemprocs
go
sp_checkreswords
go
Correct any errors from the reserved word check.
Copy the old run_server file to the new directory and modify it to point to binaries in the correct $SYBASE directories:
Add this argument to the run_server file:
--quorum_dev=<path to the quorum device>
Remove these options, as the information is now stored in the quorum device:
-c
-i
-e
See “Creating the runserver files” for more information.
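The edits above can be sketched as a small awk filter. The sample RUN file in the heredoc is an assumption modeled on the typical one-option-per-line layout of these files, and the quorum path is the one used earlier in this procedure; inspect the result against your actual copied RUN file before starting the server:

```shell
# A typical copied RUN file (layout is an assumption; use your real copy):
cat > RUN_server_name.old <<'EOF'
#!/bin/sh
$SYBASE/$SYBASE_ASE/bin/dataserver \
-sserver_name \
-d/dev/raw/raw101 \
-e$SYBASE/$SYBASE_ASE/install/server_name.log \
-c$SYBASE/server_name.cfg \
-i$SYBASE \
-M$SYBASE
EOF

# Drop the -c, -e, and -i option lines (now stored in the quorum device)
# and add the --quorum_dev argument before the -M option.
awk '
    /^-[cie]/ { next }
    /^-M/     { print "--quorum_dev=/dev/raw/raw102 \\" }
    { print }
' RUN_server_name.old > RUN_server_name

cat RUN_server_name   # review before use
```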
Start each instance in the cluster:
cd $SYBASE/$SYBASE_ASE/install
startserver -fRUN_server_name
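With multiple instances, the same step repeats once per RUN file. A dry-run sketch, assuming the instance names from the example cluster input file, prints the commands rather than executing them:

```shell
# Build the startserver command for each instance (printed, not executed).
for inst in server_name server_name_ns2; do
    cmd="startserver -fRUN_${inst}"
    echo "$cmd"   # run these from $SYBASE/$SYBASE_ASE/install
done
```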
Install the system procedures:
isql -Usa -Ppassword -Sserver_name -i$SYBASE/$SYBASE_ASE/scripts/installmaster
If Adaptive Server includes auditing, run installsecurity:
isql -Usa -Ppassword -Sserver_name -i$SYBASE/$SYBASE_ASE/scripts/installsecurity
Run installcommit:
isql -Usa -Ppassword -Sserver_name -i$SYBASE/$SYBASE_ASE/scripts/installcommit