alter table

Description

Adds new columns to a table; drops or modifies existing columns; adds, changes, or drops constraints; changes the locking scheme, partitioning, or other properties of an existing table; and enables or disables triggers on the table.

Syntax

alter table [[database.][owner].]table_name
	{add column_name datatype
		[default {constant_expression | user | null}]
		{identity | null | not null} 
		[off row | in row]
		[[constraint constraint_name]
		{{unique | primary key}
			[clustered | nonclustered]
			[asc | desc]
			[with {fillfactor = pct,
				max_rows_per_page = num_rows,
				reservepagegap = num_pages}]
			[on segment_name]
		| references [[database.]owner.]ref_table 
			[(ref_column)]
			[match full]
		| check (search_condition)}]
		[encrypt [with key_name] [decrypt_default value]]
		[, next_column]...
	| add {[constraint constraint_name]
	{unique | primary key}
			[clustered | nonclustered]
			 (column_name [asc | desc]	[, column_name [asc | desc]...])
		[with {fillfactor = pct,
				max_rows_per_page = num_rows, 
				reservepagegap = num_pages}]
		[on segment_name]
	| foreign key (column_name [{, column_name}...])
		references [[database.]owner.]ref_table
		[(ref_column [{, ref_column}...])]
		[match full]
	| check (search_condition)}
	| drop {column_name [, column_name]... 
		| constraint constraint_name}
	| modify column_name 
		[datatype [null | not null]]
		[encrypt [with key_name] [decrypt_default value] | decrypt]
		[, next_column]...
	| replace column_name
		{default {constant_expression | user | null}
		| decrypt_default {constant_expression | null}
		| drop decrypt_default}
	| lock {allpages | datarows | datapages}
	| with exp_row_size=num_bytes
	| partition number_of_partitions
	| unpartition
	| partition_clause
	| add_partition_clause}

alter table syntax for partitions:

partition_clause::=
	partition by range (column_name[, column_name]...) 
		 ([partition_name] values <= ({constant | MAX} 
			[, {constant | MAX}] ...) [on segment_name] 
			[, [partition_name] values <= ({constant | MAX} 
				[, {constant | MAX}] ...) [on segment_name]]...)

	| partition by hash (column_name[, column_name]...)
		{ (partition_name [on segment_name] 
			[, partition_name [on segment_name]]...) 
		| number_of_partitions 
			[on (segment_name[, segment_name] ...)]} 

	| partition by list (column_name) 
		 ([partition_name] values (constant[, constant] ...) 
			[on segment_name] 
			[, [partition_name] values (constant[, constant] ...) 
				[on segment_name]] ...)

	| partition by roundrobin 
		{ (partition_name [on segment_name] 
			[, partition_name [on segment_name]]...) 
		| number_of_partitions 
			[on (segment_name [, segment_name]...)]}

add_partition_clause::=
	add partition 
		{ ([partition_name] values <= ({constant | MAX} 
			[, {constant | MAX}]...) 
			[on segment_name] 
			[, [partition_name ] values <= ({constant | MAX} 
				[, {constant | MAX}] ...) 
				[on segment_name]]...) 

		| ([partition_name] values (constant[, constant] ...) 
			[on segment_name] 
			[, [partition_name] values (constant[, constant] ...) 
				[on segment_name]] ...)} 

alter table syntax for computed columns

alter table 
	add column_name {compute | as}
		computed_column_expression...
			[materialized | not materialized]
	drop column_name 
	modify column_name {null | not null | 
		{materialized | not materialized} [null | not null] | 
		{compute | as} computed_column_expression 
			[materialized | not materialized]
			[null | not null]}

alter table syntax for dropping partitions

alter table table_name drop partition
	partition_name [, partition_name]...

Parameters

table_name

is the name of the table to change. Specify the database name if the table is in another database, and specify the owner’s name if more than one table of that name exists in the database. The default value for owner is the current user, and the default value for database is the current database.

add

specifies the name of the column or constraint to add to the table. If Component Integration Services is enabled, you cannot use add for remote servers.

column_name

is the name of a column in that table. If Java is enabled in the database, the column can be a Java-SQL column.

datatype

is any system datatype except bit or any user-defined datatype except those based on bit.

If Java is enabled in the database, datatype can be the name of a Java class installed in the database, either a system class or a user-defined class. See Java in Adaptive Server Enterprise for more information.

default

specifies a default value for a column. If you specify a default and the user does not provide a value for this column when inserting data, Adaptive Server inserts this value. The default can be a constant_expression, user (to insert the name of the user who is inserting the data), or null (to insert the null value).

Adaptive Server generates a name for the default in the form of tabname_colname_objid, where tabname is the first 10 characters of the table name, colname is the first 5 characters of the column name, and objid is the object ID number for the default. Setting the default to null drops the default.

If Component Integration Services is enabled, you cannot use default for remote servers.

constant_expression

is a constant expression to use as a default value for a column. It cannot include global variables, the name of any columns, or other database objects, but can include built-in functions. This default value must be compatible with the datatype of the column.

user

specifies that Adaptive Server should insert the user name as the default if the user does not supply a value. The datatype of the column must be char(30), varchar(30), or a type that Adaptive Server implicitly converts to char; however, if the datatype is not char(30) or varchar(30), truncation may occur.
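
For example, the following sketch (the entered_by column name is illustrative) adds a column whose default is the name of the user performing the insert:

alter table authors
    add entered_by varchar (30) default user not null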

null | not null

specifies the Adaptive Server behavior during data insertion if no default exists.

null specifies that a column is added that allows nulls. Adaptive Server assigns a null value during inserts if a user does not provide a value.

The properties of a bit-type column must always be not null.

not null specifies that a column is added that does not allow nulls. Users must provide a non-null value during inserts if no default exists.

If you do not specify null or not null, Adaptive Server uses not null by default. However, you can switch this default using sp_dboption to make the default compatible with the SQL standards. If you specify (or imply) not null for the newly added column, a default clause is required. The default value is used for all existing rows of the newly added column, and applies to future inserts as well.

encrypt [with key_name]

specifies an encrypted column and the key used to encrypt it. You may use encrypt [with key_name] on columns with any system datatype other than text and image.

decrypt_default constant_expression

specifies that this column returns a default value for users who do not have decrypt permission, and constant_expression is the value Adaptive Server returns on select statements instead of the encrypted value. The value can be NULL on nullable columns only. If the decrypt_default value cannot be converted to the column’s datatype, Adaptive Server catches the conversion error only when it executes the query.

identity

indicates that the column has the IDENTITY property. Each table in a database can have one IDENTITY column with a datatype of:

  • Exact numeric with a scale of 0, or

  • Any of the integer datatypes, including signed or unsigned bigint, int, smallint, or tinyint.

IDENTITY columns are not updatable and do not allow nulls.

IDENTITY columns store sequential numbers, such as invoice numbers or employee numbers that are automatically generated by Adaptive Server. The value of the IDENTITY column uniquely identifies each row in a table.

off row | in row

specifies whether the Java-SQL column is stored separately from the row, or in storage allocated directly in the row.

The storage for an in row column cannot exceed 16K bytes, depending on the page size of the database server and other variables. The default value is off row.

constraint

introduces the name of an integrity constraint. If Component Integration Services is enabled, you cannot use constraint for remote servers.

constraint_name

is the name of the constraint. It must conform to the rules for identifiers and be unique in the database. If you do not specify the name for a table-level constraint, Adaptive Server generates a name in the form of tabname_colname_objectid, where tabname is the first 10 characters of the table name, colname is the first 5 characters of the column name, and objectid is the object ID number for the constraint. If you do not specify the name for a unique or primary key constraint, Adaptive Server generates a name in the format tabname_colname_tabindid, where tabindid is a string concatenation of the table ID and index ID.

Constraints do not apply to the data that already exists in the table at the time the constraint is added.

unique

constrains the values in the indicated column or columns so that no two rows can have the same non-null value. This constraint creates a unique index that can be dropped only if the constraint is dropped. You cannot use this option along with the null option described above.

primary key

constrains the values in the indicated column or columns so that no two rows can have the same value and so that the value cannot be NULL. This constraint creates a unique index that can be dropped only if the constraint is dropped.

clustered | nonclustered

specifies that the index created by a unique or primary key constraint is a clustered or nonclustered index. clustered is the default (unless a clustered index already exists for the table) for primary key constraints; nonclustered is the default for unique constraints. There can be only one clustered index per table. See create index for more information.

asc | desc

specifies whether the index is to be created in ascending (asc) or descending (desc) order. The default is ascending order.

with fillfactor=pct

specifies how full to make each page when Adaptive Server creates a new index on existing data. “pct” stands for percentage. The fillfactor percentage is relevant only when the index is created. As data changes, pages are not maintained at any particular level of fullness.

WARNING!  Creating a clustered index with a fillfactor affects the amount of storage space your data occupies, since Adaptive Server redistributes the data as it creates the clustered index.

The default for fillfactor is 0; this is used when you do not include with fillfactor in the create index statement (unless the value has been changed with sp_configure). When specifying a fillfactor, use a value between 1 and 100.

A fillfactor of 0 creates clustered indexes with completely full pages and nonclustered indexes with completely full leaf pages. It leaves a comfortable amount of space within the index B-tree in both clustered and nonclustered indexes. There is seldom a reason to change the fillfactor.

If the fillfactor is set to 100, Adaptive Server creates both clustered and nonclustered indexes with each page 100 percent full. A fillfactor of 100 makes sense only for read-only tables—tables to which no data will ever be added.

fillfactor values smaller than 100 (except 0, which is a special case) cause Adaptive Server to create new indexes with pages that are not completely full. A fillfactor of 10 might be a reasonable choice if you are creating an index on a table that will eventually hold a great deal more data, but small fillfactor values cause each index (or index and data) to take more storage space.
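
For example, the following sketch (the constraint name is illustrative) adds a unique constraint whose index is created with pages 75 percent full:

alter table titles
    add constraint title_id_uniq unique nonclustered (title_id)
    with fillfactor = 75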

max_rows_per_page = num_rows

limits the number of rows on data pages and the leaf-level pages of indexes. Unlike fillfactor, the max_rows_per_page value is maintained until it is changed with sp_chgattribute.

If you do not specify a value for max_rows_per_page, Adaptive Server uses a value of 0 when creating the index. When specifying max_rows_per_page for data pages, use a value between 0 and 256. The maximum number of rows per page for nonclustered indexes depends on the size of the index key; Adaptive Server returns an error message if the specified value is too high.

For indexes created by constraints, a max_rows_per_page setting of 0 creates clustered indexes with full pages and nonclustered indexes with full leaf pages. A setting of 0 leaves a comfortable amount of space within the index B-tree in both clustered and nonclustered indexes.

If max_rows_per_page is set to 1, Adaptive Server creates both clustered and nonclustered leaf index pages with one row per page at the leaf level. You can use this to reduce lock contention on frequently accessed data.

Low max_rows_per_page values cause Adaptive Server to create new indexes with pages that are not completely full, use more storage space, and may cause more page splits.

WARNING!  Creating a clustered index with max_rows_per_page can affect the amount of storage space your data occupies, since Adaptive Server redistributes the data as it creates the clustered index.
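
For example, a similar sketch (constraint name illustrative) limits the leaf pages of the constraint’s index to four rows per page:

alter table titles
    add constraint title_rows_uniq unique nonclustered (title_id)
    with max_rows_per_page = 4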

reservepagegap = num_pages

specifies a ratio of filled pages to empty pages to be left during extent I/O allocation operations for the index created by the constraint. For each specified num_pages, an empty page is left for future expansion of the table. Valid values are 0 – 255. The default value, 0, leaves no empty pages.

on segment_name

specifies the segment on which the index exists or is to be placed. When using on segment_name, the logical device must already have been assigned to the database with create database or alter database, and the segment must have been created in the database with sp_addsegment. See your System Administrator or use sp_helpsegment for a list of the segment names available in your database.

If you specify clustered and use the on segment_name option, the entire table migrates to the segment you specify, since the leaf level of the index contains the actual data pages.

When using on segment_name for partitions, it specifies the segment on which to place the partition.
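
For example, assuming a segment named indexseg has already been created with sp_addsegment, this sketch (constraint and segment names are illustrative) places the index created by the constraint on that segment:

alter table titles
    add constraint title_seg_uniq unique nonclustered (title_id)
    on indexseg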

references

specifies a column list for a referential integrity constraint. You can specify only one column value for a column constraint. By including this constraint with a table that references another table, any data inserted into the referencing table must already exist in the referenced table.

To use this constraint, you must have references permission on the referenced table. The specified columns in the referenced table must be constrained by a unique index (created by either a unique constraint or a create index statement). If no columns are specified, there must be a primary key constraint on the appropriate columns in the referenced table. Also, the datatypes of the referencing table columns must exactly match the datatype of the referenced table columns.

If Component Integration Services is enabled, you cannot use references for remote servers.

foreign key

specifies that the listed columns are foreign keys in this table whose matching primary keys are the columns listed in the references clause.
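
For example, assuming titles.title_id is constrained by a unique index or primary key constraint, the following sketch (constraint name illustrative) makes title_id in salesdetail a foreign key that references it:

alter table salesdetail
    add constraint salesdetail_title_fk
    foreign key (title_id) references titles (title_id)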

ref_table

is the name of the table that contains the referenced columns. You can reference tables in another database. Constraints can reference up to 192 user tables and internally generated worktables. Use the system procedure sp_helpconstraint to check a table’s referential constraints.

ref_column

is the name of the column or columns in the referenced table.

match full

specifies that if all values in the referencing columns of a referencing row are:

  • Null – the referential integrity condition is true.

  • Non-null values – if there is a referenced row where each corresponding column is equal in the referenced table, then the referential integrity condition is true.

If they are neither, then the referential integrity condition is false when:

  • All values are non-null and not equal, or

  • Some of the values in the referencing columns of a referencing row are non-null values, while others are null.

check

specifies a search_condition constraint that Adaptive Server enforces for all the rows in the table. If Component Integration Services is enabled, you cannot use check for remote servers.

search_condition

is a Boolean expression that defines the check constraint on the column values. These constraints can include:

  • A list of constant expressions introduced with in

  • A set of conditions, which may contain wildcard characters, introduced with like

    An expression can include arithmetic operations and Transact-SQL functions. The search_condition cannot contain subqueries, aggregate functions, parameters, or host variables.
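
For example, the following sketch (constraint name and values are illustrative) adds a check constraint built from an in list and a like pattern:

alter table publishers
    add constraint pub_id_chk
    check (pub_id in ('1389', '0736', '0877') or pub_id like '99[0-9][0-9]')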

next_column

includes additional column definitions (separated by commas) using the same syntax described for a column definition.

drop

specifies the name of a column or constraint to drop from the table. If Component Integration Services is enabled, you cannot use drop for remote servers.

modify

specifies the name of the column whose datatype or nullability you are changing.

replace

specifies the column whose default value you want to change with the new value specified by a following default clause. If Component Integration Services is enabled, you cannot use replace for remote servers.

enable | disable trigger

enables or disables a trigger. For more information, see the System Administration Guide.
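
For example, assuming a trigger named deltitle exists on the titles table, the following sketch disables it and then re-enables it:

alter table titles disable trigger deltitle
alter table titles enable trigger deltitle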

lock datarows | datapages | allpages

changes the locking scheme to be used for the table.

with exp_row_size=num_bytes

specifies the expected row size. You can only apply this parameter:

  • To datarows and datapages locking schemes.

  • To tables with variable-length rows.

  • When alter table performs a data copy, such as with alter table add or modify. You cannot use with exp_row_size=num_bytes with alter table lock change operations.

Valid values are 0, 1, and any value between the minimum and maximum row length for the table. The default value is 0, which means a server-wide setting is applied.

partition number_of_partitions

adds (number_of_partitions –1) empty partitions to an unpartitioned table (round-robin partitioned table with a single partition). Thus, the total number of partitions for the table becomes number_of_partitions. Even if Component Integration Services (CIS) is enabled, you cannot use partition for remote servers.

unpartition

changes a round-robin partitioned table without indexes, to an unpartitioned table. Even if CIS is enabled, you cannot use unpartition for remote servers.
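
For example, the following sketches first divide the titles table into four round-robin partitions and then (assuming the table has no indexes) return it to a single, unpartitioned state:

alter table titles partition 4
alter table titles unpartition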

partition by range

specifies records are to be partitioned according to values in the partitioning column or columns. Each partitioning column value is compared with sets of user-supplied upper and lower bounds to determine partition assignment.

column_name

when used in the partition_clause, specifies a partition key column. A partition key column cannot be an encrypted column.

partition_name

specifies the name of a new partition on which table records are to be stored. Partition names must be unique within the set of partitions on a table or index. Partition names can be delimited identifiers if set quoted_identifier is on. Otherwise, they must be valid identifiers.

If partition_name is omitted, Adaptive Server creates a name in the form table_name_partition_id. Adaptive Server truncates partition names that exceed the allowed maximum length.

values <= constant | MAX

specifies the inclusive upper bound of values for a named partition. Specifying a constant value for the highest partition bound imposes an implicit integrity constraint on the table. The keyword MAX specifies the maximum value in a given datatype.

on segment_name

when used in the partition_clause, specifies the segment on which the partition is to be placed. When using on segment_name, the logical device must already have been assigned to the database with create database or alter database, and the segment must have been created in the database with sp_addsegment. See your System Administrator or use sp_helpsegment for a list of the segment names available in your database.

partition by hash

specifies records are to be partitioned by a system-supplied hash function. The function computes the hash value of the partition keys that specify the partition to which records are assigned.
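
For example, this sketch (partition and segment names are illustrative, and the segments are assumed to exist) hash-partitions the titles table on title_id into two partitions:

alter table titles partition by hash (title_id)
    (p1 on seg1, p2 on seg2)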

partition by list

specifies records are to be partitioned according to literal values specified in the named column. The partition key contains only one column. You can list up to 250 constants as the partition values for each list partition.
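
For example, this sketch (partition names and values are illustrative) list-partitions the titles table on the type column:

alter table titles partition by list (type)
    (p_business values ('business'),
    p_cooking values ('mod_cook', 'trad_cook'))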

partition by roundrobin

specifies records are to be partitioned in a sequential manner. A round-robin partitioned table has no partitioning key. Neither the user nor the optimizer knows in which partition a particular record resides.
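
For example, this sketch (segment names are illustrative) creates three round-robin partitions, one on each of three segments:

alter table titles partition by roundrobin 3
    on (seg1, seg2, seg3)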

add partition

applies only to range- or list-partitioned tables:

  • For range-partitioned tables – adds one or more partitions to the upper end of a range partitioned table.

  • For list-partitioned tables – adds one or more partitions with a new set of values.

compute | as

adds or drops a new computed column. Follow the same rules defined for the existing create table command and the alter table add rules.

computed_column_expression

is any valid Transact-SQL expression that does not contain columns from other tables, local variables, aggregate functions, or subqueries. It can be one or a combination of column name, constant, function, global variable, or case expression, connected by one or more operators. You cannot cross-reference between computed columns except when virtual computed columns reference materialized computed columns. An encrypted column cannot be referenced in a computed_column_expression.

materialized | not materialized

reserved keywords in the modify clause that specify whether the computed column is materialized, or physically stored in the table. By default, a computed column is not materialized (that is, not physically stored in the table). You can also use this piece of syntax to change the definitions of existing virtual computed columns; that is, to materialize them.

table_name drop partition partition_name [, partition_name]...

drops one or more list or range partitions. You cannot use alter table to drop a hash or round-robin partition.

For each partition you drop, Adaptive Server:

  • Deletes all data on the partition.

  • Deletes the partition definition from the system catalog.

  • Drops all corresponding local index partitions that refer to this data partition.

  • Regenerates the partition condition object of the base table and each local index.

  • Deletes all statistics information on this partition.

  • Rebuilds all global indexes.

Note: If you attempt to drop a partition from a table that is referenced by another table, and the partition to be dropped and the referencing table are not empty, the command fails because of possible violations of the foreign-key constraint, and Adaptive Server displays error message 13971.
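
For example, assuming titles is range-partitioned and has a partition named smallsales (as in Example 13), the following sketch drops that partition and all of its data:

alter table titles drop partition smallsales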

Examples

Example 1

Adds a column to a table. For each existing row in the table, Adaptive Server assigns a NULL column value:

alter table publishers 
add manager_name varchar (40) null

Example 2

Adds an IDENTITY column to a table. For each existing row in the table, Adaptive Server assigns a unique, sequential column value. The IDENTITY column can be type numeric or integer, with a scale of zero. The precision determines the maximum value (10^5 – 1, or 99,999) that can be inserted into the column:

alter table sales_daily
add ord_num numeric (5,0) identity

Example 3

Adds a primary key constraint to the authors table. If there is an existing primary key or unique constraint on the table, you must drop the existing constraint first (see next example):

alter table authors
add constraint au_identification
primary key (au_id, au_lname, au_fname)

Example 4

Drops the au_identification constraint:

alter table titles 
    drop constraint au_identification

Example 5

Creates an index on authors; the index has a reservepagegap value of 16, leaving 1 empty page in the index for each 15 allocated pages:

alter table authors
add constraint au_identification
primary key (au_id, au_lname, au_fname)
with reservepagegap = 16

Example 6

Removes the default constraint on the phone column in the authors table. If the column allows NULL values, NULL is inserted if no column value is specified. If the column does not allow NULL values, an insert that does not specify a column value fails:

alter table authors
    replace phone default null

Example 7

Modifies the emp table to encrypt the ssn column and specifies decrypt default:

alter table emp modify ssn encrypt with key1 
    decrypt_default '000-00-0000'

Example 8

Decrypts credit card data that is no longer sensitive:

alter table stolen_ccards
     modify ccard decrypt

If ccard was encrypted with a key protected by a user-defined password, precede this command with the set encryption passwd command.

Example 9

Adds an encrypted column to an existing table. Because the with key_name clause is omitted, Adaptive Server looks for the database default encryption key:

alter table sales_mgr
     add bonus money null encrypt

Example 10

Supplies the password for the ssn_key encryption key and encrypts the ssn column in the existing employee table:

set encryption passwd '4evermore' for key ssn_key
alter table employee modify ssn
         encrypt with ssn_key

If ssn in this example were an existing column encrypted with ‘key1’, the alter table command would cause Adaptive Server to decrypt ssn using ‘key1’ and re-encrypt it using ‘ssn_key’.

Example 11

Adds a decrypt default to the salary column, which is already encrypted:

alter table employee replace salary 
    decrypt_default $0.00

Example 12

Removes the decrypt default for salary without removing the encryption property:

alter table employee replace salary drop 
    decrypt_default

Example 13

Changes an unpartitioned table to a range-partitioned table with three partitions, each of which is on a different segment:

alter table titles partition by range (total_sales)
    (smallsales values <= (500) on seg1,
    mediumsales values <= (5000) on seg2,
    bigsales values <= (25000) on seg3)

Example 14

Adds another range partition to the titles table:

alter table titles add partition 
     (vbigsales values <= (40000) on seg4)

Example 15

Changes the locking scheme for the titles table to datarows locking:

alter table titles lock datarows

Example 16

Adds the not-null column author_type to the authors table with a default of primary_author:

alter table authors
    add author_type varchar (20)
    default "primary_author" not null

Example 17

Drops the advance, notes, and contract columns from the titles table:

alter table titles
    drop advance, notes, contract

Example 18

Modifies the city column of the authors table to be a varchar(30) with a default of NULL:

alter table authors
    modify city varchar (30) null

Example 19

Modifies the stor_name column of the stores table to be NOT NULL. Its datatype, varchar(40), remains unchanged:

alter table stores
    modify stor_name not null

Example 20

Modifies the type column of the titles table and changes the locking scheme of the titles table from allpages to datarows:

alter table titles
    modify type varchar (10)
    lock datarows

Example 21

Modifies the notes column of the titles table from varchar(200) to varchar(150), changes the column from null to not null, and specifies an exp_row_size of 40:

alter table titles
    modify notes varchar (150) not null
    with exp_row_size = 40

Example 22

Adds, modifies, and drops a column, and then adds another column in one query. Alters the locking scheme and specifies the exp_row_size of the new column:

alter table titles
    add author_type varchar (30) null
    modify city varchar (30)
    drop notes
    add sec_advance money default 1000 not null
    lock datarows
    with exp_row_size = 40

Example 23

Adds a virtual computed column:

alter table authors 
    add fullname compute au_fname + ' ' + au_lname

Example 24

Changes a virtual computed column to a materialized computed column:

alter table authors modify fullname materialized

Usage


Restrictions

WARNING! Do not alter the system tables.


alter table and encrypted columns

You cannot use alter table to encrypt or decrypt a column belonging to a clustered or placement index. To encrypt or decrypt this column, drop the index, alter the column, and re-create the index.

You cannot use alter table to encrypt or decrypt a column if the table has a trigger defined. To decrypt the column, drop the trigger, alter the column, then re-create the trigger.

You can encrypt these data types:

The underlying data type of encrypted data on disk is varbinary. Null values are not encrypted.

Modifying the data type of the encrypted column belonging to a clustered or placement index results in the index being out of order, and alter table displays an error. To modify the type:

  1. Drop the index

  2. Modify the type

  3. Recreate the index

alter table reports an error if you:

Note: Referential integrity between encrypted columns is supported when the columns are encrypted with the same key. For details, refer to the Encrypted Columns Guide, chapter “Encrypting Data.”


Getting information about tables


Specifying ascending or descending ordering in indexes


Using cross-database referential integrity constraints


Changing defaults


Setting space management properties for indexes


Conversion of max_rows_per_page to exp_row_size


Using reservepagegap


Partitioning tables for improved performance


Using computed columns


Adding IDENTITY columns


Altering table schema


Restrictions for modifying a table schema


Restrictions for modifying text and image columns


Modifying tables with unitext columns

The following restrictions apply when you use alter table to modify unitext columns:


Changing locking schemes


Adding Java-SQL columns

Standards

ANSI SQL – Compliance level: Transact-SQL extension.

See Chapter 1, “System and User-Defined Datatypes” in Reference Manual: Building Blocks for datatype compliance information.

Permissions

alter table permission defaults to the table owner; it cannot be transferred except to the Database Owner, who can impersonate the table owner by running the setuser command. A System Administrator can also alter user tables.

Auditing

Values in event and extrainfo columns of sysaudits are:

Event: 3

Audit option: alter

Command or access audited: alter table

Information in extrainfo:

  • Roles – Current active roles

  • Keywords or options – add column, drop column, modify column, replace column, add constraint, or drop constraint

  • Previous value – NULL

  • Current value – NULL

  • Other information – NULL

  • Proxy information – Original login name, if a set proxy is in effect

See also

Commands create index, create table, dbcc, drop database, dump transaction, insert, setuser

System procedures sp_chgattribute, sp_help, sp_helpartition, sp_rename