When Adaptive Server reads a page from a compressed dump, it selects the compressed block containing that page, decompresses the block, and extracts the required page. Decompression is performed in large buffers allocated from a special memory pool. The size of the pool is configured using:
sp_configure 'compression memory size', size
This is a dynamic configuration parameter, and the size is given in 2KB pages. If size is set to 0, no pool is created and a compressed dump cannot be loaded.
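For example, a 128KB pool (64 pages of 2KB each) might be created as follows; the value 64 is illustrative only, and the factors below should determine the actual size for your site:

```sql
-- Illustrative value: a 128KB decompression pool (64 pages of 2KB each).
-- 'compression memory size' is dynamic, so the change takes effect
-- immediately, without a server restart.
sp_configure 'compression memory size', 64
```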
To determine the optimal size for your pool, consider these two factors:
The block I/O used by the Backup Server. By default, this block I/O is 64KB, but it could have been changed using the with blocksize option in the dump database command.
The number of concurrent users decompressing blocks within all archive databases. Each concurrent user requires two buffers, each the same size as the block I/O.
As an absolute minimum, allow for one concurrent user (that is, two buffers) per archive database.
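Putting the two factors together, a sizing sketch might look like the following; the block I/O size and user count here are assumptions for illustration:

```sql
-- Worked example (illustrative values):
--   Backup Server block I/O: 64KB (the default; it may differ if
--   dump database was run with the 'with blocksize' option)
--   Concurrent users decompressing blocks, all archive databases: 3
--
-- Memory needed = 64KB per buffer * 2 buffers per user * 3 users
--               = 384KB
--               = 192 pages of 2KB each
sp_configure 'compression memory size', 192
```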