When you configure Adaptive Server for parallel query processing, the optimizer evaluates each query to determine whether it is eligible for parallel execution. If it is, the query is divided into components that are processed simultaneously; the results are combined and returned to the client faster than if the query had been processed serially as a single unit.
For the same throughput, processing a query in parallel requires more work from Adaptive Server and more resources than processing it serially. It also involves more complex trade-offs to achieve optimal performance. Fully enabled parallel query processing requires multiple processes, engines, and partitions, resulting in increased overhead for Adaptive Server, additional CPU requirements, and increased disk I/O.
You can configure various levels of parallelism, each providing a performance gain and requiring corresponding trade-offs in physical resources. Chapter 13, “Introduction to Parallel Query Processing,” in the Performance and Tuning Guide introduces the Adaptive Server parallel query processing model and concepts. It also discusses the trade-offs between resources and performance gains for different levels of parallelism.
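As a brief, hedged illustration of one such trade-off (the table name and partition count here are hypothetical, and the syntax should be verified against your server version), partitioning a table is one way to raise its level of parallelism at the cost of additional devices and worker processes:

```sql
-- Hypothetical example: slice the sales table into four partitions so
-- that a partition-based parallel scan can assign one worker process
-- per partition. For balanced I/O, the partitions should be spread
-- across multiple devices (see "Controlling Physical Data Placement").
alter table sales partition 4

-- Force a degree of parallelism for a single query; normally the
-- optimizer chooses the degree itself, within the configured limits.
select count(*) from sales (parallel 4)
```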
In the Performance and Tuning Guide, see:
Chapter 5, “Locking in Adaptive Server,” for information on how Adaptive Server supports locking for parallel query execution.
Chapter 9, “Understanding Query Plans,” for information on new showplan messages added for parallel query execution.
Chapter 14, “Parallel Query Optimization,” for details on how the Adaptive Server optimizer determines eligibility for parallel execution.
Chapter 15, “Parallel Sorting,” for information on how Adaptive Server performs parallel sort operations.
Chapter 17, “Controlling Physical Data Placement,” for information on partitioned tables, creating clustered indexes on partitioned tables, and parallel processing.
In the System Administration Guide, see:
Chapter 11, “Setting Configuration Parameters,” for details on the configuration parameters that enable and control parallel query processing, and the trade-offs in physical resources that each setting entails.
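As a minimal sketch of server-wide configuration, the following sp_configure commands enable parallel query processing. The parameter names are the standard Adaptive Server parallelism parameters; the values shown are illustrative only and should be tuned for your workload:

```sql
-- Reserve a server-wide pool of worker processes (default is 0,
-- which disables parallel query processing entirely).
sp_configure "number of worker processes", 20

-- Upper limit on the number of worker processes any one query can
-- use. A value of 1 disables parallel execution.
sp_configure "max parallel degree", 4

-- Upper limit on worker processes for a hash-based scan of a
-- nonpartitioned table or index.
sp_configure "max scan parallel degree", 2
```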