Risks

To denormalize safely, you need a thorough knowledge of the application. In addition, denormalize only when performance problems indicate that it is needed.

For example, the ytd_sales column in the titles table of the pubs2 database is a denormalized column that is maintained by a trigger on the salesdetail table. You can obtain the same values using this query:

select title_id, sum(qty)
    from salesdetail
    group by title_id

Obtaining the summary values together with the book title requires a join with the titles table:

select title, sum(qty)
    from titles t, salesdetail sd
    where t.title_id = sd.title_id
    group by title
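
Once the denormalized ytd_sales column is maintained in titles, the application can read the stored aggregate directly, avoiding both the join and the grouping:

select title, ytd_sales
    from titles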

If you run this query frequently, it makes sense to denormalize by storing the aggregate in the titles table. But there is a price to pay: you must create an insert/update/delete trigger on the salesdetail table to maintain the aggregate values in the titles table.

Executing the trigger and applying the changes to titles adds processing cost to every data modification that changes the qty column in salesdetail.
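
The following is a minimal sketch of such a trigger in Transact-SQL. The trigger name is hypothetical, and handling for titles rows with a null ytd_sales value is omitted; the inserted and deleted pseudotables are the standard trigger work tables:

create trigger tr_salesdetail_ytd
    on salesdetail
    for insert, update, delete
as
    /* add quantities from newly inserted or updated rows */
    update titles
        set ytd_sales = ytd_sales + (select sum(qty)
                from inserted
                where inserted.title_id = titles.title_id)
        where titles.title_id in (select title_id from inserted)

    /* subtract quantities from deleted rows or pre-update values */
    update titles
        set ytd_sales = ytd_sales - (select sum(qty)
                from deleted
                where deleted.title_id = titles.title_id)
        where titles.title_id in (select title_id from deleted)

For an update of qty, both pseudotables are populated, so subtracting the old values and adding the new ones yields the correct net total.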

This situation is a good example of the tension between decision support applications, which frequently need summaries of large amounts of data, and transaction processing applications, which perform discrete data modifications.

Denormalization usually favors one form of processing at a cost to others.

Any form of denormalization has the potential for data integrity problems that you must document carefully and address in application design.