TechTip: DB2 for i Optimization Strategies, Part 5

by Victoria Mack, August 12, 2016

Learn how to take an ax to common suboptimal SQL programming practices.

 

This series is almost done, yet there are still many examples of potential code improvements to offer the DB2 for i development community. This tip discusses possible performance improvements from replacing legacy cursors with set operations and from using temporary tables wisely. See the references at the end of this tip for links to prior tips in this series.

 

Avoid Unnecessary Cursors

A cursor is a control structure that allows database code to process a result set row by row. However, database engines, including DB2, are optimized to process data in sets rather than row by row. In most cases, row-by-row processing with SQL should be avoided.

 

In the early years of DB2 for i SQL, many operations required row-by-row processing. However, as the language gained new features, the need to use cursors to accomplish a task diminished greatly.

 

To illustrate, consider a temporary table that stores a schedule of important events:

 

DECLARE GLOBAL TEMPORARY TABLE EVENT_SCHEDULE (
    EVENT_ID INT,
    EVENT_STAMP TIMESTAMP NOT NULL,
    EVENT_DUR INT,
    EVENT_DESC VARCHAR(50),
    EVENT_INFO VARCHAR(2048))
WITH REPLACE;

 

For the sake of example, say a stored procedure is required to populate this temporary table from a show schedule table. Thereafter, the EVENT_ID column should be assigned a sequential number based on the sort order of the EVENT_STAMP column. Further, EVENT_ID must be reset to one whenever a new year is encountered. Here’s a procedure that does this work:

 

DECLARE @ROW_ID INT NOT NULL DEFAULT 0;
DECLARE @END_OF_DATA CHAR(1) DEFAULT 'N';
DECLARE @PRV_YEAR INT NOT NULL DEFAULT 0;
DECLARE @EVENT_ID INT;
DECLARE @EVENT_STAMP TIMESTAMP;

DECLARE TEMP CURSOR FOR
SELECT EVENT_ID,EVENT_STAMP
  FROM SESSION.EVENT_SCHEDULE
 ORDER BY EVENT_STAMP
   FOR UPDATE OF EVENT_ID;

DECLARE CONTINUE HANDLER FOR NOT FOUND
   SET @END_OF_DATA='Y';

-- Populate data in temporary table
INSERT INTO SESSION.EVENT_SCHEDULE
SELECT 0 AS EVENT_ID,
       SHOW_TIME,
       SHOW_DUR,
       SHOW_DESC,
       SHOW_INFO
  FROM SHOW_SCHEDULE
 WHERE CATEGORY='FAMILY'
 ORDER BY SHOW_TIME;

-- Assign a sequential event id to each row
OPEN TEMP;
FETCH TEMP INTO @EVENT_ID,@EVENT_STAMP;
WHILE @END_OF_DATA='N' DO
   IF YEAR(@EVENT_STAMP)<>@PRV_YEAR THEN
       SET @PRV_YEAR=YEAR(@EVENT_STAMP);
       SET @ROW_ID=0;
   END IF;
   SET @ROW_ID=@ROW_ID+1;

   UPDATE SESSION.EVENT_SCHEDULE
      SET EVENT_ID=@ROW_ID
    WHERE CURRENT OF TEMP;

   FETCH TEMP INTO @EVENT_ID,@EVENT_STAMP;
END WHILE;

CLOSE TEMP;

 

 

 

The cursor in the example does nothing except read the temporary table and assign a sequential number to the EVENT_ID column, with the EVENT_ID being reset with each new calendar year encountered. However, since V5R4, there has been an easy way to accomplish this same task without a cursor by using the ROW_NUMBER function:

 

 

 

INSERT INTO SESSION.EVENT_SCHEDULE
SELECT ROW_NUMBER()
       OVER(PARTITION BY YEAR(SHOW_TIME)
            ORDER BY SHOW_TIME) AS EVENT_ID,
       SHOW_TIME,
       SHOW_DUR,
       SHOW_DESC,
       SHOW_INFO
  FROM SHOW_SCHEDULE
 WHERE CATEGORY='FAMILY'
 ORDER BY SHOW_TIME;

 

 

 

Here, the ROW_NUMBER function does the same work that the cursor loop accomplished: It provides a sequential row number for the EVENT_ID column. The PARTITION BY causes the numbering sequence to be reset at the start of each new year. All of the work is done with the INSERT statement, and the looping logic and single row updates associated with the cursor are eliminated.

 

 

 

There are still some cases where a cursor is useful, such as calling a stored procedure for each row in a result set or bringing data into a COBOL program for legacy report generation. However, it’s still a good practice to review legacy cursor code to see if there are cursors that are eligible to be refactored as a set operation.
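For reference, a legitimate cursor use might look like this minimal SQL PL sketch. The PROCESS_EVENT procedure and variable names are hypothetical, for illustration only:

```sql
-- Hypothetical example: a cursor remains appropriate when each row must be
-- passed to a stored procedure (PROCESS_EVENT is an illustrative name).
BEGIN
   DECLARE @DONE CHAR(1) DEFAULT 'N';
   DECLARE @ID INT;
   DECLARE C1 CURSOR FOR
      SELECT EVENT_ID FROM SESSION.EVENT_SCHEDULE;
   DECLARE CONTINUE HANDLER FOR NOT FOUND SET @DONE='Y';

   OPEN C1;
   FETCH C1 INTO @ID;
   WHILE @DONE='N' DO
      CALL PROCESS_EVENT(@ID);  -- per-row work a set operation can't express
      FETCH C1 INTO @ID;
   END WHILE;
   CLOSE C1;
END
```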

 

 

 

 

Incidentally, in IBM i 7.3, there are several new OLAP functions and features that allow developers to do marvelous things, such as create running total columns, calculate averages or percentages on a subset of rows within the result set, and look back or peek ahead to other row values within the result set. All of these features greatly reduce the number of circumstances where cursor logic is required.
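To sketch a couple of those capabilities against the event table from earlier (an illustrative query; consult the IBM i 7.3 documentation for the complete OLAP syntax):

```sql
-- Running total and prior-row lookback without a cursor
-- (illustrative; requires IBM i 7.3 or later for LAG).
SELECT EVENT_STAMP,
       EVENT_DUR,
       SUM(EVENT_DUR)                         -- running total of duration
          OVER(ORDER BY EVENT_STAMP
               ROWS UNBOUNDED PRECEDING) AS RUNNING_DUR,
       LAG(EVENT_STAMP)                       -- peek at the prior row's value
          OVER(ORDER BY EVENT_STAMP) AS PRIOR_EVENT_STAMP
FROM SESSION.EVENT_SCHEDULE
ORDER BY EVENT_STAMP;
```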

 

Keep Temporary Tables Small and Filter, Filter, Filter

One important principle of using temporary tables is to keep their content to a minimum, both in terms of row count (filtering) and row width. Row width refers to the average number of bytes needed to store a single row, which depends on the number of columns and the data types chosen in the table definition.

 

What developers often forget is that a temporary table is supposed to give DB2 a break by serving as a small scratchpad. However, if a developer builds a huge temporary table, this work area will hog plenty of resources and offset any performance benefit that might be had by using it rather than directly accessing the application’s tables.

 

Let me illustrate one way that I see this principle ignored. In many reporting scenarios, a stored procedure is written to do complex data gathering and calculations for a report. The generic code flow is as follows:

  1. Query application tables and insert required data into temporary table.

  2. Perform complex operations on temporary table and update it with results.

  3. Return results to caller (based primarily on the temporary table’s content) for report rendering.

 

Say you’re tasked with writing an ugly report that tries to calculate the profit on items sold. For purchased items, there’s logic that attempts to correlate a sales order line amount with an averaged purchase order cost (or perhaps a more complicated calculation such as a FIFO cost). For manufactured items, there’s logic that attempts to correlate the sales order line amount with the cost of the work order(s) that built the item(s). In the end, the cost is deducted from the sales amount to determine the profit.

 

The first step in this process is to gather the data into a temporary table similar to the statement shown below:

 

DECLARE GLOBAL TEMPORARY TABLE SalesData
AS (
SELECT sod.SalesOrderId,sod.SalesOrderDetailId,sod.CarrierTrackingNumber,
       sod.OrderQty,sod.ProductId,sod.UnitPrice,
       soh.PurchaseOrderNumber,soh.AccountNumber,soh.CustomerId,
       soh.SalesPersonId,soh.TerritoryId,
       st.Name AS StoreName,
       t.Name AS TerritoryName,t.Group AS TerritoryGroup,
       p.Name AS ProductName,p.ProductNumber,p.ListPrice,p.Color,
       CAST(NULL AS INT) AS NumberWorkOrders,
       CAST(NULL AS INT) AS NumberScrapItems,
       CAST(NULL AS DEC(19,4)) AS ScrapCost,
       CAST(NULL AS INT) AS LaborHours,
       CAST(NULL AS DEC(19,4)) AS AvgLaborCost,
       CAST(NULL AS DEC(19,4)) AS AvgMaterialCost,
       CAST(NULL AS INT) AS FirstWorkOrderId,
       CAST(NULL AS INT) AS LastWorkOrderId,
       CAST(NULL AS TIMESTAMP) AS FirstWorkOrderStartDate,
       CAST(NULL AS TIMESTAMP) AS LastWorkOrderStartDate,
       CAST(NULL AS INT) AS NoDaysToBuild,
       CAST(NULL AS DEC(38,4)) AS AvgPOCost,
       CAST(NULL AS INT) AS FirstPurchaseOrderId,
       CAST(NULL AS INT) AS LastPurchaseOrderId,
       CAST(NULL AS DATE) AS FirstPurchaseOrderDate,
       CAST(NULL AS DATE) AS LastPurchaseOrderDate
FROM SalesOrderDetail sod
JOIN SalesOrderHeader soh ON soh.SalesOrderId=sod.SalesOrderId
JOIN SalesTerritory t ON t.TerritoryId=soh.TerritoryId
JOIN Customer c ON c.CustomerId=soh.CustomerId
JOIN Store st ON st.BusinessEntityId=c.StoreId
JOIN Product p ON p.ProductId=sod.ProductId
WHERE soh.OrderDate BETWEEN '2005-01-01' AND '2008-12-31'
)
WITH DATA
WITH REPLACE;

 

The columns initialized with NULLs (NumberWorkOrders through LastPurchaseOrderDate) represent storage for the calculations that need to be made and populated in the table. Next, consider the descriptive and identifying columns pulled from the joined tables, such as CarrierTrackingNumber, StoreName, TerritoryName, TerritoryGroup, ProductName, and Color. This is the mistake that I often see: Temporary tables are asked to store too much data at the first step in the process. These columns are either not used in the report (SalesOrderDetailId) or are prematurely collected at this step.

 

Why do I say “prematurely”? Because the hypothetical stored procedure will have to process this table many times while doing the purchase order cost or manufacturing cost calculations. These extra columns add to the row width and therefore require disk and memory to store and process. If there are tens or hundreds of thousands of rows in the temporary table, that equates to more disk I/Os, a larger memory requirement to process the data, and more CPU time to process all of the rows in the table.

 

To improve efficiency, do not store columns like names and descriptions in your temporary tables unless the tables will be relatively small or the data is required for immediate use. In the above scenario, I wouldn’t retrieve these descriptive columns until the final step of the procedure that returns the result set to the caller.
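A sketch of that final step: carry only keys and calculated values through the intermediate processing, then join back to the base tables when returning the result set (column names follow the example above; the exact list would depend on the report):

```sql
-- Final step: retrieve descriptive columns only when returning results,
-- instead of carrying them through every pass over the temporary table.
SELECT sd.SalesOrderId, sd.OrderQty, sd.UnitPrice,
       sd.AvgPOCost, sd.AvgLaborCost, sd.AvgMaterialCost,
       p.Name AS ProductName, p.ProductNumber, p.Color,
       t.Name AS TerritoryName
FROM SESSION.SalesData sd
JOIN Product p ON p.ProductId=sd.ProductId
JOIN SalesTerritory t ON t.TerritoryId=sd.TerritoryId
ORDER BY sd.SalesOrderId;
```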

 

One other item to consider about row width is that appropriately sized data types can help as well. Consider these column definitions:

 

CAST(NULL AS TIMESTAMP) AS FirstWorkOrderStartDate,
CAST(NULL AS TIMESTAMP) AS LastWorkOrderStartDate,
CAST(NULL AS INT) AS NoDaysToBuild,
CAST(NULL AS DEC(38,4)) AS AvgPOCost,

 

The developer may have picked these data types because they matched what was in the base table. However, many potential improvements can be made, such as changing a TIMESTAMP to a DATE (assuming the time portion isn’t required for the final report) and changing the “average PO cost” and “number of days to build” to reasonable sizes:

 

CAST(NULL AS DATE) AS FirstWorkOrderStartDate,
CAST(NULL AS DATE) AS LastWorkOrderStartDate,
CAST(NULL AS SMALLINT) AS NoDaysToBuild,
CAST(NULL AS DEC(13,4)) AS AvgPOCost,

 

By my calculation, the illustrated change in data types will save somewhere in the neighborhood of 30 bytes/row. I’ll discuss the importance of choosing appropriate data types in more detail in a future tip. For now, just note that reducing the number of bytes per row can allow the database engine to cram more rows on a single disk page, which also translates to lower memory and I/O usage when processing the table. For a table with a large number of columns and rows, this data type optimization can yield large benefits.
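As a rough accounting of that estimate (internal storage sizes vary by platform and release, so treat these byte counts as approximations):

```
TIMESTAMP (~10 bytes) -> DATE (4 bytes):      2 columns x 6 = 12 bytes
INT (4 bytes) -> SMALLINT (2 bytes):                           2 bytes
DEC(38,4) (~20 bytes) -> DEC(13,4) (~7 bytes):                13 bytes
                                              Total         ~27 bytes/row
```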

 

At election time, the joke “vote early, vote often” is often repeated. I have a similar phrase for SQL coding: “filter early, filter often,” which means filter as much data as possible as early as possible within the code.

 

Another suboptimal programming technique I see with temporary tables is removing data from the table just after DB2 took the time to collect and insert it! After an INSERT into a temporary table, sometimes I’ll see statements like this immediately following:

 

-- Remove misc items from temporary table
DELETE FROM session.SalesData s
WHERE EXISTS (
   SELECT *
     FROM Product p
    WHERE p.ProductId=s.ProductId
      AND p.ProductType='M' -- Misc Item
);

 

This is a somewhat trivial example, but it does occur in production programs. The INSERT statement itself should filter out these rows! Besides making DB2 insert and then delete data, the deletions can make subsequent processing less efficient if enough rows are removed.
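For instance, the filter can be folded into the original INSERT so the misc rows never land in the table. A sketch based on the statements above (the column list is abbreviated for illustration; the real statement would supply every SalesData column):

```sql
-- Exclude misc items at insert time instead of deleting them afterward.
INSERT INTO session.SalesData
   (SalesOrderId, ProductId, OrderQty, UnitPrice)  -- abbreviated list
SELECT sod.SalesOrderId, sod.ProductId, sod.OrderQty, sod.UnitPrice
FROM SalesOrderDetail sod
JOIN SalesOrderHeader soh ON soh.SalesOrderId=sod.SalesOrderId
JOIN Product p ON p.ProductId=sod.ProductId
WHERE soh.OrderDate BETWEEN '2005-01-01' AND '2008-12-31'
  AND p.ProductType<>'M';   -- misc items never enter the table
```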

 

Recall that the database engine stores row data in “pages” on disk. If these disk pages are full, DB2 is more efficient because each page read delivers a maximal amount of row data to process. If, for the sake of argument, 20 percent of the rows are later removed with a DELETE statement, then each page DB2 subsequently reads from disk yields only 80 percent of the rows it could have held had the page been completely full. Again, this scenario isn’t much of a concern if the table is relatively small.

 

I would only consider the use of a separate DELETE statement if:

  • the criteria were so convoluted that it would make the INSERT statement very difficult to understand

  • it would hinder DB2 from creating an efficient access plan for the INSERT statement.

 

Allow DB2 to Do Easy Lifting

If your boss asks you to move a load of bricks from his car to the office basement (while still doing your normal work), would you hope for a big pile or a small pile? Obviously, if it’s a large pile, it’s gonna take a while and potentially delay completion of your other essential tasks. Likewise with DB2, the fewer pages (aka bricks) of data it has to process, the faster it can get the job done and serve other requests.

 

Avoiding cursor usage and optimizing temporary table usage are two good ways of giving DB2 for i “less to lift” in order to get its job done.

 

References

DB2 for i Optimization Strategies, Part 1

DB2 for i Optimization Strategies, Part 2

DB2 for i Optimization Strategies, Part 3

DB2 for i Optimization Strategies, Part 4

 

About the author: Michael Sansoterra




