Small Intersection, Indirect Broad Criteria

An indirect criterion is one that applies to a column in a table that you are joining only for the purpose of evaluating the criterion. The retrieval of a small result set through the intersection of two or more broad criteria, as in the previous situation "Small Intersection of Broad Criteria," is often a formidable assignment. Obtaining the intersection of the large intermediary result sets by joining from a central table, or even through a chain of joins, makes a difficult situation even more daunting. This situation is particularly typical of the "star schema" that I discuss in some detail in Chapter 10, but you'll also encounter it fairly frequently in operational databases. When you are looking for that rare combination of multiple nonselective conditions on the columns of the row, you must expect to perform full scans at some point. The case becomes particularly interesting when several tables are involved.

The DBMS engine needs to start somewhere. Even if it can process data in parallel, at some point it has to start with one table, index, or partition. Even if the result set defined by the intersection of several huge sets of data is very small, a bootstrapping full table scan, and possibly two scans, will be required, with a nested loop, hash join, or merge join performed on the result. The difficulty will then be to identify which combination of tables (not necessarily the smallest ones) will result in the fewest rows from which the final result set will be extracted. In other words, we must find the weakest point in the line of defense, and once we have eliminated it, we must concentrate on obtaining the final result set.

Let me illustrate such a case with a real-life Oracle example. The original query is fairly complicated, with two tables each appearing twice in the from clause. Although none of the tables is really enormous (the biggest one contains about 700,000 rows), the problem is that none of the nine parameters passed to the query is really selective:

    select (select list)
    from ttraoma,
         topeoma,
         tbooks,
         ttypobj,
         tpdt,
         trgppdt,
         ttrcapp ttrcap_a,
         ttrcapp ttrcap_b,
         tstg tstg_a,
         ttex ttex_a,
         ttex ttex_b
    where ( ttraoma.txnum = topeoma.txnum )
      and ( ttraoma.bkcod = tbooks.trscod )
      and ( ttex_b.trscod = tbooks.permor )
      and ( ttraoma.trscod = ttrcap_a.valnumcod )
      and ( ttex_a.nttcod = ttrcap_b.valnumcod )
      and ( ttypobj.objtyp = ttraoma.objtyp )
      and ( ttraoma.trscod = ttex_a.trscod )
      and ( ttrcap_a.colcod = :0 ) -- not selective
      and ( ttrcap_b.colcod = :1 ) -- not selective
      and ( ttraoma.pdtcod = tpdt.pdtcod )
      and ( tpdt.risktyp = trgppdt.risktyp )
      and ( tpdt.riskflg = trgppdt.riskflg )
      and ( tpdt.pdtcod = trgppdt.pdtcod )
      and ( trgppdt.risktyp = :2 ) -- not selective
      and ( trgppdt.riskflg = :3 ) -- not selective
      and ( ttraoma.txnum = tstg_a.txnum )
      and ( ttrcap_a.refcod = :5 ) -- not selective
      and ( ttrcap_b.refcod = :6 ) -- not selective
      and ( tstg_a.risktyp = :4 ) -- not selective
      and ( tstg_a.chncod = :7) -- not selective
      and ( tstg_a.stgnum = :8 ) -- not selective

When run with suitable parameters (here indicated as :0 to :8), the query takes more than 25 seconds to return fewer than 20 rows, doing about 3,000 physical I/Os and hitting data blocks 3,000,000 times. Statistics correctly represent the actual contents of tables (one of the very first things to check), and a query against the data dictionary gives the number of rows of the tables involved:
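On Oracle, where this example ran, such a dictionary query might look like the following sketch; user_tables is the standard dictionary view, and its num_rows column reflects the last statistics gathering, which we have just verified to be accurate:

    select table_name, num_rows
    from user_tables
    order by num_rows;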

    TABLE_NAME                    NUM_ROWS
    --------------------------- ----------
    ttypobj                           186
    trgppdt                           366
    tpdt                             5370
    topeoma                         12118
    ttraoma                         12118
    tbooks                          12268
    ttex                           102554
    ttrcapp                        187759
    tstg                           702403

A careful study of the tables and of their relationships allows us to draw the enemy position shown in the next figure, with our weak criteria represented as small arrows, and tables as boxes whose size roughly indicates the number of rows. One thing is especially remarkable: the central position of the ttraoma table, which is linked to almost every other table. Unfortunately, all of our criteria apply elsewhere. An interesting fact to notice is that we are providing two values to match columns risktyp and riskflg of trgppdt, which is joined to tpdt on those very two columns, plus pdtcod. In such a case, it can be worth contemplating reversing the flow: for example, comparing the columns of tpdt to the constants provided, and only then pulling the data from trgppdt.

The enemy position

Most DBMS allow you to check the execution plan chosen by the optimizer, either through the explain command or sometimes by directly checking in memory how something has been executed. When this query took 25 seconds, the plan, although not especially atrocious, was mostly a full scan of ttraoma followed by a series of nested loops, using the various indexes available rather efficiently (it would be tedious to detail the numerous indexes, but suffice it to say that all columns we are joining on are correctly indexed). Is this full scan the reason for slowness? Definitely not. A simple test, fetching all the rows of ttraoma (without displaying them, to avoid the time associated with displaying characters on a screen), proves that it takes just a tiny, hardly measurable fraction of the elapsed time for the overall query.
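On Oracle, for instance, one common way to display the plan is the explain plan statement combined with the dbms_xplan package (the text of the query under study is elided here, as in the listings above):

    explain plan for
    select ... ;       -- the full query under study

    select * from table(dbms_xplan.display);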

When we consider the weak criteria we have, our forces are too feeble for a frontal attack against tstg, the bulk of the enemy troops, and even ttrcap won't lead us very far, because we have poor criteria against each instance of this table, which intervenes twice in the query. However, it should be obvious that the key position of ttraoma, which is relatively small, makes an attack against it, as a first step, quite sensible, precisely the decision that the optimizer makes without any prompting.

If the full scan is not to blame, then where did the optimizer go wrong? Have a look at the next figure, which represents the query as it was executed.

What the optimizer chose to do

When we check the order of operations, it all becomes obvious: our criteria look so bad, at face value, that the optimizer chose to ignore them altogether. Starting with a pretty reasonable full scan of ttraoma, it then chose to visit all the smallish tables gravitating around ttraoma before ending with the tables to which our filtering criteria apply. This approach is the mistake. The indexes of the tables visited first probably look much more efficient to the optimizer, perhaps because of a lower average number of table rows per key, or because the indexes more closely match the order of the rows in the tables. But postponing the application of our criteria is not how we cut down on the number of rows we have to process and check.

Once we have taken ttraoma and hold the key position, why not go on with the tables against which we have criteria instead? The join between those tables and ttraoma will help us eliminate unwanted rows from ttraoma before proceeding to apply joins with the other tables. This is a tactic that is likely to pay dividends since (and this is information we have but that is unknown to the optimizer) we know we should have, in all cases, very few resulting rows, which means that our combined criteria should, through the joins, inflict heavy casualties among the rows of ttraoma. Even when the number of rows to be returned is larger, the execution path I suggest should still remain relatively efficient.

How then can we force the DBMS to execute the query as we want it to? It depends on the SQL dialect. As you'll see in Chapter 11, most SQL dialects allow directives, or hints, to the optimizer, although each dialect uses a different syntax for such hints: telling the optimizer, for instance, to take on the tables in the same order as they are listed in the from clause. The trouble with hints is that they are more imperative than their name suggests, and every hint is a gamble on the future: a bet that circumstances, volumes, database algorithms, hardware, and the rest will evolve in such a way that our forced execution path will forever remain, if not absolutely the best, at least acceptable.

In the particular case of our example, since nested loops using indexes are the most efficient choice, and because nested loops don't really benefit from parallelism, we take a rather small risk concerning the future evolution of our tables by ordering tables as we want them processed and instructing the optimizer to obey. Explicitly forcing the order in which to visit tables was the approach actually taken in this real-life case. The query then ran in a little less than one second, with hardly fewer physical I/Os than before (2,340 versus 3,000, which is not too surprising, since we start with a full scan of the very same table); but because we "suggested" a more efficient path, logical I/Os fell dramatically, from over 3,000,000 down to 16,500, with a noticeable effect on the response time.
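To give a concrete, Oracle-flavored idea of this heavy-handed approach, the ordered hint makes the optimizer join the tables in the order in which the from clause lists them; the rewritten query might begin like the following sketch, with the from clause reordered to start from ttraoma and the tables that carry our filters (the select list and the remainder of the query are elided as in the listings above):

    select /*+ ordered */ (select list)
    from ttraoma,           -- drive from the central, small table
         tstg tstg_a,       -- then the tables our criteria apply to
         ttrcapp ttrcap_a,
         ...
    where ...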

Remember that you should heavily document anything that forces the hand of the DBMS.

Explicitly forcing the order in which to visit tables by using optimizer directives is a heavy-handed approach. A more gentle way to obtain the same result from the optimizer, provided that it doesn't savagely edit our SQL clauses, may be to nest queries in the from clause, thus suggesting associations like parentheses would in a numerical expression:

    select (select list)
    from (select ttraoma.txnum,
                 ttraoma.bkcod,
                 ttraoma.trscod,
                 ttraoma.pdtcod,
                 ttraoma.objtyp
          from ttraoma,
               tstg tstg_a,
               ttrcapp ttrcap_a
          where tstg_a.chncod = :7
            and tstg_a.stgnum = :8
            and tstg_a.risktyp = :4
            and ttraoma.txnum = tstg_a.txnum
            and ttrcap_a.colcod = :0
            and ttrcap_a.refcod = :5
            and ttraoma.trscod = ttrcap_a.valnumcod) a,
         ttex ttex_a,
         ttrcapp ttrcap_b,
         ttex ttex_b,
         topeoma,
         tbooks,
         ttypobj,
         tpdt,
         trgppdt
    where ( a.txnum = topeoma.txnum )
     and ( a.bkcod = tbooks.trscod )
     and ( ttex_b.trscod = tbooks.permor )
     and ( ttex_a.nttcod = ttrcap_b.valnumcod )
     and ( ttypobj.objtyp = a.objtyp )
     and ( a.trscod = ttex_a.trscod )
     and ( ttrcap_b.colcod = :1 )
     and ( a.pdtcod = tpdt.pdtcod )
     and ( tpdt.risktyp = trgppdt.risktyp )
     and ( tpdt.riskflg = trgppdt.riskflg )
     and ( tpdt.pdtcod = trgppdt.pdtcod )
     and ( tpdt.risktyp = :2 )
     and ( tpdt.riskflg = :3 )
     and ( ttrcap_b.refcod = :6 )

It is often unnecessary to be very specific about the way we want a query to be executed and to multiply esoteric hints; the right initial guidance is usually enough to put an optimizer on the right track. Nested queries making explicit some table associations have the further advantage of being quite understandable to a qualified human reader.

A muddled query can confuse the optimizer. Clarity and properly suggested joins are often enough to help the optimizer provide good performance.
