Joined: 29 Nov 2002 Posts: 382 Topics: 27 Location: Chennai, India
Posted: Sat Dec 14, 2002 2:02 am Post subject: CICS-DB2 Problem
Hi all,
One of my colleagues asked me a question which has to do with both CICS and DB2. So, instead of putting it in either the CICS or the DB2 forum, I have put it here. OK, so here goes the problem.
Suppose we have 100 rows in a DB2 table. We have to display all 100 rows, 10 at a time, on a CICS screen, and we have to code the CICS-DB2 program pseudo-conversationally. Now, how do we go about doing it?
My answer was that we declare and open a cursor, retrieve the first 10 records, and display them on a map using the SEND MAP command. After they have been displayed, we have to end the task, as the program is pseudo-conversational, and when the task ends, the cursor also closes. So before ending the task, we store the last displayed key somewhere (in a TSQ, or pass it through the COMMAREA). To display the next 10 records in the next task, we again open the cursor, retrieve the records from the beginning, ignore the first 10 records, and display the next 10. This process continues.
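A minimal sketch of that task-ending step, assuming COBOL; the map PAGEMAP/PAGESET, transid PAG1, and all field names here are made up:

* Save the last displayed key in the COMMAREA and end the task;
* the next invocation of transid PAG1 picks it up from DFHCOMMAREA.
MOVE WS-LAST-DISPLAYED-KEY TO CA-LAST-KEY.
EXEC CICS SEND MAP('PAGEMAP') MAPSET('PAGESET') ERASE
END-EXEC.
EXEC CICS RETURN TRANSID('PAG1')
     COMMAREA(WS-COMMAREA)
     LENGTH(LENGTH OF WS-COMMAREA)
END-EXEC.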
Do you guys have any better idea on how to do this while still maintaining the pseudo-conversation? If we had VSAM files instead of DB2, it would have been a lot simpler: on each task, we could have jumped straight to the last displayed record and continued browsing from there. With DB2 cursors, there is no way of skipping ahead to a position, and that is the problem.
Joined: 13 Dec 2002 Posts: 45 Topics: 3 Location: 3rd Block from the SUN
Posted: Sat Dec 14, 2002 4:04 am Post subject:
You could use scrollable cursors. The functionality and syntax are described below.
I hope you find this helpful.
-----------------------------------------------------------------------------------
Scrollable Cursors
Probably the most significant new application development enhancement made to DB2 V7 is scrollable cursors. A scrollable cursor provides the ability to scroll forward and backward through the data once the cursor is open. This can be achieved using nothing but SQL - no host language code (e.g., COBOL, C) is required to facilitate a scrollable cursor in DB2 V7. A scrollable cursor makes navigating through SQL result sets much easier. There are two types of DB2 scrollable cursors:
SENSITIVE -- updateable; can access data changed by the user or other users
INSENSITIVE -- not updateable; will not show changes made
To use scrollable cursors you must use declared temporary tables, another new feature of DB2 Version 7. Declared temporary tables are discussed later in the section on data management. DB2 uses a declared temporary table to hold and maintain the data returned by a scrollable cursor.
Scrollable cursors allow developers to move through the results of a query in multiple ways. The following key words are supported when fetching data from a scrollable cursor:
NEXT - will FETCH the next row, the same way that the pre-V7 FETCH statement functioned
PRIOR - will FETCH the previous row
FIRST - will FETCH the first row in the results set
LAST - will FETCH the last row in the results set
CURRENT - will re-FETCH the current row from the result set
BEFORE - positions the cursor before the first row of the results set
AFTER - positions the cursor after the last row of the results set
ABSOLUTE n - will FETCH the nth row of the results set (so ABSOLUTE 1 is the first row)
RELATIVE n - will FETCH the row that is n rows away from the last row fetched
For both ABSOLUTE and RELATIVE, the number n must be an integer. It can be either a positive or a negative number, and it can be represented as a numeric constant or as a host variable.
All of the FETCH options for scrollable cursors also reposition the cursor before fetching the data. For example, consider the following cursor logic:
DECLARE csr1 SENSITIVE STATIC SCROLL CURSOR
FOR SELECT FIRSTNME, LASTNAME
FROM DSN8710.EMP
ORDER BY LASTNAME;
OPEN csr1;
FETCH LAST FROM csr1 INTO :FN, :LN;
Issuing this SQL will declare a scrollable cursor named csr1, open that cursor, and then FETCH the last row from the cursor's results set. The FETCH LAST statement will reposition the cursor to the last row of the results set, and then FETCH the results into the host variables as specified. Scrollable cursors reduce the amount of time and effort required to move backward and forward through the results of SQL queries.
But as helpful as scrollable cursors are, do not make every cursor a scrollable cursor. Scrollable cursors require substantially more overhead than a traditional, non-scrollable cursor. Analyze the requirements of your applications and deploy scrollable cursors only where it makes sense to do so.
Limiting the Number of Rows Fetched
Application developers frequently need to retrieve a limited number of qualifying rows from a table. For example, maybe you need to list the top ten best selling items from inventory, or a list of the top five most expensive products (i.e., highest price tag). There are several ways to accomplish this prior to DB2 V7 using SQL, but they are not necessarily efficient.
The first reaction is to simply use the WHERE clause to eliminate non-qualifying rows. But this is simplistic, and often is not sufficient to produce the results desired in an optimal manner. What if the program only requires that the top ten results be returned? This can be a somewhat difficult request to formulate using SQL alone.
Consider, for example, an application that needs to retrieve only the top ten most highly paid employees from the EMP sample table. You could simply issue a SQL request that retrieves all of the employees in order by salary, but only use the first ten retrieved. That is easy; for example:
SELECT EMPNO, FIRSTNME, LASTNAME, SALARY
FROM DSN8710.EMP
ORDER BY SALARY DESC;
You must specify the ORDER BY clause with the DESC key word. This sorts the results into descending order, instead of the default, which is ascending. Without the DESC key word, the "top ten" would be at the very end of the results set, not at the beginning.
But that does not really satisfy the requirement - retrieving only the top ten. It merely sorts the results into descending sequence. So the results would still be all employees in the table, but in the correct order so you can view the "top ten" salaries very easily. The ideal solution should return only the top ten employees with the highest salary and not merely a sorted list of all employees.
You can code some "tricky" SQL to support this request for all versions of DB2, such as the following:
SELECT EMPNO, FIRSTNME, LASTNAME, SALARY
FROM DSN8710.EMP A
WHERE 10 > (SELECT COUNT(*)
FROM DSN8710.EMP B
WHERE A.SALARY < B.SALARY)
AND SALARY IS NOT NULL
ORDER BY SALARY DESC;
This SQL is portable from version to version of DB2 (as well as to another DBMS, such as Oracle or SQL Server). And, of course, you can change the constant 10 to any number you wish, thereby retrieving the top 20, or top 5, as deemed necessary by the needs of your application.
Since the SALARY column is nullable in the EMP table, you must remove the nulls from the results set. And the ORDER BY is required to sort the results in the right order. If it is removed from the query, the results will still contain the top ten, but they will be in no particular order.
DB2 V7 provides an easier and less complicated way to limit the results of a SELECT statement: the FETCH FIRST n ROWS ONLY clause, which limits the number of rows that are fetched and returned by a SELECT statement.
Additionally, you can specify FETCH FIRST ROW ONLY on SELECT INTO statements when the query can return more than one row in the answer set. Doing so informs DB2 to ignore the other rows.
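For example, the top-ten salary query shown earlier could be written in V7 simply as:

SELECT EMPNO, FIRSTNME, LASTNAME, SALARY
FROM DSN8710.EMP
WHERE SALARY IS NOT NULL
ORDER BY SALARY DESC
FETCH FIRST 10 ROWS ONLY;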
There is one difference between the new V7 formulation and the other SELECT statement we reviewed, and that is the way "ties" are handled. A tie occurs when more than one row contains the same value. The previous query we examined may return more than 10 rows if there are multiple rows with the same value for salary within the top ten.
Using FETCH FIRST, DB2 will limit the number of rows returned to ten, even if there are other rows with the same value for salary as the number ten row in the results set. The needs of your application will dictate whether ties are to be ignored or included in the result set. If all "ties" need to be included in the results set, the new V7 feature may not prove to be helpful.
-----------------------------------------------------------------------------------
Joined: 29 Nov 2002 Posts: 382 Topics: 27 Location: Chennai, India
Posted: Mon Dec 16, 2002 12:01 am Post subject:
Hi Nutcracker,
Thanks a lot for your detailed explanation of scrollable cursors. I knew about scrollable cursors, but they will not solve the problem in my case because this installation is on DB2 V5. I am sorry, I should have mentioned that beforehand. What I was looking for was a conventional solution to the problem. There are of course some solutions using TSQs (like the one I mentioned), but those are not what I would call elegant solutions.
Joined: 13 Dec 2002 Posts: 45 Topics: 3 Location: 3rd Block from the SUN
Posted: Tue Dec 17, 2002 6:13 am Post subject:
You could do the following to work around your problem.
Since you are using a TSQ anyway, you could store the information from DB2 directly in the TSQ. As you mentioned earlier, you have 100 records in a DB2 table, so you could declare a cursor, retrieve all the rows, and write them to a TSQ. You could then read the TSQ for the first 10 records you need to display on the CICS screen. Once control comes back into the program, you continue from position 11 by using the ITEM or NEXT option, until all the records have been read.
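A rough sketch of that load-and-read flow, assuming COBOL and invented names (cursor EMPCSR, host structure WS-ROW, queue PAGEQ):

* Load every fetched row into the TSQ once, at start of browse.
EXEC SQL OPEN EMPCSR END-EXEC.
PERFORM UNTIL SQLCODE NOT = 0
    EXEC SQL FETCH EMPCSR INTO :WS-ROW END-EXEC
    IF SQLCODE = 0
        EXEC CICS WRITEQ TS QUEUE('PAGEQ')
             FROM(WS-ROW) LENGTH(LENGTH OF WS-ROW)
        END-EXEC
    END-IF
END-PERFORM.
EXEC SQL CLOSE EMPCSR END-EXEC.

* Later tasks read any record directly by item number.
MOVE LENGTH OF WS-ROW TO WS-LEN.
EXEC CICS READQ TS QUEUE('PAGEQ')
     INTO(WS-ROW) LENGTH(WS-LEN) ITEM(WS-ITEM)
END-EXEC.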
Let me know if this helps solve the problem.
Joined: 29 Nov 2002 Posts: 382 Topics: 27 Location: Chennai, India
Posted: Tue Dec 17, 2002 7:57 am Post subject:
Hi Nutcracker,
This solution will be the best one provided we are sure that the user is going to browse all 100 records, 10 at a time. Consider the case where the user browses only 20 records (presses PF8 twice, for two pages) and then decides to end the browse. In that case, putting all the records into the TSQ goes to waste. Now consider the case where there are 1000 records.
Actually, after thinking about this problem, I came to the conclusion that the exact solution depends on the case - the number of records to be retrieved, the number of pages the user is expected to view etc.
If there are a lot of records and the user is expected to browse only a few of them, we can code an extra WHERE condition on the primary key (greater than the last displayed key) in the SELECT of the cursor, and open the cursor afresh each time the user requests a new page, as sketched below.
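A sketch of such a repositioning cursor, with invented table, column, and field names (CA-LAST-KEY would come in through the COMMAREA, and NAMEO stands in for a map output field):

EXEC SQL DECLARE PAGECSR CURSOR FOR
     SELECT CUST_KEY, CUST_NAME
     FROM   MYSCHEMA.CUSTOMER
     WHERE  CUST_KEY > :CA-LAST-KEY
     ORDER BY CUST_KEY
END-EXEC.

* On each page request: open, fetch at most 10 rows, close.
* The key of the last row fetched becomes the next page's start.
EXEC SQL OPEN PAGECSR END-EXEC.
PERFORM VARYING WS-I FROM 1 BY 1
        UNTIL WS-I > 10 OR SQLCODE NOT = 0
    EXEC SQL FETCH PAGECSR
         INTO :WS-CUST-KEY, :WS-CUST-NAME
    END-EXEC
    IF SQLCODE = 0
        MOVE WS-CUST-NAME TO NAMEO(WS-I)
        MOVE WS-CUST-KEY  TO CA-LAST-KEY
    END-IF
END-PERFORM.
EXEC SQL CLOSE PAGECSR END-EXEC.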
On the contrary, if there are only a few records and the user is expected to view all of them, then, as you said, we can write all the records from the table to a TSQ at the beginning and do the paging from the TSQ itself.
Please let me know your comments on this.
Joined: 29 Nov 2002 Posts: 382 Topics: 27 Location: Chennai, India
Posted: Tue Dec 17, 2002 11:41 pm Post subject:
Hi Nutcracker,
What would be the role of extrapartition TDQs in this? I mean, how are they going to help in this case? As I understand it, they are useful when you need to pass data from a CICS application program to the batch environment, say.
Dynamic SQL is a strict no-no in our installation. Being very expensive and resource-intensive, it is used only in very special cases.
Joined: 13 Dec 2002 Posts: 45 Topics: 3 Location: 3rd Block from the SUN
Posted: Wed Dec 18, 2002 6:23 am Post subject:
Your question kept me thinking last night.
TSQ/TDQ/MQ - all of them are quite efficient in terms of processing speed and the ability to hold a large amount of data.
However, as you have mentioned, it does not make sense to write everything into the TSQ - say, for instance, if you have 5 million rows. It could be quite a load on the resources. In such a case you need to arrive at a cut-off point with the consent of the users.
For example, you could decide that only 5,000 records can be viewed at any point in time for a query. You can control this through the program logic. This would be faster than having a select for each page up/page down.
Quote:
What would be the role of extrapartition TDQs in this? I mean, how are they going to help in this case? As I understand it, they are useful when you need to pass data from a CICS application program to the batch environment, say.
Extrapartition TDQs do work entirely within a CICS region, even though they are mainly meant for interacting with batch. You could write the resultant rows from the query to the extrapartition TDQ (which is usually a file), assign that file to the region or transaction, and read the file as usual.
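A minimal sketch, assuming a DCT-defined extrapartition queue named EXTQ (the queue name and fields are made up):

EXEC CICS WRITEQ TD QUEUE('EXTQ')
     FROM(WS-ROW) LENGTH(LENGTH OF WS-ROW)
END-EXEC.

* Reading back is strictly sequential - READQ TD has no ITEM
* option - so this suits forward-only browsing.
MOVE LENGTH OF WS-ROW TO WS-LEN.
EXEC CICS READQ TD QUEUE('EXTQ')
     INTO(WS-ROW) LENGTH(WS-LEN)
END-EXEC.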
BTW, I am curious to know whether this is just a hypothetical question or a real-life one that your colleague asked you in the first place.
Joined: 29 Nov 2002 Posts: 382 Topics: 27 Location: Chennai, India
Posted: Wed Dec 18, 2002 8:04 am Post subject:
Hi Nutcracker,
This is a real-life question. And if you think about it, you do encounter such cases in CICS-DB2 applications; after all, paging is inevitable in certain CICS applications. In one of our applications, we handle this the same way you described - writing all the records into a TSQ at the beginning and doing all the paging from the TSQ. At the same time, paging using a cursor (where you define a GREATER THAN/LESS THAN WHERE clause on the primary key) seems to be the most logical way of doing it, although I have never done it before.
Dave - you really told me something new today about dynamic SQL sometimes outperforming static SQL. Can you please elaborate on this a little? I mean, can you provide some hypothetical case where this can happen? Logically, it seems that static SQL should be more efficient because the optimum access path is calculated before the SQL is run.
Joined: 02 Dec 2002 Posts: 151 Topics: 3 Location: Perth, Western Australia
Posted: Wed Dec 18, 2002 8:31 am Post subject:
The access path can quickly degrade and become inefficient. That is why you should rebind your plans or packages and run RUNSTATS on a regular basis. The only problem with dynamic SQL is that it has to determine the access path at run time, which could be bad but will probably be good. Check out the REOPT(VARS) bind option.
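For reference, REOPT(VARS) is specified at bind time; a hypothetical bind (the subsystem, collection, and member names are placeholders) might look like:

DSN SYSTEM(DB2A)
BIND PACKAGE(COLL1) MEMBER(PAYPROG) -
     REOPT(VARS) VALIDATE(BIND)
END

With REOPT(VARS), DB2 re-determines the access path at run time using the actual host variable values, which is how run-time optimization can beat a static path chosen with default filter factors.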
We have a commercial product that uses dynamic SQL because in our testing on all versions of DB2 it was quicker for 50% of our queries and required less maintenance. Also, dynamic SQL is a feature of JDBC etc and has superb optimization in new releases of DB2 UDB...
To say dynamic SQL is inefficient is incorrect.
_________________
Dave Crayford
If it helps, I solved this by coding a dynamic setup... kind of like "if it's in the TSQ, use it; if not, obtain it and place it in the TSQ..." Essentially it would only go to the database as needed, and afterwards, if the same data was requested again, it would stick to the TSQ... If the user only viewed 2 screens (10 records per screen), it would only pull 20 records and put them into the TSQ... Basically it only consults the database if the data isn't already in the TSQ...
The only downside to this is if the data in the table changes rapidly while the browse transaction is running... If it's relatively static and people aren't updating it like mad, then this should work... Otherwise, if it is being updated like mad, it's really better to go to the database each time so the user sees the most accurate data... But if just showing the data is what is important, and not the age of the data, then this should work...
Hope this helps...(and I can provide a short example too if requested)...
Joined: 29 Nov 2002 Posts: 382 Topics: 27 Location: Chennai, India
Posted: Tue Mar 25, 2003 12:27 am Post subject:
Glenn,
Will this TSQ be permanent, or will it exist only for the lifetime of the transaction? If the TSQ is permanent and acts as a sort of in-main-memory I/O area for the database, then access will no doubt be faster, but it is also a sort of unnecessary data redundancy. Any thoughts?
The TSQ exists for the logical lifetime of the transaction only (i.e., if a "quit" action is taken from the browse screen, the TSQ goes away; otherwise the TSQ stays). The basic idea is to pull data from the database once and use the TSQ as a storage area for it, instead of repeatedly getting the same data from the database for each user. It is just an attempt to cut down the number of I/Os (and it actually was quite successful in speeding up these transactions compared to when database loads were used entirely).
Basically, a lot of thought was put into this when I tried to improve upon it at the shop where I worked on these programs. The original solution these programs took was to fetch 10 records for the first screen; but for the second screen, fetch 10 records, ignore them, then fetch the next 10. A bad solution, but we didn't know any better at the time, and it gave poor user response for the transaction and put a heavy load on the database besides.
Background: these browse programs were used all over the systems (basically very heavily) and showed a minimum of 1 record (1 screen) and a maximum of around 400 records at the time I made these changes, and the maximum was only going to increase with the time these programs stayed in service. The examples are given with the assumption of one screen holding 10 data items.
Through the process of thinking these transactions out, we came up with several solutions which were all experimented with in development.
1) Save the keys of the 1st and 10th records, go to the database on each page of data using two different cursors (forward and backward), and fetch only enough records to display the screen. A fine solution, but there was the problem of going to the database and maintaining two cursors, and in some cases the programs became nightmares where the user was allowed to drill down into the data. Some of these transactions went down through 3 levels of data, walking the ER diagram for related data.
The worst program had 12 static cursors (with complex statements, long lists of WHERE predicates, and joins), and over half of the non-comment source lines were either EXEC SQL blocks or code selecting between the different cursors. I rewrote the program with the 12 static cursors as a dynamic SQL program (to experiment and educate myself), and the code for forming the statement to be run was a nightmare in itself to write and even more of a nightmare to maintain.
2) Pull the whole screen display into the TSQ at the beginning of the program. Basically this worked brilliantly for lessening the database usage, and the transaction responded wonderfully when paging. The problem was that, even for the record counts I gave, the initial load time was HORRID - even for a data group of 100 items... What would happen if these displays scaled up to 1000 items? Plus, what if the user looks at the first screen of a 1000-item set and quits? You've wasted 990 fetches...
3) The thought from here was to try to find a middle ground - pull from the database only if necessary. The data didn't change a lot, so this option was considered acceptable in that sense by the users; and if the data did change while the transaction was running, the transaction could be exited and the new data displayed.
Basically, the option was to pull a screen from the database only if it does not already exist in the TSQ, which means storing each screen in the TSQ as it is pulled from the database. If you think this through logically, it eliminates the code to access the database on a backward scroll (the data is always in the TSQ!). Then on a forward scroll, you check whether the screen is in the TSQ. If it is, you display it from the TSQ; otherwise you go to the database and then store that screen in the TSQ.
This was felt to be an acceptable trade-off, since it did not page through a significant amount of the database to build the screen each time; when the database was accessed, exactly 10 records were fetched. The backward database-access code disappears, so maintenance is easier (a shorter program), and the code is smaller (potentially even faster). The only items stored in the TSQ are the items the user has viewed, and each item viewed is read from the database only once. A very noticeable speed increase actually occurred in these programs. A sketch of the idea follows.
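Something like this, in outline (WS-QNAME names the user's queue, WS-PAGE-NO is the screen number wanted, and FETCH-TEN-ROWS is a hypothetical paragraph that fills WS-SCREEN from a repositioning cursor):

* Try the TSQ first; fall back to DB2 only if this page
* has never been viewed before.
MOVE LENGTH OF WS-SCREEN TO WS-LEN.
EXEC CICS READQ TS QUEUE(WS-QNAME) INTO(WS-SCREEN)
     LENGTH(WS-LEN) ITEM(WS-PAGE-NO) RESP(WS-RESP)
END-EXEC.
IF WS-RESP = DFHRESP(QIDERR) OR WS-RESP = DFHRESP(ITEMERR)
    PERFORM FETCH-TEN-ROWS
    EXEC CICS WRITEQ TS QUEUE(WS-QNAME)
         FROM(WS-SCREEN) LENGTH(LENGTH OF WS-SCREEN)
    END-EXEC
END-IF.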
4) There wasn't enough time, and the idea didn't seem promising enough, but there was a thought of trying to use a shared TSQ for all the instances of the transaction currently running. The problems there were knowing whether the data was current, finding a good, reliable way to keep it current, knowing whether a given screen was in the TSQ at all (it very well might not be, if no user had browsed that far), and knowing how to locate and update the data within the TSQ. Of course, as you can imagine, what happens if a value is deleted or a value is added? How do you maintain the data in the TSQ/memory and keep it sorted as the user expects? I have some ideas if I were to attempt this today (mainly involving a balanced binary search tree holding the first element of each TSQ item along with a reference to the TSQ item).
I will note, too, that it's a VERY GOOD IDEA to have proper indexes defined on the key fields you display in a program like this. I will say, though, that this took a good while to get working right - it's definitely not a beginner's program or algorithm. If you need a short example of the basic idea, the offer is still on the table. Hope this helps.
In addition: *HOW* do you partition the TSQ? This was another question... With some testing there too, it was found that storing a single screen line per TSQ item (10 TSQ items per screen) was ultimately much less efficient and less elegant than storing the whole screen in one TSQ item. Hope that helps.
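For what it's worth, the one-item-per-screen layout might look something like this (the field names and sizes are invented):

* One TSQ item = one screen of 10 lines.
01  WS-SCREEN.
    05  WS-SCREEN-LINE OCCURS 10 TIMES.
        10  WS-KEY       PIC X(10).
        10  WS-DESC      PIC X(40).
        10  WS-AMOUNT    PIC S9(9)V99 COMP-3.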