pjcoder Beginner
Joined: 11 Feb 2009 Posts: 2 Topics: 1
Posted: Wed Feb 11, 2009 10:44 am Post subject: Dynamic increase in internal table array size - COBOL
|
|
Hey, I have an internal working-storage array that holds every record of a DB2 table. The array size is currently fixed at some X entries, and the number of records in the DB2 table has now grown beyond X.
Is manually increasing the internal table size in the COBOL module the only remedy?
Can you recommend any way to automate growing the internal table declared in the program as the number of records in the DB2 table increases? Thanks in advance.
kolusu Site Admin
Joined: 26 Nov 2002 Posts: 12376 Topics: 75 Location: San Jose
Posted: Wed Feb 11, 2009 12:10 pm Post subject:
|
|
pjcoder,
You can use multiple internal tables, each staying under the 16 MB limit. Predefine 5 or 6 tables and load them in sequence as each one fills; a sketch follows.
On the other hand, I don't understand why you need an internal table to hold the DB2 table data in the first place.
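A minimal sketch of that spill idea, assuming an 80-byte row and illustrative names and sizes (at 80 bytes x 100,000 entries, each table is 8 MB, safely under the 16 MB item limit):

Code:
       WORKING-STORAGE SECTION.
       01  WS-TABLE-1.
           05  WS-T1-ROW          PIC X(80) OCCURS 100000 TIMES.
       01  WS-TABLE-2.
           05  WS-T2-ROW          PIC X(80) OCCURS 100000 TIMES.
       01  WS-FETCH-ROW           PIC X(80).
       01  WS-ROW-COUNT           PIC S9(9) COMP VALUE +0.

       PROCEDURE DIVISION.
      *    ... fetch loop omitted; each fetched row lands here ...
       LOAD-ONE-ROW.
      *    Fill table 1 first, then spill into table 2.
           ADD 1 TO WS-ROW-COUNT
           IF WS-ROW-COUNT <= 100000
               MOVE WS-FETCH-ROW TO WS-T1-ROW (WS-ROW-COUNT)
           ELSE
               MOVE WS-FETCH-ROW
                    TO WS-T2-ROW (WS-ROW-COUNT - 100000)
           END-IF.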
jim haire Beginner
Joined: 30 Dec 2002 Posts: 140 Topics: 40
Posted: Wed Feb 11, 2009 2:08 pm Post subject:
|
|
He may want to do a multi-row fetch into an internal table. That way you minimize I/O when processing the records by issuing fewer fetches; a sketch follows.
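A minimal sketch of a multi-row (rowset) fetch, which needs DB2 for z/OS V8 or later; the EMP table, its columns, and the rowset size of 100 are illustrative, not from the post:

Code:
       WORKING-STORAGE SECTION.
           EXEC SQL INCLUDE SQLCA END-EXEC.
      *    Host-variable arrays, one OCCURS per fetched column.
       01  WS-HV-ARRAYS.
           05  WS-EMPNO           PIC X(06) OCCURS 100 TIMES.
           05  WS-LASTNAME        PIC X(15) OCCURS 100 TIMES.
       01  WS-ROWS-FETCHED        PIC S9(9) COMP.

       PROCEDURE DIVISION.
           EXEC SQL
               DECLARE EMPCUR CURSOR WITH ROWSET POSITIONING FOR
               SELECT EMPNO, LASTNAME FROM EMP
           END-EXEC
           EXEC SQL OPEN EMPCUR END-EXEC
           EXEC SQL
               FETCH NEXT ROWSET FROM EMPCUR FOR 100 ROWS
               INTO :WS-EMPNO, :WS-LASTNAME
           END-EXEC
      *    SQLERRD(3) holds the number of rows actually returned.
           MOVE SQLERRD(3) TO WS-ROWS-FETCHED.

One rowset fetch replaces up to 100 single-row fetches, which is where the I/O saving comes from.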
pjcoder Beginner
Joined: 11 Feb 2009 Posts: 2 Topics: 1
Posted: Wed Feb 11, 2009 11:43 pm Post subject:
|
|
Thanks.
The module I am working on is a vintage one; I cannot alter its design much.
It's a COBOL batch module that loads an internal array table from the DB2 table using multi-row fetch. The values in the array are then used at several points in the module, multiple times. The array holding the DB2 records is now overflowing, which silently leaves vital records behind and causes serious problems.
The instant solution is to increase the array size to some new X, but the situation may recur. Is there a design approach that would prevent it for good?
rk_pulikonda Beginner
Joined: 27 May 2003 Posts: 22 Topics: 2 Location: India
Posted: Thu Feb 12, 2009 1:40 pm Post subject:
|
|
If you are using the data for reporting purposes, you can unload the required columns from the table(s) and sort the result. Give that file as input to your COBOL program and generate the report.
_________________
Thanks
-Ram
haatvedt Beginner
Joined: 14 Nov 2003 Posts: 66 Topics: 0 Location: St Cloud, Minnesota USA
Posted: Sat Feb 14, 2009 2:18 pm Post subject:
|
|
PJCODER,
I've written a number of programs which load an entire DB2 table into a memory cache and then simulate either a SELECT statement or a CURSOR. The key to making this type of program tolerant of an overflow condition is to have it load as much data as the cache (arrays) will allow; then, when searching the cache, if the cache is full and the search key is greater than the last COMPLETE key in the cache, issue a DB2 call to retrieve the requested data.
The value of this technique is that it continues to function correctly even when the amount of data exceeds the size of the cache. And when the data does exceed the cache size, you still get the performance benefit of the cache for any access whose search key is <= the last complete key loaded.
It is imperative to save the last COMPLETE key when loading the cache, not simply the last key loaded. The reason is that when simulating a process which used a CURSOR, the cache may fill up partway through the rows for the last key, so that key is not complete. A sketch of the search logic follows.
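A minimal sketch of that search-with-fallback logic, assuming the keys were loaded in ascending order; the names, the 10-byte key, the table size, and the paragraphs FETCH-FROM-DB2, KEY-NOT-FOUND, and RETURN-CACHED-ROWS are all illustrative:

Code:
       WORKING-STORAGE SECTION.
       01  WS-CACHE.
           05  WS-KEY-COUNT       PIC S9(9) COMP.
           05  WS-KEY-ENTRY OCCURS 1 TO 100000 TIMES
                   DEPENDING ON WS-KEY-COUNT
                   ASCENDING KEY IS WS-CACHE-KEY
                   INDEXED BY KEY-IDX.
               10  WS-CACHE-KEY   PIC X(10).
       01  WS-CACHE-FULL-SW       PIC X VALUE 'N'.
       01  WS-LAST-COMPLETE-KEY   PIC X(10).
       01  WS-SEARCH-KEY          PIC X(10).

       SEARCH-CACHE.
      *    Fall back to DB2 only when the cache filled up AND the
      *    requested key lies beyond the last COMPLETE key loaded.
           IF WS-CACHE-FULL-SW = 'Y'
              AND WS-SEARCH-KEY > WS-LAST-COMPLETE-KEY
               PERFORM FETCH-FROM-DB2
           ELSE
               SEARCH ALL WS-KEY-ENTRY
                   AT END PERFORM KEY-NOT-FOUND
                   WHEN WS-CACHE-KEY (KEY-IDX) = WS-SEARCH-KEY
                       PERFORM RETURN-CACHED-ROWS
               END-SEARCH
           END-IF.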
I've seen performance improvements on the order of 85 to 99 percent, depending on how many redundant DB2 calls the cache can eliminate.
Good luck.
P.S. What I've usually done to implement this is to write a separate module to load the cache and another separate module to emulate a single DB2 call, either a SELECT or an OPEN / FETCH if emulating a cursor. Then I write a simple batch program which reads an input file of the predicate keys and, for each input record, does the DB2 call and / or calls the cache subroutines (a sketch follows). The value of this is that it eliminates all of the other business logic from the testing of the caching process. It also makes it very easy to verify that the cache process returns exactly the same data as the DB2 call. Lastly, by executing the program twice, once using only the DB2 calls and the second time using only the cache programs, you get a very accurate performance comparison, since both runs use the same input file and return the same information.
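A minimal sketch of such a test driver, assuming a hypothetical cache subprogram CACHESUB, a keys file assigned to DD KEYIN, and a 10-byte key; none of these names come from the post:

Code:
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT KEY-FILE ASSIGN TO KEYIN.
       DATA DIVISION.
       FILE SECTION.
       FD  KEY-FILE.
       01  KEY-REC                PIC X(10).
       WORKING-STORAGE SECTION.
       01  WS-EOF-SW              PIC X VALUE 'N'.
       01  WS-RESULT              PIC X(80).

       PROCEDURE DIVISION.
           OPEN INPUT KEY-FILE
           PERFORM UNTIL WS-EOF-SW = 'Y'
               READ KEY-FILE
                   AT END MOVE 'Y' TO WS-EOF-SW
                   NOT AT END
      *                Swap in the direct-DB2 module on the
      *                second run to compare the two paths.
                       CALL 'CACHESUB' USING KEY-REC WS-RESULT
               END-READ
           END-PERFORM
           CLOSE KEY-FILE
           GOBACK.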
Hope this is enough to get you started.
P.S. When loading the data, especially for cursors, I always use two arrays: the first contains only the key values, a pointer into the data table which holds the non-key data (USAGE INDEX for speed), and a count of how many rows exist in the data table for that specific key entry. This makes emulating a cursor an easy task; see the sketch below the signature.
_________________
Chuck Haatvedt
email --> clastnameatcharterdotnet
(replace lastname, at, dot with appropriate characters)
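A minimal sketch of that two-array layout; the names, key length, and table sizes are illustrative:

Code:
       WORKING-STORAGE SECTION.
      *    Key table: one entry per distinct key, holding an index
      *    pointer to that key's first row in the data table and a
      *    count of how many rows belong to the key.
       01  WS-KEY-TABLE.
           05  WS-KEY-ENTRY OCCURS 50000 TIMES
                   ASCENDING KEY IS WS-KEY
                   INDEXED BY KEY-IDX.
               10  WS-KEY             PIC X(10).
               10  WS-FIRST-ROW-PTR   USAGE INDEX.
               10  WS-KEY-ROW-COUNT   PIC S9(9) COMP.
      *    Data table: the non-key columns, loaded in key order so
      *    all rows for one key are contiguous.
       01  WS-DATA-TABLE.
           05  WS-DATA-ROW OCCURS 200000 TIMES
                   INDEXED BY DATA-IDX.
               10  WS-NONKEY-COLS     PIC X(60).

To emulate OPEN / FETCH: SEARCH ALL the key table, SET DATA-IDX TO WS-FIRST-ROW-PTR (KEY-IDX), then return WS-KEY-ROW-COUNT (KEY-IDX) consecutive WS-DATA-ROW entries, stepping DATA-IDX up by 1 on each emulated FETCH.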