Terry_Heinze and Jsharon1248,
Yes, CC1 is a 2-byte alphanumeric field, and it is the input to CC1 that has 10 million records, not CC1 itself; the input for CC1 comes from some other table. The current SQL is below, and it works fine, but it takes a long time to execute: there is no index, and the cursor is opened 10 million times. To avoid this situation, I'm looking for your help.
Code:
Declare Cur_Valvalue CURSOR for
SELECT CC2 FROM CHILDTB
WHERE CC1 = :WS-INPUT-DATA
The input WS-INPUT-DATA comes from a different table, and we can't JOIN that table with CHILDTB. I'm not sure how a one-dimensional table would help: if we declare one, each time I would still have to open this CHILDTB cursor and load it into the one-dimensional table to find the matching records.
I'm sorry if I've confused you; I'm writing up the same problem in detail and will post it as a new topic in the DB2 section. In the meantime, if you get any clue, please share it with me.
Joined: 08 Aug 2007 Posts: 291 Topics: 2 Location: Chicago
Posted: Fri Apr 10, 2009 2:18 pm Post subject:
Don't create a new post. That will only anger the masses.
Let's talk about your input file. Where does it come from? I think that if you investigate this fully, you'll find that the source is a DB2 table. The file might be created by DSNTIAUL, or some other program, but ultimately, if you search back far enough, I'll bet you'll find a SELECT statement. At that point, you have the opportunity to add the join to CHILDTB. That's option 1, and probably the best solution.
Now for option 2. You can sort your input file by CC1. Then in your program, when you FETCH the results of the Cur_Valvalue cursor, save the results in a WS table. The trick is to not reissue the OPEN until the value of CC1 changes on your input file. You will cut your processing considerably by sorting the input and only OPENing the cursor for new CC1 values.
The last option is what you've already discussed: dumping the CHILDTB table into a COBOL WS table and performing a SEARCH for every input record.
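Option 2 can be sketched as follows (Python here only for clarity, since the thread's code is COBOL; all names are illustrative, and fetch_cc2 stands in for the OPEN/FETCH/CLOSE of the Cur_Valvalue cursor):

```python
# Sketch of option 2: sort the input on CC1, then reopen the "cursor"
# only when CC1 changes (a control break).

def process(sorted_keys, fetch_cc2):
    """sorted_keys: input CC1 values, already sorted.
    fetch_cc2(cc1): returns the list of CC2 rows for one CC1 key."""
    prev_cc1 = None
    cached_rows = []
    results = []
    for cc1 in sorted_keys:
        if cc1 != prev_cc1:                  # control break: new key value
            cached_rows = fetch_cc2(cc1)     # one OPEN per distinct CC1
            prev_cc1 = cc1
        for cc2 in cached_rows:              # reuse the cached FETCH results
            results.append((cc1, cc2))
    return results
```

With the input sorted, the cursor is opened once per distinct CC1 value instead of once per record, which is where the savings come from.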
Joined: 14 Nov 2003 Posts: 66 Topics: 0 Location: St Cloud, Minnesota USA
Posted: Fri Apr 10, 2009 8:02 pm Post subject:
hisabarinath,
If I understand correctly, what you are trying to do is to emulate the following cursor with an array in COBOL.
Code:
Declare Cur_Valvalue CURSOR for
SELECT CC2 FROM CHILDTB
WHERE CC1 = :WS-INPUT-DATA
I have used a technique of loading the entire DB2 table into memory for small DB2 tables and then using a search process to emulate the DB2 cursor. My technique would involve fetching the data from a "wider" cursor using the following.
Code:
Declare Cur_Valvalue CURSOR for
SELECT CC1,
CC2
FROM CHILDTB
ORDER BY CC1 ASC
, CC2 ASC
You could then load the data into an array as follows.
Code:
01 TC-TABLE-CHILDB.
05 TC-ENTRY OCCURS 5000 TIMES
INDEXED BY TC-INDEX.
10 TC-CC1 PIC X(5).
10 TC-CC2 PIC X(5).
I have encountered performance issues as the number of entries in the array grows, i.e., as more primary keys (unique TC-CC1 values in your case) are added. The tricky part is that when emulating the OPEN CURSOR command, you need to find the first entry in the array for that value of TC-CC1.
To resolve this issue I have used two arrays, the first being an INDIRECTION ARRAY which contains one row for each unique value of TC-CC1. The second array contains only the non-key values from your cursor. This technique works very well regardless of the order of the input keys for the cursor being emulated.
Code:
01 TC1-TBL-CB.
05 TC1-ENTRY OCCURS 0 TO 500 TIMES
DEPENDING ON BCNTR-TC1-ENTRIES-USED
ASCENDING KEY IS TC1-KEY
INDEXED BY TC1-INDEX.
10 TC1-KEY.
15 TC1-CC1 PIC X(5).
10 TC1-TC2-INDEX USAGE INDEX.
10 TC1-TC2-COUNT PIC S9(4) COMP.
01 TC2-TBL-CB.
05 TC2-ENTRY OCCURS 2000 TIMES
INDEXED BY TC2-INDEX.
10 TC2-CC2 PIC X(5).
Now the OPEN CURSOR is replaced by a binary search of the TC1 array, looking for the key value of CC1.
Then the fetch loop becomes:
Code:
SET TC2-INDEX TO TC1-TC2-INDEX (TC1-INDEX)
PERFORM TC1-TC2-COUNT (TC1-INDEX) TIMES
    MOVE TC2-CC2 (TC2-INDEX) TO output
*>  ... do your output here ...
    SET TC2-INDEX UP BY +1
END-PERFORM.
This process does not require the CLOSE CURSOR to be emulated.
I have used this technique with DB2 tables of up to 100,000 keys and a couple of million rows. Recently I implemented this in a CICS region using a shared memory area. Testing shows a 96+ percent reduction in CPU time; in your situation the elapsed-time savings would probably be even greater.
The process of loading the two arrays requires that you check for a control break and only add an entry to the TC1 array when a control break occurs. When a control break occurs, you first need to update the TC1-TC2-COUNT field of the previous key, then increment TC1-INDEX and build the entry for the next key. You need to save the index value of the TC2 array into the TC1 entry so that it points at the first value in the TC2 array for that key.
It sounds complicated, but it is very fast.
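A rough Python rendering of the whole two-array technique (again, only a sketch with invented names; the COBOL declarations above are the real thing). The load step does the control-break build of both arrays, with the key array held as parallel lists mirroring TC1's key/index/count fields, and the emulated OPEN CURSOR is a binary search on the key array:

```python
from bisect import bisect_left

def load(sorted_rows):
    """sorted_rows: (cc1, cc2) pairs as fetched from the 'wider' cursor,
    ordered by CC1, CC2. Builds the indirection array as three parallel
    lists (keys/start/count, mirroring TC1) plus the value list (tc2)."""
    keys, start, count, tc2 = [], [], [], []
    for cc1, cc2 in sorted_rows:
        if not keys or keys[-1] != cc1:   # control break: new CC1 key
            keys.append(cc1)
            start.append(len(tc2))        # points at first CC2 for this key
            count.append(0)
        count[-1] += 1                    # bump the row count for current key
        tc2.append(cc2)
    return keys, start, count, tc2

def open_cursor(keys, start, count, tc2, cc1):
    """The emulated OPEN CURSOR plus FETCH loop: binary-search the key
    array, then return the slice of values belonging to that key."""
    i = bisect_left(keys, cc1)
    if i == len(keys) or keys[i] != cc1:
        return []                         # key not in the table
    return tc2[start[i]:start[i] + count[i]]
```

The count is built up as rows arrive rather than fixed afterwards, but the effect is the same as updating the previous key's count at each control break.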
REMEMBER --> the dollars that you save by writing efficient code could be the ones which pay your salary.
In the current economic climate, people who use their employer's resources wisely by writing efficient code are more valuable to that employer. _________________ Chuck Haatvedt
email --> clastnameatcharterdotnet
(replace lastname, at, dot with appropriate
characters)
Joined: 26 Nov 2002 Posts: 12376 Topics: 75 Location: San Jose
Posted: Sun Apr 12, 2009 8:43 pm Post subject:
haatvedt,
Aren't you wasting CPU cycles by doing a binary search? If your intention is to avoid the multiple OPEN + FETCH + CLOSE of the cursor for each key, you can do that with no binary search involved.
Code:
Declare Cur_Valvalue CURSOR for
SELECT CC1
,CC2
FROM CHILDTB
ORDER BY CC1
1. Open the cursor with ORDER by on CC1 and without any WHERE clause
2. Now read your file (it also needs to be sorted on the CC1)
3. If file key equal Db2 key value perform your program logic
4. If file key less than db2 key value read the file
5. If file key greater than db2 key value fetch the cursor till you get to the File key value
Repeat steps 3 through 5 until the end of the file, and close the cursor when you are done.
I don't see the real need for a binary search, or maybe I just don't see the value of a single fetch versus a minimum of 12 binary-search iterations.
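The match/merge in steps 1 through 5 can be sketched as follows (Python, names illustrative; file_keys is the sorted input file and db_rows holds the (CC1, CC2) pairs fetched from the ORDER BY cursor):

```python
def merge_match(file_keys, db_rows):
    """Single pass over a sorted input file and a sorted cursor result;
    the cursor is opened once and never reopened."""
    results = []
    f, d = 0, 0
    while f < len(file_keys):
        if d >= len(db_rows) or file_keys[f] < db_rows[d][0]:
            f += 1                        # step 4: file key low, read file
        elif file_keys[f] > db_rows[d][0]:
            d += 1                        # step 5: fetch until keys meet
        else:
            # step 3: keys match -- apply program logic to each CC2 row;
            # d is left in place so duplicate file keys re-match the rows
            j = d
            while j < len(db_rows) and db_rows[j][0] == file_keys[f]:
                results.append((file_keys[f], db_rows[j][1]))
                j += 1
            f += 1
    return results
```

Note that d is not advanced on a match, so consecutive input records with the same CC1 reuse the same run of cursor rows without any refetching.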
Joined: 14 Nov 2003 Posts: 66 Topics: 0 Location: St Cloud, Minnesota USA
Posted: Sun Apr 12, 2009 9:15 pm Post subject:
Kolusu,
You're absolutely correct if the input driver is in CC1 sequence. Sorry I was not clearer in stating that this technique is useful when the input data is not in sequence.
I've used this technique in subroutines where the input requests to the subroutine are not in sequential order. I've also used this in CICS transactions, via a shared memory area, for requests which are in a random sequence.
When the data can be sorted into sequence, a simple array which can be loaded from a cursor when a control break occurs is the most efficient process. This assumes that multiple input records can have the same key. You could actually load either the cursor or the input data into an array; I generally choose whichever has the lesser maximum number of occurrences.
I would not initialize the array; instead, I count each entry as it's added, which eliminates the need to initialize the array.
Thanks,
Chuck H. _________________ Chuck Haatvedt
email --> clastnameatcharterdotnet
(replace lastname, at, dot with appropriate
characters)