Martin (Beginner)
Joined: 20 Mar 2006  Posts: 133  Topics: 58
Posted: Wed Mar 07, 2007 12:59 pm  Post subject: Replacing DB2 restart logic with a sort step
Hi All,
Here's what is currently happening in PROD.
There is a COBOL DB2 program that reads an input file and inserts a few fields from each input record into a DB2 table.
We have restart logic in place, i.e.:
- We do a commit once we insert 1000 rows into the table.
- We then write out the last inserted row to a control file.
- If the program abends, then on restart the program reads the record in the control file and resumes processing from that record.
The problem is that the input file has to be read through, record by record, to find the matching record. This has created a few performance issues.
Now here's what we are planning to do if the program abends:
- Write a sort step which runs before the program.
- This sort step will find the record in the input file that matches the control file record.
- The step will then write out all the records which come after the matching record.
- If the control file is empty, then all the records in the input file are copied to the output file. (This will be the case when the program ran successfully and did not write any record to the control file.)
- This output file will then be fed to the program.
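As a cross-check of the plan above, the match-and-copy logic can be sketched in Python (a hypothetical illustration only: the file names and line-based record handling are assumptions, and the real step would be SyncSort reading 333-byte FB records):

```python
# Sketch of the proposed pre-step: copy only the records that follow the
# control-file record, or copy everything when the control file is empty.
# File names and the function name are hypothetical stand-ins.

def filter_after_match(input_path, control_path, output_path):
    # Read the single restart record, if any.
    try:
        with open(control_path, "r") as f:
            restart_record = f.readline().rstrip("\n")
    except FileNotFoundError:
        restart_record = ""

    with open(input_path, "r") as fin, open(output_path, "w") as fout:
        if not restart_record:
            # Empty control file (clean previous run): copy the whole input.
            for line in fin:
                fout.write(line)
            return
        # Skip up to and including the matching record, then copy the rest.
        for line in fin:
            if line.rstrip("\n") == restart_record:
                break
        for line in fin:
            fout.write(line)
```

Note that this still reads the whole file up to the restart point, just as the program does today, which is the gap kolusu points out below in the thread.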
Now I have a few questions:
- Any gaps in the above logic?
- Can anyone help me with the sort step?
We have syncsort on our system...
Here's the version of SYNCSORT
SYNCSORT FOR Z/OS 1.2.1.1R U.S. PATENTS: 4210961, 5117495 (C)
z/OS 1.6.0
The input file and control file are FB files with LRECL = 333. I need to match the entire record before writing out all the records which come after the matched record.
All fields in the file are numeric.
-Mt
kolusu (Site Admin)
Joined: 26 Nov 2002  Posts: 12378  Topics: 75  Location: San Jose
Posted: Wed Mar 07, 2007 1:22 pm
Martin,
Very poor design. Using sort to filter out records will add more run time.
Actually the pgm is quite simple. I don't understand why you have performance issues just reading a few thousand records.
Pgm logic:
1. Read the file. Increment the input-records-read counter.
2. Insert into DB2.
3. Issue commit. Once issued, write the last record # committed to a control file.
4. In case of abend, read the control file and skip all the records until you have reached your committed record #, then repeat steps 2 thru 4.
It is as simple as that. The read loop should NOT cause a performance issue.
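The loop above can be sketched in Python (a rough stand-in for the COBOL/DB2 program: `insert_row` and `commit` are placeholders for the actual DB2 calls, and the control "file" is a dict here):

```python
# Sketch of the commit/restart loop: commit every N inserts, record the
# committed count, and on restart skip records up to that count.
# All names are illustrative stand-ins for the real COBOL/DB2 logic.

def load_with_restart(records, control, insert_row, commit, commit_count=1000):
    skip = control.get("committed", 0)   # last committed record #, 0 on a fresh run
    read = 0
    for rec in records:
        read += 1
        if read <= skip:                 # restart: read past already-committed records
            continue
        insert_row(rec)
        if read % commit_count == 0:
            commit()
            control["committed"] = read  # overwrite the one-record control file
    commit()                             # final commit for the tail
    control["committed"] = 0             # good EOJ: reset so the next run starts fresh
```

Skipping past committed records is just the read loop; no record-by-record compare against a saved row is needed if the count is stored.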
Kolusu
_________________
www.linkedin.com/in/kolusu
Martin (Beginner)
Joined: 20 Mar 2006  Posts: 133  Topics: 58
Posted: Wed Mar 07, 2007 1:31 pm
kolusu,
The issue here is that there are times when the input will have more than 100 thousand records, and the commit count in the program is 1000. What if the program abends just before we issue the last commit? (This is what has been happening frequently.)
Any other ways to do this? The client is hell bent on using a SORT step here. Here's what I was planning to do:
- Run the prod version of the program with the max number of records and capture the CPU time utilized.
- Run the version of the program with the sort changes and compare the CPU time utilized with the prod version.
I guess this is the only way I can convince them.
Any other solutions?
Martin (Beginner)
Joined: 20 Mar 2006  Posts: 133  Topics: 58
Posted: Wed Mar 07, 2007 1:32 pm
Could anyone help me with the sort step here?
Help much appreciated!!
dbzTHEdinosauer (Supermod)
Joined: 20 Oct 2006  Posts: 1411  Topics: 26  Location: Germany
Posted: Wed Mar 07, 2007 1:51 pm
You could make the input an indexed file.
Or you could write all your input to an MQS queue; then, using RRSAF in your update program, you would not have restart issues.
As said before, very poor design, especially since you are talking about a potential 100K records.
During your restart procedure, either sort takes the time to read to the restart point and then copies everything after, or your program reads to the restart point and then starts processing.
With an FB file (properly blocked) of 333-byte records, using the BUFNO parm you could load almost any size input file directly into memory. What performance issue do you have reading 90,000 333-byte records? Nano-seconds...
The customer is not always right, unless you have not told us something about the environment (hardware & software, not people).
_________________
Dick Brenholtz
American living in Varel, Germany
kolusu (Site Admin)
Joined: 26 Nov 2002  Posts: 12378  Topics: 75  Location: San Jose
Posted: Wed Mar 07, 2007 1:57 pm
martin,
OK, here is a way to get the unprocessed records. In your pgm, every time you issue a commit, write this one record to the 80-byte FB-recfm control file:
Code:
OPTION COPY,SKIPREC=n
where n is the last record # that was committed. For example, if your input has 15,000 records and you commit every 1000 records, then every time you commit, write a record to the control file.
After the 1st commit the control file will have:
Code:
OPTION COPY,SKIPREC=1000
After the 2nd commit you will once again overwrite the same one line:
Code:
OPTION COPY,SKIPREC=2000
Now let us say your pgm abended; you will use this control card file to skip the already-committed records:
Code:
//STEP0100 EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=YOUR INPUT FILE,
//            DISP=SHR
//SORTOUT  DD DSN=YOUR OUTPUT FILE,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,
//            SPACE=(CYL,(X,Y),RLSE)
//SYSIN    DD DSN=YOUR CONTROL CARD FILE,DISP=SHR
/*
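On the program side, writing that control card at each commit could look like this sketch (a Python stand-in for the COBOL logic; the function name and file path are assumptions, and the leading blank plus 80-byte padding mimic a typical FB control-card record):

```python
# Sketch: after each commit, overwrite the one-record control file with a
# SORT control card that skips the committed records. Writing SKIPREC=0 at
# a clean end of job makes the next run copy the whole input.
# write_control_card is a hypothetical name, not an actual utility.

def write_control_card(path, committed):
    card = f" OPTION COPY,SKIPREC={committed}"   # leading blank: keep column 1 free
    with open(path, "w") as f:
        f.write(card.ljust(80) + "\n")           # pad to the 80-byte FB record length
```

The file is rewritten in place each time, so it always holds exactly one card, matching the single-record control file described above.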
Hope this helps...
Cheers,
Kolusu
_________________
www.linkedin.com/in/kolusu
dbzTHEdinosauer (Supermod)
Joined: 20 Oct 2006  Posts: 1411  Topics: 26  Location: Germany
Posted: Wed Mar 07, 2007 2:22 pm
Make sure that when the program has a good EOJ, it writes a SKIPREC=0000 card.
_________________
Dick Brenholtz
American living in Varel, Germany
acevedo (Beginner)
Joined: 03 Dec 2002  Posts: 127  Topics: 0  Location: Europe
Posted: Wed Mar 07, 2007 4:47 pm  Post subject: Re: Replacing DB2 restart logic with a sort step
Martin wrote:
Quote:
- We do a commit once we insert 1000 rows into the table.
In my opinion, one commit every 1000 inserts could make the table unavailable to other programs (-911/-904)... I would reduce the number of inserts per commit.
Nic Clouston (Advanced)
Joined: 01 Feb 2007  Posts: 1075  Topics: 7  Location: At Home
Posted: Thu Mar 08, 2007 3:40 am
Instead of storing the last record committed, why not just store the COUNT of records committed so far? If the user is so picky, then surely the storage saved would be important to him. It also means you do not have to compare records on restart; just read past them. You still have to remember to set the count to 0 at EOJ.
_________________
Utility and Program control cards are NOT, repeat NOT, JCL.