manu Beginner
Joined: 26 Dec 2002 Posts: 47 Topics: 19
Posted: Wed Jan 18, 2017 6:32 am Post subject: Batch job's CPU time difference between Production and TEST
|
|
There is a COBOL batch job which takes around 500 CPU minutes in production, while in the test region it takes only about 300 CPU minutes. The job reads a tape input file with about 400 million records and filters the records based on criteria held in a set of other input files. All the processing is done within COBOL internal tables, the data is all in files, and there is minimal DB2 access. With the same code and the same input files the job took 200 more CPU minutes in production, and I am not sure why. The output files match perfectly. The only other difference I see is the EXCP count, which is higher in production by about 2000K. The production and test databases hold the same data, and there is very little DB2 in this job. Any idea why there is such a big difference in CPU minutes between production and test? If it were due to a different processor, I would expect production to be faster. Please let me know if you have any comments, guesses, or tips. Thank you for your inputs.
|
|
kolusu Site Admin
Joined: 26 Nov 2002 Posts: 12376 Topics: 75 Location: San Jose
Posted: Wed Jan 18, 2017 11:53 am Post subject: Re: Batch job's CPU time difference between Production and TEST
|
|
manu wrote: "The only other difference I see is the EXCP count, which is higher in production by about 2000K."
Since it is the EXCP count that differs, I suggest you look into the blocksize.
1. Is the COBOL program defined with BLOCK CONTAINS 0 RECORDS? (see the FD sketch below)
2. Are you getting the same blocksize in both test and prod?
3. Are the Language Environment parms the same in both test and prod? You can display the LE run-time options using the RPTOPTS parm:
Code:
//STEP0100 EXEC PGM=yourpgm,
// PARM='/RPTOPTS(ON),MSGFILE(CEEDOPT)'
//CEEDOPT DD SYSOUT=*
...
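On question 1, BLOCK CONTAINS 0 RECORDS tells COBOL to pick up the blocksize from the JCL or the dataset label rather than a value hard-coded in the program. A minimal sketch of such an FD; the file name, DD name and record length below are placeholders only, not your actual program:
Code:
      * BLOCK CONTAINS 0 takes the blocksize from the JCL/label
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT IN-FILE ASSIGN TO INFILE.
       DATA DIVISION.
       FILE SECTION.
       FD  IN-FILE
           RECORDING MODE IS F
           BLOCK CONTAINS 0 RECORDS
           RECORD CONTAINS 200 CHARACTERS.
       01  IN-REC                  PIC X(200).
With BLOCK CONTAINS 0 the program uses whatever blocksize each system assigned to the file, so it is worth confirming from the job output that test and prod really picked up the same BLKSIZE.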
_________________ Kolusu
www.linkedin.com/in/kolusu
|
|
haatvedt Beginner
Joined: 14 Nov 2003 Posts: 66 Topics: 0 Location: St Cloud, Minnesota USA
Posted: Wed Jan 18, 2017 11:41 pm Post subject:
|
|
If you have the Strobe tool from Compuware, or a similar tool from another vendor, I would suggest using it to analyze the performance.
Are any of the files in your job VSAM? If so, I would look carefully at the VSAM buffering, as it can have a very significant impact on both the EXCP counts (I/O) and the CPU time. Strobe does a very good job of showing the dataset characteristics and the I/O counts.
If you are using VSAM and SMB (system-managed buffering), then that is something to look at as well; the default buffering could be different in prod and test.
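To take the buffering defaults out of the equation, you could also code the buffers (or the SMB access bias) explicitly on the DD statement, so that prod and test run with the same values. A rough sketch only; the DD name, dataset name and buffer counts below are placeholders, not recommendations:
Code:
//*------------------------------------------------------------------
//* Explicit VSAM buffers via the AMP parameter
//*------------------------------------------------------------------
//VSAMIN   DD DSN=YOUR.VSAM.KSDS,DISP=SHR,
//            AMP=('BUFND=30,BUFNI=10')
//*
//* or, where SMB applies (extended-format VSAM), pin the access bias
//* so both regions use the same strategy, e.g. AMP=('ACCBIAS=SO')
...
That way, whatever the two systems' defaults are, the job drives the same buffering in both places.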
Since the output matches, I suspect that the dataset characteristics are different between prod and test. I would look at the blocksize, as Kolusu suggests, and also at the buffering.
If you have Strobe and can generate a text report, perhaps you could post the #DSC (dataset characteristics) and #LSR (local shared resources) sections. The #PSU (program summary usage) section would also be useful. _________________ Chuck Haatvedt
email --> clastnameatcharterdotnet
(replace lastname, at, dot with appropriate
characters)
|
|
manu Beginner
Joined: 26 Dec 2002 Posts: 47 Topics: 19
Posted: Thu Jan 19, 2017 10:06 am Post subject:
|
|
Thanks for your inputs. Will check and update.
|
|