MVSFORUMS.com - A Community of and for MVS Professionals

Re-generating JCL from SDSF
 
Phantom
Data Mgmt Moderator
Joined: 07 Jan 2003   Posts: 1056   Topics: 91   Location: The Blue Planet

Posted: Thu Oct 28, 2004 11:20 am    Post subject: Re-generating JCL from SDSF

I'm trying to re-generate my JCL from SDSF. Let me explain my requirement. Assume I have 5 steps in my JCL (R010 through R050) and my program, running in step R030, finds that a parameter passed to it is wrong. Instead of just throwing an error message, I would like to automate the recovery. (Note: this applies only to the case where there is a single correct alternative for the wrong parm passed to the program.)

Now, I would like the program to cancel the job and re-start the same job from R030, but this time with the parm changed to the correct value.

We can handle this case in the program itself, but most of the time the developers use SJ and submit the job again. If the program takes 5 minutes to complete, it may detect the error in the PARM only at the last minute. This is the reason why I don't want this situation to be handled in the program.

Now back to my requirement. I can capture the information of the running job by calling SDSF in batch, then edit the spool output and resubmit the job. Everything seems fine up to this point. But if I have instream control cards in my JCL, I run into difficulties. I tried using the INPUT ON command, but the instream control cards of all the steps in my JCL are grouped together and shown at the end of my JCL (in my spool extract dataset). While re-generating my JCL, I need a way to distinguish the control cards of each step.

Suppose I have my JCL like this:
Code:

//JOBCARD...
//*
//*
//R010   EXEC PGM=SORT
//SYSOUT DD SYSOUT=*
//SORTIN DD DSN=TSOID.TEMP.FILE.INPUT,DISP=SHR
//SORTOUT DD DSN=TSOID.OUTPUT.FILE,
//           DISP=(,CATLG,DELETE),
//           UNIT=SYSDA,
//           SPACE=(80,(1,1),RLSE)
//SYSIN   DD *
  SORT FIELDS=(1,8,CH,A)
  SUM FIELDS=(10,4,ZD)
/*
//*
//*
//R020   EXEC PGM=SORT
//SYSOUT DD SYSOUT=*
//SORTIN DD DSN=TSOID.TEMP.FILE.INPUT2,DISP=SHR
//SORTOUT DD DSN=TSOID.OUTPUT.FILE2,
//           DISP=(,CATLG,DELETE),
//           UNIT=SYSDA,
//           SPACE=(80,(1,1),RLSE)
//SYSIN   DD *
  SORT FIELDS=(1,50,CH,A,72,8,CH,A)
  SUM FIELDS=NONE
/*
//R030     EXEC PGM=MYPGM,PARM='WRONG PARM'
//STEPLIB  DD DSN=TSOID.LOAD,DISP=SHR
//SYSOUT   DD SYSOUT=*
//SYSIN    DD *
  EXT FLD=1,10
/*
//*
//R040   EXEC PGM=....
.....


When I extract the spool output into a dataset using INPUT ON, I get output like this:

Code:

JCL spool output....
...
...
  SORT FIELDS=(1,8,CH,A)
  SUM FIELDS=(10,4,ZD)
  SORT FIELDS=(1,50,CH,A,72,8,CH,A)
  SUM FIELDS=NONE
  EXT FLD=1,10
...


Now, how do I separate the instream control card information for each step and re-generate my JCL?

Or is there some other, easier way to re-build my JCL?

Kindly help,
Thanks
Phantom
kolusu
Site Admin
Joined: 26 Nov 2002   Posts: 12376   Topics: 75   Location: San Jose

Posted: Thu Oct 28, 2004 11:37 am

phantom,

Quote:

Now, I would like the program to cancel the job and re-start the same job from R030, but this time with the parm changed to the correct value.


You are unnecessarily complicating a simple requirement.

Once you figure out that the pgm has received a wrong parm value, generate the JCL from that step onward within the COBOL program and submit it to the INTRDR. Also set a return code of 16 (MOVE 16 TO RETURN-CODE).

Now check this return code on the remaining steps so that they are simply skipped. Once this job completes, the submitted job will kick off.
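
Just to illustrate the idea (a sketch only - the DD name RDRJCL is made up and your shop's JES setup may differ): give the step that runs your program a DD pointing at the internal reader, and whatever JCL the program writes to that DD is submitted as a new job. Re-using the R030 step from your example:

Code:

//R030     EXEC PGM=MYPGM,PARM='WRONG PARM'
//STEPLIB  DD DSN=TSOID.LOAD,DISP=SHR
//SYSOUT   DD SYSOUT=*
//*  Internal reader - records written to this DD are submitted as a job
//RDRJCL   DD SYSOUT=(*,INTRDR)
//SYSIN    DD *
  EXT FLD=1,10
/*

In the program you just write the generated 80-byte JCL card images to RDRJCL and close the file; JES picks the job up from there.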


Hope this helps...

Cheers

kolusu
_________________
Kolusu
www.linkedin.com/in/kolusu
Phantom
Data Mgmt Moderator
Joined: 07 Jan 2003   Posts: 1056   Topics: 91   Location: The Blue Planet

Posted: Thu Oct 28, 2004 11:55 am

Kolusu,

I still have a problem. I don't want to skip the subsequent steps, i.e. steps R040 & R050. Actually, either of them may receive input from R030 (my program). There might also be cases where steps R040 & R050 themselves contain instream control cards, making it difficult for me to recreate the JCL. (I was just trying to give you an example.)

Step R030 (in which my program is executed) should be executed again with the corrected PARM, and its output might feed the subsequent steps (say R040, R050, ...). Since the first two steps ran successfully, I would like to restart the job from R030, the step that had the error in its parm.

Kindly help,

Thanks,
Phantom
kolusu
Site Admin
Joined: 26 Nov 2002   Posts: 12376   Topics: 75   Location: San Jose

Posted: Thu Oct 28, 2004 12:07 pm

Phantom,

You will not have any problem at all. Let me explain it with an example.

Say your JCL is stored in
Code:

PHANTOM.TEST.JCL(MYJCL)


Now in step R030 you find that you have received a wrong parm value, and you need to generate the JCL to be submitted to the INTRDR. Open the dataset PHANTOM.TEST.JCL(MYJCL) in your pgm, read the JCL up to step R030, then write that step and all the steps following it and submit them to the INTRDR. By doing so you have no problem with the control cards.

Since you are re-submitting the job from step R030, you need to skip steps R040 & R050 in the original job. That is where the return code comes into the picture.
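
For instance, something along these lines in the job would keep R040 & R050 from running when R030 sets return code 16 (a sketch only, using the RC 16 convention from above; the name CHKPARM is arbitrary):

Code:

//*  R030 sets RC=16 when it finds a bad parm and re-submits from R030
//CHKPARM  IF (R030.RC NE 16) THEN
//R040     EXEC PGM=...
//R050     EXEC PGM=...
//         ENDIF

And since the re-submitted copy runs R030 with the corrected parm (ending with RC 0), the same test lets R040 & R050 run there.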

Hope this helps...

Cheers

kolusu
_________________
Kolusu
www.linkedin.com/in/kolusu
Phantom
Data Mgmt Moderator
Joined: 07 Jan 2003   Posts: 1056   Topics: 91   Location: The Blue Planet

Posted: Thu Oct 28, 2004 12:13 pm

Oops, there is something I need to clarify. I am writing a program which everyone could use, so there is no way I could know the JCL dataset from which the job was submitted. Also, there could be many situations where a colleague goes to SDSF, does an SJ on my job, changes the input & output file names and runs it under his own id.

That is the reason why I said "Re-generating JCL from SDSF". Sorry for not mentioning this earlier.

Thanks,
Phantom
kolusu
Site Admin
Joined: 26 Nov 2002   Posts: 12376   Topics: 75   Location: San Jose

Posted: Thu Oct 28, 2004 1:03 pm

Phantom,

If you are coding the program as a generic one, then you need to provide some generic DD name through which your program can access the JCL for re-creating it.
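
For example (just a sketch - the names TOOLSTEP and TOOLJCL are arbitrary), every job that uses the tool would carry one extra DD on the tool's step, pointing at the member the job was submitted from, and the program reads the JCL back through that DD:

Code:

//TOOLSTEP EXEC PGM=MYPGM
//STEPLIB  DD DSN=TSOID.LOAD,DISP=SHR
//SYSOUT   DD SYSOUT=*
//*  Generic DD through which the program re-reads the submitted JCL
//TOOLJCL  DD DSN=PHANTOM.TEST.JCL(MYJCL),DISP=SHR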

Kolusu
_________________
Kolusu
www.linkedin.com/in/kolusu
Phantom
Data Mgmt Moderator
Joined: 07 Jan 2003   Posts: 1056   Topics: 91   Location: The Blue Planet

Posted: Fri Oct 29, 2004 2:33 am

Kolusu,

Isn't there any other way to re-generate JCL from SDSF, apart from asking everyone to give their JCL library? I'm trying to build a tool (something like the ISPF 3.13 compare utility) which could be plugged into any JCL. The output of my tool can be fed to subsequent programs by the individual developer. At this point I can't go and ask everyone to create/use a generic JCL or specify their dataset name when they submit their job. They can indeed go to SDSF, do an SJ on a job and submit it again. The tool may be executed in any step of a JCL and its output could be fed to the next steps, so there is no way to use a generic JCL. Only that particular step can be made generic.

Please help me to get around this problem. I have only one option - regenerate from SDSF.

Thanks a lot,
Phantom
Bill Dennis
Advanced
Joined: 03 Dec 2002   Posts: 579   Topics: 1   Location: Iowa, USA

Posted: Fri Oct 29, 2004 8:01 am

Can you invoke batch SDSF in step 1 to do an SJ on yourself and PRINT the JCL into an assigned dataset for later editing and resubmission?

Regards,
Bill
Phantom
Data Mgmt Moderator
Joined: 07 Jan 2003   Posts: 1056   Topics: 91   Location: The Blue Planet

Posted: Sat Oct 30, 2004 2:08 am

Bill,

Yup, that is what I'm trying to do (but SJ does not seem to work in batch SDSF; all I could do is 'S'). In that case I have problems with the instream control cards. Please see my first post for the problems I'm facing: all the instream control cards (for the different steps) are grouped together and I'm unable to separate them back into individual steps.
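
For reference, my batch step looks roughly like this (treat it as a sketch - the job and dataset names are made up, and the selection and PRINT commands are exactly the part I'm still fighting with, so they need to be verified against your SDSF level):

Code:

//GETJCL   EXEC PGM=SDSF
//ISFOUT   DD SYSOUT=*
//ISFIN    DD *
  INPUT ON
  PREFIX MYJOB*
  OWNER MYUSER
  ST
  FIND 'MYJOB'
  S
  PRINT ODSN 'TSOID.SPOOL.COPY' * NEW
  PRINT
  PRINT CLOSE
/*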

Kindly help,

Thanks,
Phantom
Cogito-Ergo-Sum
Advanced
Joined: 15 Dec 2002   Posts: 637   Topics: 43   Location: Bengaluru, INDIA

Posted: Sat Oct 30, 2004 8:39 am

Phantom,
I am late to this thread. So, apologies, if I sound repetitive.

Going through your first post, I can understand the following.

1. You have a step (executing a program) that receives a PARM.
2. This PARM may/may not be correct.
3. If wrong, then you want the step to be restarted with a correct PARM value.

If the above is right, can you please answer these questions?

1. Who decides the PARM value being wrong?
2. How is the correct PARM value created?
_________________
ALL opinions are welcome.

Debugging tip:
When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.
-- Sherlock Holmes.
Phantom
Data Mgmt Moderator
Joined: 07 Jan 2003   Posts: 1056   Topics: 91   Location: The Blue Planet

Posted: Mon Nov 01, 2004 12:13 am

Cogito,

This is how my tool is going to work. The tool is basically built to save CPU & job run time. My program works absolutely fine when the input file(s) do not contain any duplicate records, but if they do have duplicates then I need to pass through the data twice (one extra pass to handle the duplicates by inserting a sequence number, as shown below). So, if my job takes X minutes to complete the operation for a unique set of records, it is expected to take nearly 2X to handle the duplicates and do the processing.

Code:

Input:
AAAAAA
AAAAAA
BBBBBB
CCCCCC
CCCCCC
DDDDDD

Input file after inserting sequence numbers (duplicates made unique):
AAAAAA000001
AAAAAA000002
BBBBBB000001
CCCCCC000001
CCCCCC000002
DDDDDD000001
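
(For reference, that second listing is what the extra pass produces - roughly like the sketch below, assuming a DFSORT level that supports SEQNUM with RESTART; the step and dataset names are made up.)

Code:

//ADDSEQ   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=TSOID.DUPS.INPUT,DISP=SHR
//SORTOUT  DD DSN=TSOID.DUPS.SEQED,
//            DISP=(,CATLG,DELETE),
//            UNIT=SYSDA,
//            SPACE=(CYL,(1,1),RLSE)
//SYSIN    DD *
* Append a 6-byte sequence number that restarts for each new key value
  SORT FIELDS=(1,6,CH,A)
  OUTREC FIELDS=(1,6,SEQNUM,6,ZD,RESTART=(1,6))
/*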


Right now I am forced to check for & handle duplicates in the input files every time, regardless of whether the file actually contains duplicates or not. So I decided to give the users an overriding option: if the user knows that the datasets do not contain any duplicates, they can pass PARM='NODUPS'.

But now I need to take care of manual errors. People normally go to their spool, do an SJ on their last job, change the datasets and submit it again without realising that the new datasets have duplicates. This is what I'm trying to automate.

When NODUPS is passed as the PARM, my program will not scan for duplicates, but if the file actually has some then the output will be wrong. As I said in my earlier post, the program can detect this error, but only at the last moment (when it generates the output). Instead of just throwing an error message, I want to handle this case myself. I want my tool to act intelligently: when it knows that the NODUPS parm is wrong, it should correct the parm automatically and re-submit the job from the current step. This way, when the user does an SJ on his last job again, he won't repeat the mistake of giving a wrong PARM.

Also, I'm trying to introduce a bit of A.I. (artificial intelligence) into my tool, i.e. "learning by experience". When the user passes wrong information (like the case explained above) the first time, the tool has no idea, but from the next time onwards it will correct itself without the user's intervention. That is, if the input datasets contain duplicates and the user gave the NODUPS option the first time, the tool will run with NODUPS, identify the error at the last moment, correct itself and resubmit the job. But from the next run onwards it will immediately know that the user gave a wrong parm last time, so this time it will kill the job and re-submit it with the correct parm in the initial stages itself.

Hope I have explained my requirement in detail.

Looking forward to your kind support.

Thanks,
Phantom
Cogito-Ergo-Sum
Advanced
Joined: 15 Dec 2002   Posts: 637   Topics: 43   Location: Bengaluru, INDIA

Posted: Mon Nov 01, 2004 1:21 am

Hi Phantom,
It seems your ultimate aim is to ensure that your program processes the records whether or not the input has duplicates. I think the processing program should be written so that it handles the records whether or not duplicates are present. Otherwise, what you are trying to do seems like overkill to me.

Is this a new program or an existing program?
_________________
ALL opinions are welcome.

Debugging tip:
When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.
-- Sherlock Holmes.
kolusu
Site Admin
Joined: 26 Nov 2002   Posts: 12376   Topics: 75   Location: San Jose

Posted: Mon Nov 01, 2004 6:10 am

phantom,

As I mentioned earlier, you are making the tool too complicated. First let me tell you the problems you will have generating the JCL from SDSF.

1. Do you have authority to read everybody's spool?

2. Let us say you do have the authority to read it, but how do you make sure that you are reading the correct output? i.e. are you going to read the control blocks and get the job number? You need the job number, which is unique, because the user might use the same jobname for other jobs, or he might have submitted the same job more than once. I cannot see how you will be able to do it without scanning the control blocks for the job number.

You would have to go through all that control block scanning just to figure out whether the input had dups? It ain't worth it.

From my understanding of your problem, you can find out for yourself whether the input has dups or not.

Add this step if you have DFSORT
Code:

//DUPEFIND EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//IN       DD DSN=YOUR INPUT FILE,
//            DISP=SHR
//OUT      DD DSN=NO DUPS  FILE,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,
//            SPACE=(CYL,(1,1),RLSE)
//DUPES    DD DSN=YOUR DUPS FILE,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,
//            SPACE=(CYL,(1,1),RLSE)
//TOOLIN   DD *
  SELECT FROM(IN) TO(OUT) ON(1,N,CH) FIRST DISCARD(DUPES)
/*


Add this step if you have syncsort.
Code:

//DUPEFIND EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=YOUR INPUT FILE,
//            DISP=SHR
//SORTOUT  DD DSN=NO DUPS  FILE,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,
//            SPACE=(CYL,(1,1),RLSE)
//SORTXSUM DD DSN=YOUR DUPS FILE,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,
//            SPACE=(CYL,(1,1),RLSE)
//SYSIN    DD *
  SORT FIELDS=(1,N,CH,A)
  SUM FIELDS=NONE,XSUM
/*


Now just read the dups file in your pgm for 1 record. If the file is empty then you don't have dups; if it has at least one record then you need to process the input as having dups.

By doing so you eliminate the need to read the sysout, re-generate the JCL and re-submit the job.

This method also eliminates the user having to remember to pass the right parm. Everything is automated and the user just needs to supply the input dataset. It is your program that determines whether the input has dups or not.

Hope this helps...

Cheers

Kolusu
_________________
Kolusu
www.linkedin.com/in/kolusu
Phantom
Data Mgmt Moderator
Joined: 07 Jan 2003   Posts: 1056   Topics: 91   Location: The Blue Planet

Posted: Mon Nov 01, 2004 11:21 am

Ah!!! I am caught in a strange situation. I don't know what to do.

Quote:

1. Do you have authority to read everybody's spool?


I do have the authority, but I don't actually need it. Say I give you my tool: you are going to include it in your job and run the job from your id. So it is not me reading your spool, it is the program doing that under your id. The program already processes the control blocks (TCB, TIOT, JFCB, etc.) to get the job information and also information about the input datasets (LRECL, RECFM, etc.), so control block processing is nothing new to my code.

Basically most of the I/O is handled by sort: my program internally calls sort, and sort is my major driving force. Sort is really fast, but I'm trying to fine-tune it as much as possible.

Here is a live example:
I pass two input files, each containing 3 million records (LRECL 300). To make one pass over this data, my program (which in turn calls sort) takes 20 seconds of CPU and 2 minutes 20 seconds of elapsed time.

Now, if I try to handle the duplicates using sort, generate a sequence number for the duplicate records with a program and then process the data, it is equivalent to making more than 2 passes over the dataset. So this will take at least (around) twice the time to complete the job.

The thing to note is that the input files will be huge 90% of the time (millions of records). Whether or not there are duplicates in the input file, I would need to code this sort step to check for them; whereas if I just regenerate the JCL, it takes only a fraction of the time. (Control block processing is needed anyway to know the LRECL & RECFM, so that cannot be considered overhead.)

So, here you go, this is my situation.

Please help,

Thanks,
Phantom
Cogito-Ergo-Sum
Advanced
Joined: 15 Dec 2002   Posts: 637   Topics: 43   Location: Bengaluru, INDIA

Posted: Mon Nov 01, 2004 12:21 pm

Phantom,
I still think that a SORT step to check for duplicates (rather than your program doing it) before you run your program is the better approach.
_________________
ALL opinions are welcome.

Debugging tip:
When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.
-- Sherlock Holmes.
Page 1 of 2