Posted: Thu Apr 21, 2005 4:31 am Post subject: CA7 Scheduling
I have three jobs A, B and C. A triggers B and B triggers C. A is triggered by an NDM process. The 2nd run of A should not start until job C of the previous cycle has completed. Is there any option to accomplish this using CA7 scheduling? We have no control over the NDM process, and the problem crops up when two successive A jobs get triggered one after the other quickly, in a narrow time window.
Joined: 26 Nov 2002 Posts: 12372 Topics: 75 Location: San Jose
Posted: Thu Apr 21, 2005 5:43 am Post subject:
v.suresh,
Merge JOB B and JOB C into JOB A. By doing so you will have only one job, so even if JOB A is triggered again within a short span of time, the new run will just wait in the queue until the prior run of JOB A has completed.
This is one of those tricky situations where, because the jobs are event-driven, the scheduling system is not going to have a lot of control over how the jobs are processed. Certainly, kolusu's suggestion deserves consideration, as does a possible re-design of the entire process.
However, there may be an alternative:
Have job A, at the beginning, check for the existence of a control dataset. If the dataset exists, either suspend the execution of A, or end A and have it reload itself into the schedule to run again after a predetermined interval (1 minute, 5 minutes, etc.). When job A has run successfully to completion, have it catalog the control dataset. Have job C delete the control dataset when it's done.
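The check/catalog/delete cycle above can be sketched as a rough Python analogy, with a lock file standing in for the control dataset. The file path, function names, and the "requeue" return value are all placeholders for illustration; a real CA7 setup would do the existence check with an IDCAMS step or a condition-code test in JCL, and note that a check-then-create sequence like this is not atomic.

```python
import os
import tempfile

# Hypothetical stand-in for the cataloged control dataset.
LOCK = os.path.join(tempfile.gettempdir(), "ca7_cycle.lock")

def job_a():
    """First step of the cycle: run only if no prior cycle is in flight."""
    if os.path.exists(LOCK):
        return "requeue"        # prior cycle still running: reload and retry later
    # ... JOB A's real work would go here, then trigger JOB B ...
    open(LOCK, "w").close()     # "catalog" the control dataset on success
    return "ran"

def job_c():
    """Last step of the cycle: release the lock for the next run of A."""
    # ... JOB C's real work would go here ...
    os.remove(LOCK)             # cycle complete: the next A may now run
```

With this sketch, a second call to `job_a()` before `job_c()` runs returns `"requeue"` instead of starting a second concurrent cycle.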
Posted: Thu Apr 21, 2005 9:36 am Post subject:
Quote:
Have job A, at the beginning, check for the existence of a control dataset. If the dataset exists, suspend the execution of A, or, end A and have it reload itself into the schedule to run again after a pre-determined time span (1 minute, 5 minutes, etc.). Have job A, when it has successfully and completely run, catalog the control dataset. Have job C delete the control dataset when it's done.
Superk,
There is a good chance of getting into an infinite loop. Consider the following scenario.
Code:
JOB A 1ST RUN AT 11:00 AM
JOB A 2ND RUN AT 11:15 AM (1ST RUN NOT COMPLETE)
JOB A 3RD RUN AT 11:20 AM (1ST RUN STILL NOT COMPLETE)
JOB A 4TH RUN AT 11:25 AM (1ST RUN STILL NOT COMPLETE!)
Each time, JOB A checks for the dataset and reloads itself, so you end up running JOB A in a loop.
The other alternative would be to let JOB A create a new GDG version every time it runs, and at the end of the day trigger JOB B and JOB C, which process all the GDG versions and delete them after successful completion.
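That GDG approach can be sketched with a simple Python analogy, where a list stands in for the generation data group. The names `job_a_run` and `end_of_day_b_and_c` are illustrative only; on the mainframe each run of A would write a (+1) generation, and the end-of-day B/C jobs would read the whole group and delete the generations on success.

```python
# Stand-in for the GDG: each element is one generation written by a run of JOB A.
generations = []

def job_a_run(data):
    """Each NDM-triggered run of JOB A just writes a new generation; no contention."""
    generations.append(data)

def end_of_day_b_and_c():
    """Once a day, B and C process every accumulated generation in one pass,
    then delete all versions after successful completion."""
    batch = list(generations)
    generations.clear()
    return batch
```

Because each run of A only appends a new generation, back-to-back triggers no longer step on each other; the serialization problem moves to a single end-of-day B/C run.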