Joined: 03 Jan 2003 Posts: 1014 Topics: 13 Location: Atlantis
Posted: Mon Nov 01, 2004 12:59 pm Post subject:
Why are you trying to do this in JCL? JCL is not a programming language. Why not just use a single step and call all of your programs from a driving REXX exec. I think you've decided that JCL is the answer to your problem before analysing your problem. Use the right tool for the JOB. JCL is not it.
Joined: 07 Jan 2003 Posts: 1056 Topics: 91 Location: The Blue Planet
Posted: Tue Nov 02, 2004 12:46 am Post subject:
semigeezer,
Quote:
I think you've decided that JCL is the answer to your problem before analysing your problem.
My tool is something like the ISPF 3.13 compare. It has both foreground and background execution modes. For the foreground mode I use Rexx. Also, from the Rexx online screen I have the option to submit a batch job. Since, as I said earlier, the input datasets I use are always huge, it's better to run the compare as a batch job rather than in foreground mode.
Then, I would like my tool to be usable as a plug-in. For example, you can compare two datasets through File-AID and then pass the output of the compare to your next set of programs. I want my tool to have that same capability.
If there is no way to regenerate the JCL from SDSF, then my other alternative will be this (as per everyone's advice):
1. On the first run, my tool does not know whether the input datasets contain duplicates, so it assumes that they do and processes the records accordingly.
2. From the next run onwards, it learns from experience (a touch of artificial intelligence) and acts on the outcome of the previous run; i.e., if it found no duplicates last time, it skips the SORT step that eliminates duplicates.
3. Note: the second assumption may not always hold. A dataset may have no duplicates one day and duplicates the next, but since the data volume is so huge, a dataset that normally carries duplicates will rarely turn out unique on any given day. If that does happen, my tool will track how often the dataset flips between duplicates and no duplicates. If the variation is high, the tool will always process that dataset as having duplicates; otherwise it will assume the records are unique. If that assumption turns out wrong, the tool can still detect the problem, albeit at the last stage, and then re-execute the process internally.
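The decision logic in steps 2 and 3 above could be sketched in a few lines of Rexx. This is only an illustration, not the tool itself; the history string, the 20% threshold, and the messages are all hypothetical placeholders:

```rexx
/* REXX - sketch of the adaptive de-dup decision described above.   */
/* dupHistory is a hypothetical record of recent runs:              */
/*   'Y' = duplicates were found, 'N' = none were found.            */
dupHistory = 'Y N N Y N'
flips = 0
do i = 1 to words(dupHistory)
   if word(dupHistory, i) = 'Y' then flips = flips + 1
end
if flips / words(dupHistory) > 0.2 then     /* threshold is arbitrary */
   say 'Variation is high - always run the SORT de-dup step'
else
   say 'Assume unique records; verify at the final stage,' ,
       'and re-run the process internally if the assumption fails'
```

In practice the history would be persisted between runs (for example in a dataset member) rather than hard-coded.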
If there is a way to regenerate JCL from SDSF after all, I would still like to know how.
Joined: 03 Jan 2003 Posts: 1014 Topics: 13 Location: Atlantis
Posted: Tue Nov 02, 2004 3:41 am Post subject:
I think you missed my point. JCL allocates files and calls programs (SORT, etc). Rexx allocates files and calls programs (SORT, etc). The difference is that with Rexx you have more than a simple sequential, unyielding process. The notion that running in batch precludes you from controlling your program flow with Rexx is what is causing you to search for this convoluted SDSF/JCL thing. All you have to do is run Rexx under IKJEFT01 (TSO). Anything related to allocations and program control you can do with JCL, you can do with Rexx and programs called from Rexx. The reverse is not true. You also need to ask if anyone else will want to maintain the proposed solution.
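To make the suggestion concrete: a batch job that hands control flow to Rexx is a single step running IKJEFT01, the TSO batch program. A minimal sketch, where the exec library name and exec name are placeholders:

```jcl
//RUNREXX  EXEC PGM=IKJEFT01
//SYSEXEC  DD DSN=MY.REXX.LIBRARY,DISP=SHR
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  %MYEXEC
/*
```

From inside MYEXEC the exec can ALLOCATE datasets, CALL programs such as SORT, test their return codes, and branch accordingly, which is exactly the conditional flow that plain JCL steps cannot express.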