ed.sam13 (Beginner)
Joined: 09 Aug 2010   Posts: 31   Topics: 11
Posted: Wed May 27, 2015 1:28 pm   Post subject: Re-numbering GDG generations
I have an upstream system that pushes files via SFT, and they arrive on the mainframe as GDG generations. The requirement is to process every file without missing any generation, and in the same sequence in which the files came in. I was thinking of always reading the G0001 generation and deleting it once processed. But there is a possibility that while I am processing G0001, another file comes in as G0002.
So to handle this scenario, at the end of the job I want to renumber all the GDG generations starting from 1.
For example, if I am processing G0001 and three more generations come in as G0002, G0003 and G0004, at the end of processing I want to renumber them to G0001, G0002 and G0003 so that I consistently read from G0001.
Is there a way to achieve this?
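For reference, the renumbering itself would boil down to a series of IDCAMS renames of the absolute generation names, roughly like the sketch below (the data set names are hypothetical, and as the replies suggest, there are simpler ways to meet the underlying requirement):
Code:
//RENUM    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* SHIFT EACH REMAINING GENERATION DOWN BY ONE */
  /* (ASSUMES G0001V00 WAS ALREADY DELETED)      */
  ALTER MY.SFT.GDG.G0002V00 NEWNAME(MY.SFT.GDG.G0001V00)
  ALTER MY.SFT.GDG.G0003V00 NEWNAME(MY.SFT.GDG.G0002V00)
  ALTER MY.SFT.GDG.G0004V00 NEWNAME(MY.SFT.GDG.G0003V00)
/*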
kolusu (Site Admin)
Joined: 26 Nov 2002   Posts: 12376   Topics: 75   Location: San Jose
Posted: Wed May 27, 2015 2:50 pm
ed.sam13,
Why bother hard-coding the names of the GDG generations? Let your upstream send in all the files throughout the day; at the end of the day, read all the generations with just the base name, copy the contents into a single backup file (in LIFO order), and then delete all the generations. With z/OS 2.1 you can copy the GDG generations in FIFO order too, as sketched below.
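A minimal sketch of the z/OS 2.1 approach, assuming DFSORT as the copy program and hypothetical data set names (GDGORDER=FIFO on the base-name DD makes the concatenation oldest-first):
Code:
//* Z/OS 2.1+: GDGORDER=FIFO READS THE GENERATIONS OLDEST-FIRST
//COPYALL  EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MY.SFT.GDG,DISP=SHR,GDGORDER=FIFO
//SORTOUT  DD DSN=MY.SFT.BACKUP,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(10,10),RLSE)
//SYSIN    DD *
  OPTION COPY
/*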
Alternatively, if you don't have a z/OS 2.1 system, check the Smart DFSORT trick "Copy GDG records in first in, first out order" here:
http://www-01.ibm.com/support/docview.wss?uid=isg3T7000094
ed.sam13 (Beginner)
Joined: 09 Aug 2010   Posts: 31   Topics: 11
Posted: Thu May 28, 2015 9:54 am
Hi Kolusu, thanks for your reply. We are on z/OS 1.13. Also, I am supposed to validate the header and the trailer of each file and send out an error file for each file.
kolusu (Site Admin)
Joined: 26 Nov 2002   Posts: 12376   Topics: 75   Location: San Jose
Posted: Thu May 28, 2015 10:40 am
ed.sam13 wrote:
Hi Kolusu, thanks for your reply. We are on z/OS 1.13. Also, I am supposed to validate the header and the trailer of each file and send out an error file for each file.
ed.sam13,
Look at the Smart DFSORT trick, which will let you copy the records in FIFO order. In that trick we concatenate all the generations to SORTIN. You can change it slightly and create individual steps for each generation, each running the program that does the validation, as in the sketch below.
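A rough sketch of those individual steps, assuming exactly three unprocessed generations; PGM=MYVALID, the ERRFILE DD, and the data set names are hypothetical placeholders:
Code:
//* OLDEST FIRST: WITH THREE GENERATIONS, (-2) IS THE OLDEST
//VAL1     EXEC PGM=MYVALID
//INFILE   DD DSN=MY.SFT.GDG(-2),DISP=SHR
//ERRFILE  DD DSN=MY.SFT.ERRFILE1,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(1,1),RLSE)
//VAL2     EXEC PGM=MYVALID
//INFILE   DD DSN=MY.SFT.GDG(-1),DISP=SHR
//ERRFILE  DD DSN=MY.SFT.ERRFILE2,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(1,1),RLSE)
//VAL3     EXEC PGM=MYVALID
//INFILE   DD DSN=MY.SFT.GDG(0),DISP=SHR
//ERRFILE  DD DSN=MY.SFT.ERRFILE3,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(1,1),RLSE)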
By the way, if you let me know the details of the generations and the validation logic, maybe we can suggest an alternative.
1. What is the LRECL and RECFM of each GDG generation?
2. How are the header and trailer identified?
3. What kind of validation are you looking at?
4. Do you need to copy each generation into a new file, or all generations into a single file?
ed.sam13 (Beginner)
Joined: 09 Aug 2010   Posts: 31   Topics: 11
Posted: Thu May 28, 2015 11:58 pm
Hi Kolusu,
Thanks for your reply. We have Syncsort 2.1 in our shop, but I will try that DFSORT trick anyway.
To answer your questions:
1) The LRECL is 3000 bytes and the RECFM is FB.
2) A header or trailer record is identified by byte 1 of the record: H for a header, T for a trailer.
3) We get the date, the file's sequence number, and hash keys, which we validate against the database. The trailer has the record count, which should tally with the number of detail records.
4) I don't need to copy at all, but I need to process the files and update the database as soon as I receive them.
The file looks something like this:
Code:
H|FILE_NAME|0000123|ABCDEFG|
D|...............
D|...............
T|2
Here 0000123 is the file sequence number and ABCDEFG is the hash key for the client.
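For the record-count check in 3), a minimal DFSORT sketch that prints the number of detail records in the newest generation (the data set name is hypothetical; comparing that count against the trailer value would be a separate step or program):
Code:
//CNTDTL   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MY.SFT.GDG(0),DISP=SHR
//SORTOUT  DD SYSOUT=*
//SYSIN    DD *
* KEEP ONLY DETAIL RECORDS, THEN WRITE JUST THEIR COUNT
  OPTION COPY
  INCLUDE COND=(1,1,CH,EQ,C'D')
  OUTFIL REMOVECC,NODETAIL,
  TRAILER1=('DETAIL COUNT: ',COUNT=(M11,LENGTH=8))
/*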
kolusu (Site Admin)
Joined: 26 Nov 2002   Posts: 12376   Topics: 75   Location: San Jose
Posted: Fri May 29, 2015 10:18 am
ed.sam13 wrote:
4) I don't need to copy at all, but I need to process the files and update the database as soon as I receive them.
ed.sam13,
In that case, why don't you schedule a job that runs as soon as the file is created and just runs the validation program? Even if you receive another file while you are validating, the scheduler will put a hold on that job until you are done, so there is no issue of missing a data set.
ed.sam13 (Beginner)
Joined: 09 Aug 2010   Posts: 31   Topics: 11
Posted: Fri May 29, 2015 12:50 pm
Thanks for your reply. I'm not sure I follow you 100%.
The problem is that SFT is set up to create a new GDG generation whenever a new file comes in, and changing the SFT setup to write to a flat file is out of scope. The job always processes the latest generation of the GDG. The file-processing job can run for 45 minutes to an hour, within which two new generations can arrive. When the job completes its current iteration and starts processing again, it will skip a generation.
kolusu (Site Admin)
Joined: 26 Nov 2002   Posts: 12376   Topics: 75   Location: San Jose
Posted: Fri May 29, 2015 3:45 pm
ed.sam13 wrote:
Thanks for your reply. I'm not sure I follow you 100%.
The problem is that SFT is set up to create a new GDG generation whenever a new file comes in, and changing the SFT setup to write to a flat file is out of scope. The job always processes the latest generation of the GDG. The file-processing job can run for 45 minutes to an hour, within which two new generations can arrive. When the job completes its current iteration and starts processing again, it will skip a generation.
ed.sam13,
I don't want you to change the SFTP process; just have it kick off your validation job. Every time the SFTP process runs, it will kick off another validation job, and with a dependency on the currently running validation job, you would never miss a generation.
Alternatively, have one job at the end of the day (when you know for sure you won't receive any more files) validate all the files at once by just reading the base, as in the sketch below.
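A minimal sketch of that end-of-day job, assuming the validation program can read all the generations through the base name (PGM=MYVALID and the data set names are hypothetical; note that the base-name concatenation presents the generations newest-first, and DISP=(OLD,DELETE) against the base should delete every generation at step end):
Code:
//* READ EVERY GENERATION VIA THE BASE NAME (NEWEST FIRST)
//VALIDATE EXEC PGM=MYVALID
//INFILE   DD DSN=MY.SFT.GDG,DISP=SHR
//ERRFILE  DD DSN=MY.SFT.ERRFILE,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(1,1),RLSE)
//* DELETE ALL GENERATIONS ONLY IF VALIDATION GAVE RC=0
//CLEANUP  EXEC PGM=IEFBR14,COND=(0,NE)
//DELALL   DD DSN=MY.SFT.GDG,DISP=(OLD,DELETE)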