Prayank Beginner
Joined: 13 May 2003 Posts: 18 Topics: 9 Location: Indore,India
Posted: Tue May 13, 2003 6:10 am Post subject: Sequential files in COBOL
I want to read the records of a sequential file iteratively. How do I get back to the first record after encountering end of file? I want to do it without closing and opening the file again and again.
Prayank Beginner
Joined: 13 May 2003 Posts: 18 Topics: 9 Location: Indore,India
Posted: Tue May 13, 2003 7:59 am
Hi Ravi,
I need to read that file more than 2 lakh (200,000) times in one execution of the program, so I don't think closing and opening it each time is a good idea for efficiency reasons.
Your thoughts?
sp_shukla Beginner
Joined: 09 Apr 2003 Posts: 9 Topics: 0
Posted: Tue May 13, 2003 9:26 am
Prayank,
I don't know what the requirement is, but it's odd that you want to read a sequential file again after reading it once already. From your statement it seems you are opening the file for input only, so its contents are not going to change after the first pass; why do you want to read it again?
Anyway, whatever your requirement, it's not possible (at least as far as I know) to do it without reopening the file. Better to tell us your requirement so we can perhaps suggest a different approach.
SP
Prayank Beginner
Joined: 13 May 2003 Posts: 18 Topics: 9 Location: Indore,India
Posted: Tue May 13, 2003 10:58 am
Requirement: Let's say I have two files, F1 and F2. I want to check that every record in F1 is present in F2. So I pick a record from F1 and scan all the records of F2 until a match is found or end of file is reached, then repeat this for the next record from F1, and so on. Neither file is sorted.
sp_shukla Beginner
Joined: 09 Apr 2003 Posts: 9 Topics: 0
Posted: Tue May 13, 2003 12:26 pm
Prayank,
Sort both files before coding the program; it will make your life a lot easier. Also, if you have EASYTRIEVE (unless you want to do it in COBOL only), this will not take more than 15 lines of code. You can use the SORT utility to sort your files.
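For what it's worth, a basic sort step in JCL might look something like this; the dataset names and the sort key (here columns 1-10, character, ascending) are only placeholders to adjust to your record layout:
Code:
//SORTF2   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=YOUR.INPUT.F2,DISP=SHR
//SORTOUT  DD DSN=YOUR.SORTED.F2,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(10,5),RLSE)
//SYSIN    DD *
  SORT FIELDS=(1,10,CH,A)
/*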
SP
CaptBill Beginner
Joined: 02 Dec 2002 Posts: 100 Topics: 2 Location: Pasadena, California, USA
Posted: Tue May 13, 2003 3:32 pm
Prayank,
If you really want to do it the way you describe, then OPENing and CLOSEing the file each time is the only way.
If you can sort the files, then sp_shukla's method is best.
I think ICETOOL could also be used, but that would be better answered by one of the experts such as kolusu. Even Frank Yaeger admits he is amazing in what he can do with DFSORT and ICETOOL.
kolusu Site Admin
Joined: 26 Nov 2002 Posts: 12375 Topics: 75 Location: San Jose
Posted: Tue May 13, 2003 5:24 pm
Prayank,
If your sole intention is to read the same file twice, then you can allocate the same file to different DD names with DISP=SHR.
Code:
SELECT EMPLOYEE-FILE     ASSIGN EMP
SELECT SUB-EMPLOYEE-FILE ASSIGN SUBEMP
Now, in the JCL, assign the same input file to both the EMP and SUBEMP DD names with DISP=SHR.
This will save just a couple of CLOSEs and OPENs of the file.
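For illustration only, the two DD statements could then look like this (the dataset name is a placeholder):
Code:
//EMP      DD DSN=YOUR.INPUT.FILE,DISP=SHR
//SUBEMP   DD DSN=YOUR.INPUT.FILE,DISP=SHR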
Hope this helps...
cheers
kolusu
sp_shukla Beginner
Joined: 09 Apr 2003 Posts: 9 Topics: 0
Posted: Tue May 13, 2003 9:13 pm
Kolusu,
It seems from Prayank's requirement that he would need to open and close the file 200 thousand times if file 1 has 200 thousand records!!!
Prayank,
If both of your files have 200 thousand records then, with the approach you are trying to follow, I think you will end up writing a program that is nowhere close to an efficient one. Believe me, SORT the files and then write a simple program. This will save a lot of your time and effort as well as CPU resources.
SP
semigeezer Supermod
Joined: 03 Jan 2003 Posts: 1014 Topics: 13 Location: Atlantis
Posted: Tue May 13, 2003 10:07 pm
If it is only 200,000 records, and they aren't long records, why not just read the whole thing into storage and use that? If this is a task that will be done often, why not? If they are 100-byte records, that is only 20MB plus overhead; hash the records and it is much less. If you sort them on the way in, especially into a balanced tree, lookups will be very fast, a few seconds or less. On the other hand, rereading a file containing static information 200,000 times? Well, let's just say it won't win you any friends when the bill for CPU and I/O comes in.
Actually, why not just sort them and use an existing compare program? No code to write at all that way, except maybe some basic postprocessing.
How much effort you put in depends on how often you will do this. But reading 40 billion records (assuming two 200K-record files: 200,000 x 200,000 = 40,000,000,000) is definitely not the way to go. (At an 8 millisecond read per record, that would be almost 89,000 hours, over ten years, in read time alone. Add program and system overhead and you may as well go on a long vacation and hope it is done when you return. Assuming the data is blocked well, you should do better than 8ms/record, though.)
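As a rough COBOL sketch of that read-it-into-storage idea (everything here is illustrative: the table size, the 10-byte key, the record layout, and the assumption that F2 is loaded already in key order so the binary SEARCH ALL is valid):
Code:
WORKING-STORAGE SECTION.
01  WS-FLAGS.
    05  WS-F2-EOF-FLAG      PIC X VALUE 'N'.
        88  F2-EOF                VALUE 'Y'.
01  WS-F2-TABLE.
    05  WS-F2-COUNT         PIC 9(6) COMP VALUE ZERO.
    05  WS-F2-ENTRY         OCCURS 1 TO 200000 TIMES
                            DEPENDING ON WS-F2-COUNT
                            ASCENDING KEY IS WS-F2-KEY
                            INDEXED BY F2-IDX.
        10  WS-F2-KEY       PIC X(10).
        10  WS-F2-DATA      PIC X(90).
* ...
* Load F2 into the table once, at the start of the run
    PERFORM UNTIL F2-EOF
        READ F2-FILE
            AT END SET F2-EOF TO TRUE
            NOT AT END
                ADD 1 TO WS-F2-COUNT
                MOVE F2-REC TO WS-F2-ENTRY (WS-F2-COUNT)
        END-READ
    END-PERFORM
* Then, for each F1 record, a binary search instead of rereading F2
    SEARCH ALL WS-F2-ENTRY
        AT END DISPLAY 'NOT FOUND: ' F1-KEY
        WHEN WS-F2-KEY (F2-IDX) = F1-KEY
            DISPLAY 'FOUND: ' F1-KEY
    END-SEARCH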
Glenn Beginner
Joined: 23 Mar 2003 Posts: 56 Topics: 3
Posted: Tue May 13, 2003 11:13 pm
Sorting both files really will pay off (it'll be more efficient than reading the other file once for every record), even if the sorted files are only temporary output.
If there's some problem with you changing the job stream to add these sorts, then you have other problems, with your bosses and management, beyond anything we can help you with.
slade Intermediate
Joined: 07 Feb 2003 Posts: 266 Topics: 1 Location: Edison, NJ USA
Posted: Wed May 14, 2003 12:52 am
I worked on a pgm that did 80K open/closes; it ran 3+ hrs. We rewrote it to avoid the o/cs and it ran in under 10 minutes. If you're looking at 400K o/cs, that translates to 24 hrs of processing just for the opens.
There's an old Jewish saying:
"Pay the 2 dollars."
You do the sort and it's a simple file match pgm from there.
prakal Beginner
Joined: 14 Mar 2003 Posts: 22 Topics: 1
Posted: Wed May 14, 2003 10:55 pm
semigeezer,
Could you please let me know how to hash data in a mainframe environment?
I have heard a couple of my colleagues who work in Java talk about hashing. If there is a link you can provide with more information on hashing data on the mainframe, it would help.
Thanks
Prakal.
Dhruvi Beginner
Joined: 21 Mar 2003 Posts: 5 Topics: 1 Location: India
Posted: Tue Jul 08, 2003 1:23 am
Hi Prayank,
I did the same task that you are facing a problem with, and I did it in a very simple way. First I sorted both of the files. Then I read the first record of file F1, compared it with the first record of file F2, and applied the following conditions:
If F1 is equal to F2: record matched; go and read the next record from F1.
If F1 is less than F2: record F1 can't match any record of file F2, so drop it and pick the next record from F1.
If F1 is greater than F2: record F1 may still exist in file F2, so read the next record from file F2.
Continue the search.
I also needed to compare two big sequential files, and with this method there is no need to open and close the files again and again.
The search stops when the current F1 record is greater than the F2 record and file F2 has reached end of file.
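A bare-bones COBOL sketch of that match logic, assuming both files have already been sorted ascending on the key (the file, record, and key names are placeholders, and F1-EOF / F2-EOF are 88-level flags set by the read paragraphs):
Code:
    PERFORM READ-F1
    PERFORM READ-F2
    PERFORM UNTIL F1-EOF
        EVALUATE TRUE
            WHEN F2-EOF OR F1-KEY < F2-KEY
*               no matching record can exist in F2 any more
                DISPLAY 'NOT IN F2: ' F1-KEY
                PERFORM READ-F1
            WHEN F1-KEY = F2-KEY
                DISPLAY 'MATCHED:   ' F1-KEY
                PERFORM READ-F1
            WHEN OTHER
*               F1 key is higher, so advance F2
                PERFORM READ-F2
        END-EVALUATE
    END-PERFORM
    ...
READ-F1.
    READ F1-FILE AT END SET F1-EOF TO TRUE END-READ.
READ-F2.
    READ F2-FILE AT END SET F2-EOF TO TRUE END-READ.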
Hope it will help you,
regards,
Dhruvi
ofer71 Intermediate
Joined: 12 Feb 2003 Posts: 358 Topics: 4 Location: Israel
Posted: Tue Jul 08, 2003 6:15 am
And I'm just asking:
Why do this kind of job programmatically when you have great tools like ICETOOL, SORT, SUPERC, IEBCOMPR, etc.?
O.
Dhruvi Beginner
Joined: 21 Mar 2003 Posts: 5 Topics: 1 Location: India
Posted: Wed Jul 09, 2003 1:23 am
Because we need this processing inside the program. How could we use SUPERC or any other tool for a process inside a program?
Because the program is not only performing the compares; a lot of other things need to be processed in the program!!!