Joined: 13 Dec 2002 Posts: 45 Topics: 3 Location: 3rd Block from the SUN
Posted: Mon Aug 23, 2004 11:00 am Post subject:
Quote:
<snip> run approximately 32000 updates <snip>
Do you mean to say your job has 32000 update statements instream? Do all the SQL update statements belong to the same table, or are they spread across the database?
I would suggest you break them into different steps and use the Batch Load utility.
Joined: 26 Nov 2002 Posts: 12376 Topics: 75 Location: San Jose
Posted: Mon Aug 23, 2004 11:17 am Post subject:
pavani,
Quote:
1. Is there any limit on the maximum number of SQL statements that can be used in the JCL in order for me to execute them in production?
2. If there is a limit, how do we know the limit on the number of SQLs to be run?
There is no limit on the number of SQL statements, but there is a limit on the length of each SQL statement. The maximum length for a statement is 32765 bytes.
Quote:
3. What is the best way for me to run these updates in production, where I can use only a JCL job?
Generate the update statements and code a COMMIT after every 500 records, and you can use just one JCL step to update the table.
Code:
//STEP0100 EXEC PGM=IKJEFT01
//SYSTSIN DD *
DSN SYSTEM(xxxx)
RUN PROGRAM(DSNTEP2) -
PLAN(DSNTEP2) -
LIB('DB2P.RUNLIB.LOAD')
END
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
YOUR UPDATE STATEMENT 001;
YOUR UPDATE STATEMENT 002;
YOUR UPDATE STATEMENT 003;
YOUR UPDATE STATEMENT 004;
...
YOUR UPDATE STATEMENT 500;
COMMIT;
YOUR UPDATE STATEMENT 501;
...
/*
Just to be more specific, I am going to update only one table. But I still need some more clarification - does each SQL update statement have a max of 32765 bytes, or can all the instream SQL update statements together have a maximum of 32765 bytes?
There is no way that I can test this. It will run directly in production, and I don't want to take a risk.
Joined: 26 Nov 2002 Posts: 12376 Topics: 75 Location: San Jose
Posted: Mon Aug 23, 2004 1:13 pm Post subject:
Pavani,
Each SQL statement can have a max of 32765 bytes, and you can have 'N' number of SQL statements. Make sure that you have a delimiter between each SQL statement.
Since the number of updates is very high, I would like to have a File-AID step to create the update SQL statements. But now the question is: is it possible to insert a COMMIT statement after every 500th SQL statement that is created through File-AID?
Or what is the alternative for inserting a COMMIT statement after every 500th record rather than inserting it manually?
Joined: 26 Nov 2002 Posts: 12376 Topics: 75 Location: San Jose
Posted: Mon Aug 23, 2004 1:52 pm Post subject:
Pavani,
I usually use an Easytrieve program to generate the SQL statements. Are you using File-AID for DB2 for generating the SQL statements or regular File-AID? Let me check whether there is a way to code a COMMIT statement after every 500 records via File-AID.
I am using regular File-AID only. Basically my input file (fixed) will have 32000 records containing the primary key data of the table in a specific format. I use this as the input file in File-AID to frame the SQL updates and create an output file that will then contain 32000 UPDATE statements.
Up to this point I can use File-AID. But how can I insert the COMMIT; after every 500th SQL update that is created?
Joined: 26 Nov 2002 Posts: 12376 Topics: 75 Location: San Jose
Posted: Mon Aug 23, 2004 2:33 pm Post subject:
Pavani,
Can you post a sample of your input (just 5 records), and also a sample of what the output update statements should look like? Maybe we can use SORT to generate the update statements. Post the LRECL and RECFM of the input dataset as well.
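Just to give an idea of what a SORT-generated solution could look like (the key position/length and the table and column names below are made up - adjust them to your layout), a step along these lines would turn each key record into an 80-byte UPDATE statement:
Code:
//GENSQL   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=YOUR.KEY.FILE,DISP=SHR
//SORTOUT  DD DSN=YOUR.UPDATE.STMTS,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(5,5),RLSE),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=0)
//SYSIN    DD *
* COPY EVERY KEY RECORD AND WRAP AN UPDATE STATEMENT AROUND IT.
* ASSUMES THE KEY IS A 10-BYTE CHARACTER FIELD IN COLUMNS 1-10.
  SORT FIELDS=COPY
  OUTREC FIELDS=(C'UPDATE MYTABLE SET FIELD3 = ''ABCD'' WHERE ',
                 C'KEY1 = ''',1,10,C''';',80:X)
/*
Since the output is FB/LRECL=80, it can be fed straight into the SYSIN of the DSNTEP2 step shown earlier.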
Joined: 15 Dec 2002 Posts: 637 Topics: 43 Location: Bengaluru, INDIA
Posted: Tue Aug 24, 2004 2:55 am Post subject:
I do not quite get the problem here. It seems you want to populate the column FILED3 (or is it FIELD3?) with 'ABCD' for all the rows in the table, because your UPDATE statements apply to all of the rows in the table.
If indeed you want to populate the column FILED3, then you should:
1. Unload the table to a flat file.
2. Use DFSORT/Syncsort to format the file accordingly.
3. Load-replace the formatted file into the table; a rough skeleton of this step is sketched below.
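A bare-bones skeleton of the load-replace step (assuming your site uses the standard DSNUPROC utility procedure; the library, table and column names are invented and only two columns are shown) would be something like:
Code:
//LOADTB  EXEC DSNUPROC,LIB='DB2P.SDSNLOAD',SYSTEM=DB2P,
//             UID='LOADTB',UTPROC=''
//*  SYSREC IS THE REFORMATTED FILE PRODUCED BY THE SORT IN STEP 2
//DSNUPROC.SYSREC  DD DSN=YOUR.FORMATTED.UNLOAD,DISP=SHR
//DSNUPROC.SYSUT1  DD UNIT=SYSDA,SPACE=(CYL,(10,10),RLSE)
//DSNUPROC.SORTOUT DD UNIT=SYSDA,SPACE=(CYL,(10,10),RLSE)
//DSNUPROC.SYSIN   DD *
  LOAD DATA LOG NO REPLACE INDDN SYSREC
    INTO TABLE MYTABLE
    (KEY1   POSITION(1:10)  CHAR(10),
     FIELD3 POSITION(11:14) CHAR(4))
/*
Remember that LOAD with LOG NO leaves the tablespace in COPY pending, so an image copy is needed afterwards, and the table is unavailable while the utility runs.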
Unless there is something else to this problem, you are using a highly circuitous method to accomplish something simple.
_________________
ALL opinions are welcome.
Debugging tip:
When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.
-- Sherlock Holmes.
Joined: 26 Nov 2002 Posts: 12376 Topics: 75 Location: San Jose
Posted: Tue Aug 24, 2004 6:04 am Post subject:
pavani,
The following JCL will give you the desired results. I have shown just one set of control cards. You just need to repeat the same control cards another 63 times, as 32000/500 = 64.
You need to consider the entire set of control cards (there are 2 sets) as one unit and just repeat it another 63 times. When you are in edit mode, put an RR on the first line of the control cards and RR63 on the last line, and you have all the control cards necessary to insert a COMMIT after every 500 records.
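If you would rather not repeat the control cards 64 times, here is a rough single-pass sketch of the same idea with DFSORT (the dataset names are placeholders and the statements are assumed to be in an FB/LRECL=80 file): number every record, and whenever the sequence number is a multiple of 500 (its last three digits are '000' or '500'), write a COMMIT; record right after it.
Code:
//ADDCOMIT EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=YOUR.UPDATE.STMTS,DISP=SHR
//SORTOUT  DD DSN=YOUR.UPDATE.STMTS.COMMITS,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(5,5),RLSE),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=0)
//SYSIN    DD *
* TAG EACH STATEMENT WITH AN 8-DIGIT SEQUENCE NUMBER IN COLUMNS 81-88
  OPTION COPY
  INREC FIELDS=(1,80,SEQNUM,8,ZD)
* WHEN THE LAST 3 DIGITS OF THE SEQNUM ARE 500 OR 000, THE COUNT IS
* A MULTIPLE OF 500, SO ADD A COMMIT; RECORD AFTER THE STATEMENT
  OUTFIL FNAMES=SORTOUT,
    IFTHEN=(WHEN=(86,3,CH,EQ,C'500',OR,86,3,CH,EQ,C'000'),
            BUILD=(1,80,/,C'COMMIT;',80:X)),
    IFTHEN=(WHEN=NONE,BUILD=(1,80)),
    TRAILER1=(C'COMMIT;')
/*
The TRAILER1 puts a final COMMIT; after the last (partial) group; if the record count happens to be an exact multiple of 500 you just get one extra, harmless COMMIT.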
Cogito: I guess Pavani has a table and she is just updating a few records. For example, if you have a table with 5 million rows and you are updating only 32000 of them, it is a good idea to generate update statements instead of unloading and reloading the entire data.
Joined: 15 Dec 2002 Posts: 637 Topics: 43 Location: Bengaluru, INDIA
Posted: Tue Aug 24, 2004 8:35 am Post subject:
Kolusu,
I am not adept in DB2, etc. It is just that writing 32000 queries sounded far-fetched to me. One query doing 32000 updates is fine with me.
And yet, unloading the table, doing selective formatting and then load-replacing may not be as good an idea as writing 32000 queries?
I thought unload / file formatting / reload via JCL was better than DB2 queries in JCL...
_________________
ALL opinions are welcome.
Debugging tip:
When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.
-- Sherlock Holmes.
Joined: 26 Nov 2002 Posts: 12376 Topics: 75 Location: San Jose
Posted: Tue Aug 24, 2004 9:03 am Post subject:
Cogito,
I will try to explain with an example. In our system we have a table which right now has 22 million rows. Usually our weekend process loads the data into the system. It so happened that there was a bug in one of the programs, and that bug affected around 300,000 rows. The bug was noticed on a Monday, when the customers were busy looking at the data on their screens.
Now let us consider the options of fixing the data.
Option 1: unload/reload
1. Unload the entire table - (30 - 45 minutes)
2. Reformat the file using utility/pgm - (15 minutes)
3. Reload the data - (45 - 60 minutes)
The only problem with this approach is that the table will be unavailable for at least an hour. The customers cannot live without the data for a complete hour on a Monday.
Option 2: unload specific keys and update them
1. unload only wrong data keys - (5 minutes)
2. generate the update sql statements -(5 minutes)
3. Run the sql updates - (5 minutes)
There is absolutely zero downtime, and the customers don't even know that the data has been updated.
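As a sketch of how steps 1 and 2 can even be combined (the table, column and dataset names here are made up, and the WHERE clause is whatever identifies the bad rows), you can let DB2 build the update statements for you with DSNTIAUL and the SQL parm:
Code:
//GENUPDT  EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSPUNCH DD SYSOUT=*
//SYSREC00 DD DSN=YOUR.UPDATE.STMTS,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(5,5),RLSE)
//SYSTSIN  DD *
DSN SYSTEM(xxxx)
RUN PROGRAM(DSNTIAUL) -
    PLAN(DSNTIAUL) -
    LIB('DB2P.RUNLIB.LOAD') -
    PARMS('SQL')
END
//SYSIN    DD *
  SELECT CAST('UPDATE MYTABLE SET FIELD3 = ''ABCD'' WHERE KEY1 = '''
              CONCAT KEY1 CONCAT ''';' AS CHAR(80))
    FROM MYTABLE
   WHERE FIELD3 <> 'ABCD';
/*
The CAST to CHAR(80) keeps SYSREC00 as a fixed 80-byte file, so the generated statements can go straight into the SYSIN of the DSNTEP2 step shown earlier, with the COMMITs inserted the same way as before.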
Option 3: unload specific keys and reload them
1. unload only wrong data keys - (5 minutes)
2. generate the delete sql statements -(5 minutes)
3. Run the sql deletes statements - (5 minutes)
4. create a load file for the wrong keys - (5 minutes)
5. Load RESUME the corrected data for the wrong keys - (15 minutes)
There is a window of 15 minutes of downtime, as the utility has exclusive control over the table. But 15 minutes of downtime is reasonable, and no one can complain about that.
Now I think you are in a better position to pick the optimal solution.