user5 Beginner
Joined: 29 Dec 2006 Posts: 9 Topics: 4
Posted: Thu Mar 29, 2007 4:55 am Post subject: Daily job to delete data older than 15 days w/o EX access
Hi All,
I have a table that is shared across three regions (US/EMEA/Japan), so it is being inserted into or updated around the clock, i.e. I cannot hold an exclusive lock on the table at any time.
The business requires this table to hold the last 15 days of data, so each day I run a job to delete the records older than 15 days. The number of records to be deleted each day is around 6 lakh (600,000).
While the deletion is happening, the table may still be needed for an insert or an update. How can I handle this requirement?
What I have done so far is this: I wrote a delete program that declares a cursor to fetch records older than 15 days, restricted to the first 1000 rows only. The program fetches and deletes records until end of cursor, commits the change, and then reopens the same cursor for the next 1000 records. This continues until end of data.
I understood from my DBA that lock escalation happens after 1000 pages. The record length in my case is such that 1000 pages accommodate only about 1000 records, which is why my cursor is declared to fetch the first 1000 rows only.
FYI, I have specified an isolation level of CS in the bind card for my program.
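The batched cursor-delete loop described above can be sketched as follows. This is a minimal stand-in only: SQLite replaces DB2, a SELECT ... LIMIT replaces the DECLARE CURSOR ... FETCH FIRST 1000 ROWS ONLY declaration, and the table name ORDERS, column CREATED, and row counts are illustrative, not from the original post.

```python
import sqlite3

# SQLite stand-in for the DB2 table; ORDERS/CREATED are illustrative names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ORDERS (ID INTEGER PRIMARY KEY, CREATED TEXT)")
conn.executemany(
    "INSERT INTO ORDERS (CREATED) VALUES (?)",
    [("2007-03-01",)] * 2500 + [("2007-03-28",)] * 10,
)
conn.commit()

CUTOFF = "2007-03-14"   # plays the role of CURRENT DATE - 15 DAYS
BATCH = 1000            # stay under the lock-escalation threshold

while True:
    # Reopen the "cursor" for the next batch
    # (the FETCH FIRST 1000 ROWS ONLY of the real program).
    rows = conn.execute(
        "SELECT ID FROM ORDERS WHERE CREATED < ? LIMIT ?",
        (CUTOFF, BATCH),
    ).fetchall()
    if not rows:
        break                                    # end of data
    conn.executemany("DELETE FROM ORDERS WHERE ID = ?", rows)
    conn.commit()                                # release locks per batch

remaining = conn.execute("SELECT COUNT(*) FROM ORDERS").fetchone()[0]
print(remaining)  # only the rows newer than the cutoff survive -> 10
```

The key point the sketch shows is committing after every batch so no unit of work ever touches more pages than the escalation threshold allows.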
Thanks & Regards,
User5 |
kolusu Site Admin
Joined: 26 Nov 2002 Posts: 12376 Topics: 75 Location: San Jose
Posted: Thu Mar 29, 2007 8:35 am Post subject: |
If you are concerned about the locks, then I would suggest an alternative:
1. Unload the records that are to be deleted: use the SQL from your cursor declaration and unload all the qualifying rows to a flat file.
2. Now read that file back in and delete the records by key. By doing this you are not holding pages locked while scanning, and you will also have a backup of the rows deleted from the table.
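The two-step unload-then-delete approach might be sketched like this, again with SQLite and a CSV file standing in for DB2 and a flat-file unload utility; all object and file names are illustrative.

```python
import csv
import os
import sqlite3
import tempfile

# SQLite stand-in for the DB2 table; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ORDERS (ID INTEGER PRIMARY KEY, CREATED TEXT)")
conn.executemany("INSERT INTO ORDERS (CREATED) VALUES (?)",
                 [("2007-03-01",)] * 5 + [("2007-03-28",)] * 3)
conn.commit()

# Step 1: unload the rows to be deleted to a flat file
# (this file doubles as a backup of the deleted rows).
unload = tempfile.NamedTemporaryFile("w", suffix=".csv",
                                     delete=False, newline="")
with unload:
    writer = csv.writer(unload)
    for row in conn.execute(
            "SELECT ID, CREATED FROM ORDERS WHERE CREATED < '2007-03-14'"):
        writer.writerow(row)

# Step 2: read the file back and delete by key, then commit.
with open(unload.name, newline="") as f:
    keys = [(int(r[0]),) for r in csv.reader(f)]
conn.executemany("DELETE FROM ORDERS WHERE ID = ?", keys)
conn.commit()
os.unlink(unload.name)

left = conn.execute("SELECT COUNT(*) FROM ORDERS").fetchone()[0]
print(left)   # the 3 recent rows remain
```

Because the delete in step 2 targets individual keys read from the file, each unit of work can be kept as small as desired, and the unload file preserves the deleted data.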
Kolusu _________________ Kolusu
www.linkedin.com/in/kolusu |
vini_srcna Beginner
Joined: 17 Jul 2005 Posts: 9 Topics: 0
Posted: Fri Mar 30, 2007 4:07 am Post subject: |
I wish there were a batch window in which to run your program.
Still, deleting 6 lakh records using DELETE statements is quite an overhead.
Why can't you delete all the records in one shot? Maybe because you would get a timeout, or other applications would; that is the reason I was talking about a batch window.
Is that a partitioned table space? _________________ Thanks,
Vinay Kumar,
IBM Certified DB2 UDB Database Administrator for Z/OS.
IBM Certified Application Developer for DB2 V8 Family. |
Cogito-Ergo-Sum Advanced
Joined: 15 Dec 2002 Posts: 637 Topics: 43 Location: Bengaluru, INDIA
Posted: Fri Mar 30, 2007 5:39 am Post subject: |
Suppose you altered your programs to refer to a VIEW instead of the TABLE? This VIEW would contain only the records that you are NOT planning to delete. _________________ ALL opinions are welcome.
Debugging tip:
When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.
-- Sherlock Holmes. |
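For illustration, the VIEW idea above can be sketched as below (SQLite stand-in, illustrative names): applications would read a view restricted to the last 15 days rather than the base table. Note that, as the next reply points out, such a view is only a predicate over the base table and does not change locking behavior on it.

```python
import sqlite3

# SQLite stand-in; ORDERS/CREATED/RECENT_ORDERS are illustrative names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ORDERS (ID INTEGER PRIMARY KEY, CREATED TEXT)")
conn.executemany("INSERT INTO ORDERS (CREATED) VALUES (?)",
                 [("2007-03-01",)] * 4 + [("2007-03-28",)] * 2)
# The view exposes only the rows that are NOT due for deletion.
conn.execute(
    "CREATE VIEW RECENT_ORDERS AS "
    "SELECT * FROM ORDERS WHERE CREATED >= '2007-03-14'")
conn.commit()

visible = conn.execute(
    "SELECT COUNT(*) FROM RECENT_ORDERS").fetchone()[0]
print(visible)  # applications see only the 2 recent rows
```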
vini_srcna Beginner
Joined: 17 Jul 2005 Posts: 9 Topics: 0
Posted: Fri Mar 30, 2007 10:52 am Post subject: |
Quote: |
Suppose you altered your programs to refer to a VIEW instead of the TABLE? This VIEW would have the records that you are NOT planning to delete.
|
Did you mean that the view should be defined with a WHERE predicate selecting only the records whose data is NOT older than 15 days?
That makes no difference to the concurrency. A view does not store any data; it is not a physical object that can contain data. A view just directs you to the base table.
USER5:
You may also want to think about the logging feature in DB2. If you delete 6 lakh records using DELETE statements, DB2 logs the changes, which is additional overhead. If the requirement allows this data not to be logged, you can avoid logging it via the table definition, but that has a negative impact on recovery. It would be acceptable only if you have backups, because data cannot be recovered unless it is logged (for the LOG APPLY phase of recovery). _________________ Thanks,
Vinay Kumar,
IBM Certified DB2 UDB Database Administrator for Z/OS.
IBM Certified Application Developer for DB2 V8 Family. |
Cogito-Ergo-Sum Advanced
Joined: 15 Dec 2002 Posts: 637 Topics: 43 Location: Bengaluru, INDIA
Posted: Sat Mar 31, 2007 5:13 am Post subject: |
Quote: | That makes no difference to the concurrency. A view does not store any data; it is not a physical object that can contain data. A view just directs you to the base table. |
Hmm... you are right. _________________ ALL opinions are welcome.
Debugging tip:
When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.
-- Sherlock Holmes. |
Manas Biswal Intermediate
Joined: 29 Nov 2002 Posts: 382 Topics: 27 Location: Chennai, India
Posted: Wed Apr 11, 2007 10:12 am Post subject: |
vini_srcna,
Quote: | If the requirement allows this data not to be logged, you can avoid logging it via the table definition. |
Can you really prevent logging by specifying it in the table definition? Please let me know how; I had no idea about that.
user5,
Regarding your requirement, I agree with Kolusu: unloading would be a good option, and certainly better than a cursor delete in terms of concurrency.
That said, if I understand correctly, there does not seem to be any problem with your present approach either. Provided your LOCKMAX is high enough, your cursor should lock only the data older than 15 days, not the new data that you are inserting or updating. If there is any chance of lock escalation to the table level, you can ask your DBA to tune the LOCKSIZE and LOCKMAX parameters for your tablespace.
HTH...Regards,
Manas _________________ There is no path to peace. Peace is the path.
- Mahatma Gandhi (1869-1948) |