Row and Statement based Trigger Question - IBM AS400


Thread: Row and Statement based Trigger Question

  1. Row and Statement based Trigger Question

    Hi All,

    This question is related to iSeries V5R4 and db2.

    I want to implement an AFTER DELETE trigger to save the deleted rows
    to an archive table. I initially defined it as a FOR EACH STATEMENT
    trigger that inserts all the deleted rows in one operation, like
    this:

    CREATE TRIGGER MyTable_TD
    AFTER DELETE ON MyTable
    REFERENCING OLD TABLE AS Deleted
    FOR EACH STATEMENT
    BEGIN
    INSERT INTO MyTableA SELECT * FROM Deleted;
    END

    This worked pretty well, but if I was bulk deleting millions of
    records, the delete would run for a long time without writing any
    archive records. Only once all the deletes completed would any archive
    records get written. If the operation was interrupted, the delete
    would end incomplete and no archive records would be written at all -
    the records would be lost forever. The archive table is not journaled.

    I can see a performance benefit in this approach as the inserts are
    done as a single operation (although there may be a penalty of
    individual inserts into a temp table).

    I am now attempting to rewrite this as a FOR EACH ROW trigger, on
    the expectation that one row insert will occur immediately after
    each row is deleted, so if the operation is interrupted, at most one
    row is lost.

    This is what I am trying to do:
    CREATE TRIGGER MyTable_TD
    AFTER DELETE ON MyTable
    REFERENCING OLD ROW AS Deleted
    FOR EACH ROW
    BEGIN
    INSERT INTO MyTableA SELECT Deleted.* from SYSIBM.SYSDUMMY1
    END

    But of course the trigger payload is invalid as Deleted is a row
    reference not a table reference.

    So my question is this: in a row based trigger, how can I reference
    the old deleted row in my insert statement? I cannot use a list of
    individually named columns because a) I want this to be generic, b) I
    want it to be low maintenance, and c) the underlying table has hundreds
    of columns and referencing them individually would be a royal PITA!

    Thanks in advance for any advice,
    JohnO

  2. Re: Row and Statement based Trigger Question

    On May 9, 11:45 am, JohnO wrote:
    > So my question is this: in a row based trigger, how can I reference
    > the old deleted row in my insert statement?

    I found this thread:
    http://groups.google.co.nz/group/com...5649e6e626354d

    In which Serge replied in the negative. Serge, as I said, the
    statement based trigger works, but it has a problem when I am deleting
    millions of rows.

  3. Re: Row and Statement based Trigger Question

    JohnO wrote:
    > So my question is this: in a row based trigger, how can I reference
    > the old deleted row in my insert statement?
    >
    > Serge, as I said, the statement based trigger works, but it has a
    > problem when I am deleting millions of rows.

    As always convenience has its price....
    In some future version of DB2 you may be able to do:
    INSERT INTO T VALUES ROW newrow
    (or something like that). But that'll be a while...

    Cheers
    Serge
    --
    Serge Rielau
    DB2 Solutions Development
    IBM Toronto Lab
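
    For reference, the row-level form that does work today names each OLD
    column explicitly. A minimal sketch, using hypothetical columns C1, C2
    and C3 in place of the real column list (which is exactly the
    maintenance burden described above); the MODE DB2ROW clause should make
    the insert happen as each row is deleted rather than after the whole
    statement, though that behaviour is worth verifying on V5R4:

    CREATE TRIGGER MyTable_TD
    AFTER DELETE ON MyTable
    REFERENCING OLD ROW AS Deleted
    FOR EACH ROW MODE DB2ROW
    BEGIN
    -- C1, C2, C3 are placeholders for the table's real columns
    INSERT INTO MyTableA (C1, C2, C3)
    VALUES (Deleted.C1, Deleted.C2, Deleted.C3);
    END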

  4. Re: Row and Statement based Trigger Question

    On May 9, 1:04 pm, Serge Rielau wrote:
    > As always convenience has its price....
    > In some future version of DB2 you may be able to do:
    > INSERT INTO T VALUES ROW newrow
    > (or something like that). But that'll be a while...

    Thanks Serge, I knew you would confirm my suspicions.

    Can you think of a way to improve the statement based trigger? It
    works, but I am not comfortable using it when deleting millions of
    rows, as the archive rows are lost if the delete is interrupted, yet
    the deleted records stay deleted. If I enable journaling on the
    archive table would that make it safer? Would there be a performance
    and disk space penalty?

    What I would really like would be for the delete to run for a while,
    and then be able to interrupt the delete, with the corresponding
    inserts completing. Then I could restart the delete another time.


    Thanks
    johnO

  5. Re: Row and Statement based Trigger Question

    > If I enable journaling on the
    > archive table would that make it safer?
    >

    Yes.

    > Would there be a performance [penalty]
    >

    I don't believe so. If I remember correctly, the OS will cache the
    inserts until a COMMIT is issued.

    > and disk space penalty?
    >

    Yes. Journalling does require some disk space.

    > What I would really like would be for the delete to run for a while,
    > and then be able to interrupt the delete, with the corresponding
    > inserts completing. Then I could restart the delete another time.
    >

    You could do this by making the delete a programmed housekeeping /
    archive process, rather than relying on a trigger.
    Are you likely to be doing bulk (multi-million row) deletes? That
    suggests that your delete / housekeeping / archive process is perhaps
    not very well designed. (Of course, it might not be avoidable in your
    case, but to me this is a bad sign.)
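
    A rough sketch of what such a programmed housekeeping process might
    look like in SQL, assuming the archiving trigger is removed (the
    procedure does the copying itself) and assuming MyTable and MyTableA
    are both journaled so each chunk's insert and delete commit or roll
    back together. Table names and the date column TxDate are
    placeholders; it purges one month per iteration and commits after each
    chunk, so it can be interrupted and simply restarted later:

    CREATE PROCEDURE Purge_MyTable (IN p_cutoff DATE)
    LANGUAGE SQL
    BEGIN
      DECLARE v_from DATE;
      -- Start from the oldest row still in the table
      SELECT MIN(TxDate) INTO v_from FROM MyTable;
      WHILE v_from IS NOT NULL AND v_from < p_cutoff DO
        -- Archive one month's worth of rows, then delete the same range
        INSERT INTO MyTableA
          SELECT * FROM MyTable
          WHERE TxDate >= v_from AND TxDate < v_from + 1 MONTH;
        DELETE FROM MyTable
          WHERE TxDate >= v_from AND TxDate < v_from + 1 MONTH;
        -- Commit each chunk; an interruption rolls back only the
        -- current month, which is redone on the next run
        COMMIT;
        SET v_from = v_from + 1 MONTH;
      END WHILE;
    END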

  6. Re: Row and Statement based Trigger Question

    On May 10, 12:13 am, "walker.l2" wrote:
    > You could do this by making the delete a programmed housekeeping /
    > archive process, rather than relying on a trigger.
    > Are you likely to be doing bulk (multi-million row) deletes? That
    > suggests that your delete / housekeeping / archive process is perhaps
    > not very well designed.


    Hi, and thanks for the comments.

    The deletes are only going to be purge/archive operations, rather than
    business transactions. The first one is going to delete millions of
    rows, as it will be several years' worth. Subsequent ones will be
    annual, so they will still be pretty big, but not quite so bad.

  7. Re: Row and Statement based Trigger Question

    JohnO wrote:
    > The deletes are only going to be purge/archive operations, rather than
    > business transactions. The first one is going to delete millions of
    > rows, as it will be several years' worth. Subsequent ones will be
    > annual, so they will still be pretty big, but not quite so bad.

    If that's the case, why don't you use a batch process? Triggers are meant
    to be active database objects. I wouldn't use them for maintenance tasks.

    Cheers
    Serge

    --
    Serge Rielau
    DB2 Solutions Development
    IBM Toronto Lab

  8. Re: Row and Statement based Trigger Question

    On May 11, 2:36 pm, Serge Rielau wrote:
    > If that's the case, why don't you use a batch process? Triggers are meant
    > to be active database objects. I wouldn't use them for maintenance tasks.

    Hi Serge,

    Speed, mostly. This is a JDEdwards OneWorld site, so any batch work
    will be written in the OneWorld toolset, and will likely end up being
    millions of individual inserts and deletes. The process would take
    many days for some of the files.

    It's so much faster to do a bulk delete, and I was hoping the trigger
    would optimise out well enough.

    The statement based trigger seems to be the most efficient, but with
    the problems I pointed out at the start.

    Cheers,
    JohnO


  9. Re: Row and Statement based Trigger Question

    > Speed, mostly. This is a JDEdwards OneWorld site, so any batch work
    > will be written in the OneWorld toolset, and will likely end up being
    > millions of individual inserts and deletes. The process would take
    > many days for some of the files.
    >

    When all you have is a hammer, every problem looks like a nail. :-)

    This is an ideal job for a couple of simple SQL statements (that can
    be run from a simple CL program if required).
    Something like:
    CREATE TABLE archive as SELECT * FROM mainfile WHERE date < sometime
    WITH DATA
    and
    DELETE FROM mainfile WHERE date < sometime
    should do the trick (N.B. I haven't checked the syntax of these, but
    you get the idea), with an index over the 'date' field for performance
    reasons.
    In future a simple CPYF *ADD command with a date selection could move
    records from mainfile to archive, and the SQL delete could perform the
    tidy-up.
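
    Tidied up, those two statements might look like this on DB2 for i
    (mainfile, archive, the TxDate column and the cutoff dates are all
    illustrative names; CREATE TABLE ... AS takes the fullselect in
    parentheses and needs WITH DATA to actually copy the rows):

    -- One-off: create the archive and copy the old rows in one statement
    CREATE TABLE archive AS
      (SELECT * FROM mainfile WHERE TxDate < '2005-01-01')
      WITH DATA;

    -- Then remove the archived rows from the main file
    DELETE FROM mainfile WHERE TxDate < '2005-01-01';

    -- Subsequent years: append to the existing archive instead
    INSERT INTO archive
      SELECT * FROM mainfile WHERE TxDate < '2006-01-01';
    DELETE FROM mainfile WHERE TxDate < '2006-01-01';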

  10. Re: Row and Statement based Trigger Question

    On May 12, 9:20 pm, "walker.l2" wrote:
    > In future a simple CPYF *ADD command with a date selection could move
    > records from mainfile to archive, and the SQL delete could perform the
    > tidy-up.


    Interesting about the CPYF *ADD - will that perform faster than an
    INSERT INTO ... SELECT ... FROM type SQL operation?

  11. Re: Row and Statement based Trigger Question

    I've no idea which method would be faster, but CPYF might be less
    'scary' if your shop is not used to SQL.
