Multiple row destruction in tables with an integer index - SNMP

Thread: Multiple row destruction in tables with an integer index

  1. Multiple row destruction in tables with an integer index

    If I have a table with a simple integer index from 1 to n, and someone
    requests to delete multiple rows - how can I make sure the correct
    rows are deleted?

    For example - if I get a request to delete rows 2 and 3, and I delete
    row 2, when the request to delete row 3 comes in, row 3 has actually
    now become row 2, so they would in fact be deleting the original row
    4.
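
    To make the renumbering problem concrete, here is a tiny illustration
    in Python, using a plain list as a stand-in for the table (nothing
    SNMP-specific about it):

        rows = ["row1", "row2", "row3", "row4"]

        # Manager asks to destroy rows 2 and 3, one request at a time.
        del rows[2 - 1]   # removes "row2"; "row3" and "row4" shift down
        del rows[3 - 1]   # meant to remove "row3", actually removes "row4"

        print(rows)       # ['row1', 'row3'] - the wrong row is gone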

    I am thinking of storing the size of the table in a scalar variable
    and not allowing row deletion in this table at all. The user can delete
    entries by setting the table size to a number below the current number
    of rows in the table. Then I can ensure that row deletion happens
    correctly - I'll just delete the entries from the end of the table to
    shrink the size down. Any opinions on this approach?

    Thanks,
    Keith

  2. Re: Multiple row destruction in tables with an integer index

    Keith,

    First, an index is an index. Deleting a row in a table
    must not affect the index values of the other rows in the table.
    That's how relational databases work, so why should it be
    different with SNMP?

    Thus, I would recommend decoupling the array index of your
    physical table from the indexes of your SNMP table. Most
    SNMP agent development kits provide such mapping facilities.
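
    As a rough sketch of what I mean (plain Python, no particular SNMP
    toolkit - the class and names are just for illustration): keep the
    rows in a map keyed by the SNMP index rather than in an array, so
    destroying a row never renumbers its neighbours.

        class Table:
            def __init__(self):
                self.rows = {}          # SNMP index -> row data
                self.next_index = 1     # never reused while the agent runs

            def create_row(self, data):
                idx = self.next_index
                self.next_index += 1
                self.rows[idx] = data
                return idx

            def destroy_row(self, idx):
                del self.rows[idx]      # other rows keep their indexes

        t = Table()
        t.create_row("a"); t.create_row("b"); t.create_row("c")
        t.destroy_row(2)
        assert 3 in t.rows              # row "c" is still index 3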

    Independently of the above, you can synchronize access to
    a table with the snmpSetSerialNo (1.3.6.1.6.3.1.1.6.1.0) instance
    (coarse-grained) or by adding a scalar with the syntax TestAndIncr
    to the MIB module where your table is located (fine-grained).

    For more information on the TestAndIncr textual convention,
    read RFC 2579.
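
    Roughly, the TestAndIncr semantics look like this (an agent-side
    sketch in plain Python, just to show the idea - RFC 2579 has the
    normative text):

        class TestAndIncr:
            MAX = 2**31 - 1

            def __init__(self):
                self.value = 0

            def set(self, supplied):
                # A SET succeeds only if the supplied value matches the
                # current value; a successful SET advances the value.
                if supplied != self.value:
                    raise ValueError("inconsistentValue")  # lost the race
                self.value = 0 if self.value == self.MAX else self.value + 1

    A manager GETs the current value and puts it in the same SET PDU as
    its row changes; since a SET is atomic, if another manager modified
    the table in the meantime the whole PDU fails with inconsistentValue
    and can simply be retried.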

    Regards,
    Frank Fock

  3. Re: Multiple row destruction in tables with an integer index

    Thanks for the response - I see where I was confused. I think I can
    use the example in "Understanding SNMP" of having a "getNextIndex"
    scalar that a manager can query to get a unique index. These items I
    am managing don't really have a good unique index - and I really don't
    want to use a string as an index - the OIDs produced by strings seem
    pretty cumbersome to me.
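
    For the "getNextIndex" scalar itself, I'm picturing something like
    this on the agent side (plain Python sketch, names made up):

        class NextFreeIndex:
            def __init__(self, rows):
                self.rows = rows        # dict: index -> row data

            def get(self):
                # Hand out the lowest index not currently in use; the
                # manager then uses it in its row-creation SET.
                idx = 1
                while idx in self.rows:
                    idx += 1
                return idx

    With more than one manager, two of them could of course read the same
    value, which is where something like TestAndIncr comes back in.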

    I still like the idea of pre-allocating a set of rows and giving them
    the indices 1-n. The manager cannot delete these rows - only
    deactivate them. If they find they need more rows, they can go to a
    "limits" group in the MIB and increase the number of pre-allocated
    rows for that table. Likewise, they can shrink the number of rows this
    way. That way, the index for an item would never change. Is there
    anything about this approach that would break in a way I'm not seeing?
    Or would it be confusing for SNMP managers?
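
    In code terms I'm imagining something like this (illustrative Python
    only - the "limit" scalar and the flag names are made up):

        class PreallocatedTable:
            def __init__(self, limit):
                self.rows = {i: {"active": False}
                             for i in range(1, limit + 1)}

            def set_limit(self, new_limit):
                old = len(self.rows)
                for i in range(old + 1, new_limit + 1):  # grow with inactive rows
                    self.rows[i] = {"active": False}
                for i in range(new_limit + 1, old + 1):  # shrink from the end
                    del self.rows[i]

            def set_active(self, idx, flag):
                self.rows[idx]["active"] = flag          # the index never changes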

    Thanks again,
    Keith

  4. Re: Multiple row destruction in tables with an integer index

    In article, kjohnston@rocksteady.com says...

    > Thanks for the response - I see where I was confused. I think I can
    > use the example in "Understanding SNMP" of having a "getNextIndex"
    > scalar that a manager can query to get a unique index.


    That's a decent method, also advocated by Stephen Morris in his recent
    book (mainly about MPLS). For a single-integer index, though, it's
    pretty easy to find an available index with a simple binary search,
    without resorting to a complete column walk - no more than about 32
    get-next requests (and likely fewer) for a 32-bit integer index, if
    you can assume there's room at the end.
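
    Something along these lines, say in Python, where get_next_index(i)
    stands in for a get-next on the index column: it returns the smallest
    existing index greater than i, or None when the walk falls off the
    end of the column.

        def find_free_index(get_next_index, max_index=2**31 - 1):
            # Empty table: any index will do.
            if get_next_index(0) is None:
                return 1
            # Invariant: the highest index in use lies in (low, high].
            low, high = 0, max_index
            while high - low > 1:
                mid = (low + high) // 2
                if get_next_index(mid) is not None:  # some row exists above mid
                    low = mid
                else:                                # nothing above mid
                    high = mid
            # high is now the highest index in use; take the next one,
            # assuming there is room at the end.
            return high + 1 if high < max_index else None

    Each loop iteration costs one get-next, so at most around 31 requests
    (plus the initial probe) for a 2^31 - 1 index range.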

    > I still like the idea of pre-allocating a set of rows and giving them
    > the indices 1-n. The manager cannot delete these rows - only
    > inactivate / deactivate them. If they find they need more rows, they
    > can go to a "limits" group in the MIB and increase the number of
    > pre-allocated rows for that table. Likewise, they can shrink the
    > number of rows this way. This way - the index for an item would never
    > change. Is there anything about this approach that would break that
    > I'm not seeing? Or would it be confusing for SNMP managers?


    That's not very conducive to working with multiple managers. If one
    manager shrinks the table, how do you determine which rows to delete
    such that you don't wind up deleting rows that another manager is
    still using? Better for a manager to only delete rows that it created.

    Also, it would probably be easy to cause resource exhaustion in the
    agent by simply setting all such tables to have the maximum number of
    rows allowed by the index - one request per table, where normally it
    would take one request per row. That's possibly an easy DoS attack
    vector if the agent is compromised (though if it is, there are
    probably other things to worry about).

    --
    Michael Kirkham
    Muonics
    http://www.muonics.com/
