Regarding QZDASOINIT jobs - Websphere



Thread: Regarding QZDASOINIT jobs

  1. Regarding QZDASOINIT jobs


    Hi all,

    First off, I appreciate this forum, which provides some great, helpful
    information about the WebSphere server on the AS/400.

    On to my question:

    We are developing J2EE applications using WAS Express 5.0.2 and use
    JDBC connection pooling for database (DB2 on iSeries) access.
    As I understand it, QZDASOINIT jobs serve the pooled database
    connections on the iSeries.

    In Java, I use PreparedStatements for executing SQL queries, and I
    always close the ResultSet, the PreparedStatement, and the Connection
    in the finally clause.
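
    The close-in-finally pattern described above can be sketched as
    follows. The Connection/PreparedStatement/ResultSet objects here are
    stand-ins (plain AutoCloseable classes that record their close order),
    so the sketch runs without a live DB2 connection; with real java.sql
    objects the shape of the try/finally is the same.

```java
import java.util.ArrayList;
import java.util.List;

public class CloseInFinally {
    static final List<String> closed = new ArrayList<>();

    // Stand-in for a JDBC resource; close() just records that it ran.
    static class Resource implements AutoCloseable {
        final String name;
        Resource(String name) { this.name = name; }
        @Override public void close() { closed.add(name); }
    }

    static void query() {
        Resource conn = null, ps = null, rs = null;
        try {
            conn = new Resource("Connection");        // stand-in for dataSource.getConnection()
            ps   = new Resource("PreparedStatement"); // stand-in for conn.prepareStatement(sql)
            rs   = new Resource("ResultSet");         // stand-in for ps.executeQuery()
            // ... walk the ResultSet here ...
        } finally {
            // Close in reverse order of creation, guarding each close so
            // a null (or a failure in one) cannot skip the others.
            if (rs != null)   rs.close();
            if (ps != null)   ps.close();
            if (conn != null) conn.close(); // with pooling, this returns the connection rather than destroying it
        }
    }

    public static void main(String[] args) {
        query();
        System.out.println(closed); // [ResultSet, PreparedStatement, Connection]
    }
}
```

    Note that with a connection pool, Connection.close() hands the
    connection back to the pool, which is why the backing QZDASOINIT job
    stays alive afterwards.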

    But I noticed that the QZDASOINIT jobs are always in TIMW status,
    which seems wrong, because I am properly closing the Connection.

    I read about the life cycle of the QZDASOINIT jobs here

    http://www.centerfieldtechnology.com...Newsletter.pdf

    It states that the TIMW status means the client has terminated the
    connection on its end without sending a close request to the iSeries.
    Because of this, the number of QZDASOINIT jobs keeps growing.

    I have no clue how to solve this, or maybe it is not a problem at all?
    I would really appreciate your help on this matter.
    Thanks a bunch.


  2. Re: Regarding QZDASOINIT jobs

    I don't think there is a problem. DEQW is the wait state when you have
    forgotten to close the connection and the OS thinks you might want to
    do some more SQL. TIMW is the normal state for connections that are no
    longer being used.

    If you are using connection pooling, I believe this deliberately
    prevents the QZDASOINIT jobs from ending (to save the overhead of
    recreating them later) but leaves them in TIMW state until the
    connection pool is destroyed. Each time you request a connection, you
    should get an existing QZDASOINIT job (one of those in TIMW status); or
    if there are none available, the OS will spawn a new one for you.

    You will only be using the QZDASOINIT jobs if you are using the Toolbox
    JDBC driver. If you are using the (better-performing) native driver,
    your SQL will be handled by QSQSRVR jobs in QSYSWRK instead.
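
    For reference, the two drivers are selected by class name and JDBC
    URL. A minimal sketch (the host name MYISERIES is a placeholder; the
    class and URL strings below are the commonly documented ones for the
    Toolbox and native drivers, so verify them against your installed
    versions):

```java
public class DriverChoice {
    // Toolbox driver: connects over TCP/IP to the database host server,
    // so the work shows up in QZDASOINIT prestart jobs.
    static final String TOOLBOX_DRIVER = "com.ibm.as400.access.AS400JDBCDriver";
    static final String TOOLBOX_URL    = "jdbc:as400://MYISERIES"; // MYISERIES = placeholder system name

    // Native driver: runs on the iSeries itself, so the work shows up
    // in QSQSRVR jobs in the QSYSWRK subsystem instead.
    static final String NATIVE_DRIVER  = "com.ibm.db2.jdbc.app.DB2Driver";
    static final String NATIVE_URL     = "jdbc:db2:*LOCAL";

    public static void main(String[] args) {
        // With the driver jar on the classpath, you would then do e.g.:
        //   Class.forName(TOOLBOX_DRIVER);
        //   Connection conn = DriverManager.getConnection(TOOLBOX_URL, user, pass);
        System.out.println(TOOLBOX_URL);
        System.out.println(NATIVE_URL);
    }
}
```

    In WAS, the same choice is made by which JDBC provider you configure
    for the data source, so switching drivers does not require code
    changes in the application.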


  3. Re: Regarding QZDASOINIT jobs


    Hi Walker,

    Thanks a lot for your reply; good to hear that there is no problem.
    I thought it might be a problem because the link I gave in my first
    post (on page 3 of that PDF) clearly states that TIMW means the
    connection was not closed properly (maybe, because it is connection
    pooling, it was never closed!).
    It says pretty much the same as you did, the only difference being the
    PSRW status (prestart wait) instead of the TIMW status: if a
    connection is closed properly, the status changes from DEQW to PSRW;
    if not, from DEQW to TIMW.

    I really do not know whether they are wrong or not.
    Your comments on that, please?

    I really appreciate your help
    Thanks again


  4. Re: Regarding QZDASOINIT jobs

    I'm pretty sure that connection pooling prevents the connection from
    being fully closed. If you look at the jobs, you can sometimes see
    locks still being held even though you aren't doing anything any more.
    These appear to be 'soft' locks holding the files in some kind of
    semi-open state, presumably for performance reasons on the next SQL
    call. If any other job tries to access the files, the SQL job drops
    its 'soft' lock. An exception is the ALCOBJ command: here you must
    specify RQSRLS(*YES) for the SQL job to drop the lock; the default,
    *NO, will prevent the ALCOBJ command from succeeding.

    I think if the number of prestart jobs is greater than the size of
    your connection pool, then you might see the PSRW status for the extra
    jobs (and TIMW for the ones used by the connection pool); but if your
    pool is bigger than the number of prestarts, then you will only see
    TIMW. On the other hand, this could be version related; we run V5R2.

    Certainly QZDASOINIT jobs on TIMW have never caused any problems on
    our system.

    If you can, though, I would use the native driver, because the
    performance is so much better.

    You might also be interested in the following threads:
    http://groups.google.com/group/comp....ac69b6fd94767e

    http://groups.google.com/group/comp....a8bf2b9186061c

    Walker.

