
Thread: [PATCH 2.6.26 0/3] RDMA/cxgb3: fixes and enhancements for 2.6.26

  1. [PATCH 2.6.26 0/3] RDMA/cxgb3: fixes and enhancements for 2.6.26

    The following series fixes some bugs and enables peer-2-peer
    applications, including OpenMPI and HPMPI.

    I hope this can make 2.6.26.

    NOTE: The changes in patch 3 require a new firmware version. I added
    the version change to drivers/net/cxgb3/version.h in this patch so that
    the changes that require the new firmware as well as the version bump
    are all in one git commit. This keeps things like 'git bisect' from
    leaving the driver broken.

    --
    Steve.
    --
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  2. [PATCH 2.6.26 2/3] RDMA/cxgb3: Correctly set the max_mr_size device attribute.


    cxgb3 only supports 4GB memory regions. The Lustre RDMA code uses this
    attribute and currently has to code around our bad setting.

    Signed-off-by: Steve Wise
    ---

    drivers/infiniband/hw/cxgb3/cxio_hal.h      |    1 +
    drivers/infiniband/hw/cxgb3/iwch.c          |    1 +
    drivers/infiniband/hw/cxgb3/iwch.h          |    1 +
    drivers/infiniband/hw/cxgb3/iwch_provider.c |    2 +-
    4 files changed, 4 insertions(+), 1 deletions(-)

    diff --git a/drivers/infiniband/hw/cxgb3/cxio_hal.h b/drivers/infiniband/hw/cxgb3/cxio_hal.h
    index 99543d6..2bcff7f 100644
    --- a/drivers/infiniband/hw/cxgb3/cxio_hal.h
    +++ b/drivers/infiniband/hw/cxgb3/cxio_hal.h
    @@ -53,6 +53,7 @@
     #define T3_MAX_PBL_SIZE 256
     #define T3_MAX_RQ_SIZE 1024
     #define T3_MAX_NUM_STAG (1<<15)
    +#define T3_MAX_MR_SIZE 0x100000000ULL

     #define T3_STAG_UNSET 0xffffffff

    diff --git a/drivers/infiniband/hw/cxgb3/iwch.c b/drivers/infiniband/hw/cxgb3/iwch.c
    index 0315c9d..98a768f 100644
    --- a/drivers/infiniband/hw/cxgb3/iwch.c
    +++ b/drivers/infiniband/hw/cxgb3/iwch.c
    @@ -83,6 +83,7 @@ static void rnic_init(struct iwch_dev *rnicp)
     	rnicp->attr.max_phys_buf_entries = T3_MAX_PBL_SIZE;
     	rnicp->attr.max_pds = T3_MAX_NUM_PD - 1;
     	rnicp->attr.mem_pgsizes_bitmask = 0x7FFF;	/* 4KB-128MB */
    +	rnicp->attr.max_mr_size = T3_MAX_MR_SIZE;
     	rnicp->attr.can_resize_wq = 0;
     	rnicp->attr.max_rdma_reads_per_qp = 8;
     	rnicp->attr.max_rdma_read_resources =
    diff --git a/drivers/infiniband/hw/cxgb3/iwch.h b/drivers/infiniband/hw/cxgb3/iwch.h
    index caf4e60..238c103 100644
    --- a/drivers/infiniband/hw/cxgb3/iwch.h
    +++ b/drivers/infiniband/hw/cxgb3/iwch.h
    @@ -66,6 +66,7 @@ struct iwch_rnic_attributes {
     	 * size (4k)^i.  Phys block list mode unsupported.
     	 */
     	u32 mem_pgsizes_bitmask;
    +	u64 max_mr_size;
     	u8 can_resize_wq;

    /*
    diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c
    index b2ea921..f7df213 100644
    --- a/drivers/infiniband/hw/cxgb3/iwch_provider.c
    +++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c
    @@ -998,7 +998,7 @@ static int iwch_query_device(struct ib_device *ibdev,
     	props->device_cap_flags = dev->device_cap_flags;
     	props->vendor_id = (u32)dev->rdev.rnic_info.pdev->vendor;
     	props->vendor_part_id = (u32)dev->rdev.rnic_info.pdev->device;
    -	props->max_mr_size = ~0ull;
    +	props->max_mr_size = dev->attr.max_mr_size;
     	props->max_qp = dev->attr.max_qps;
     	props->max_qp_wr = dev->attr.max_wrs;
     	props->max_sge = dev->attr.max_sge_per_wr;
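    For context, the 4GB ceiling this patch encodes can be exercised in
    plain C. The constant below matches T3_MAX_MR_SIZE from the hunk above;
    mr_len_ok() is a hypothetical helper standing in for the bounds check a
    consumer such as Lustre previously had to open-code around the bogus
    ~0ull setting:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* From the patch: cxgb3 caps a memory region at 4GB. */
    #define T3_MAX_MR_SIZE 0x100000000ULL

    /* Hypothetical helper: reject registration lengths the hardware
     * cannot cover, using the correctly reported max_mr_size instead
     * of the old "unlimited" value. */
    static int mr_len_ok(uint64_t len)
    {
        return len <= T3_MAX_MR_SIZE;
    }

    int main(void)
    {
        printf("4GB region ok:   %d\n", mr_len_ok(T3_MAX_MR_SIZE));     /* prints 1 */
        printf("4GB+1 region ok: %d\n", mr_len_ok(T3_MAX_MR_SIZE + 1)); /* prints 0 */
        return 0;
    }
    ```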
    --

  3. Re: [ofa-general] [PATCH 2.6.26 3/3] RDMA/cxgb3: Support peer-2-peer connection setup.



    Roland Dreier wrote:
    > What are the interoperability implications of this?
    >
    > Looking closer I see that iw_nes has the send_first module parameter.
    > How does this interact with that?
    >


    It doesn't...yet. But we wanted to enable these applications for
    Chelsio now and get the low-level fw and driver changes done first and
    tested.

    > I guess it's fine to apply this, but do we have a plan for how we want
    > to handle this issue in the long-term?
    >


    Yes! If you'll recall, we had a thread on the ofa general list
    discussing how to enhance the MPA negotiation so peers can indicate
    whether they want/need the RTR and what type of RTR (0B read, 0B write,
    or 0B send) should be sent. This will be done by standardizing a few
    bits of the private data in order to negotiate all this. The rdma-cma
    API will be extended so applications will have to request this
    peer-2-peer model since it adds overhead to the connection setup.
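    A minimal sketch of that negotiation idea, in C. Every bit position
    here is hypothetical, since the private-data layout was still to be
    standardized at the time of this thread; the point is only that a few
    flag bits can carry "I want an RTR" plus the acceptable RTR types:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical MPA private-data flag bits (not a standard layout). */
    #define MPA_RTR_WANTED   (1u << 0)  /* peer wants/needs an RTR      */
    #define MPA_RTR_0B_READ  (1u << 1)  /* 0B read RTR acceptable       */
    #define MPA_RTR_0B_WRITE (1u << 2)  /* 0B write RTR acceptable      */
    #define MPA_RTR_0B_SEND  (1u << 3)  /* 0B send RTR acceptable       */

    /* Build the flags byte a peer would place in its MPA private data. */
    static uint8_t mpa_rtr_flags(int wanted, uint8_t types)
    {
        return wanted ? (uint8_t)(MPA_RTR_WANTED | types) : 0;
    }

    /* Pick an RTR type both sides advertised (0 if none in common).
     * The preference order is arbitrary for this sketch. */
    static uint8_t mpa_rtr_resolve(uint8_t ours, uint8_t theirs)
    {
        uint8_t common = (uint8_t)(ours & theirs & ~MPA_RTR_WANTED);

        if (common & MPA_RTR_0B_WRITE)
            return MPA_RTR_0B_WRITE;
        if (common & MPA_RTR_0B_READ)
            return MPA_RTR_0B_READ;
        if (common & MPA_RTR_0B_SEND)
            return MPA_RTR_0B_SEND;
        return 0;
    }

    int main(void)
    {
        uint8_t ours   = mpa_rtr_flags(1, MPA_RTR_0B_READ | MPA_RTR_0B_WRITE);
        uint8_t theirs = mpa_rtr_flags(1, MPA_RTR_0B_READ);

        /* Only 0B read is common to both sides. */
        printf("negotiated RTR bits: 0x%x\n", mpa_rtr_resolve(ours, theirs));
        return 0;
    }
    ```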

    I plan to do this work for 2.6.27/ofed-1.4. I think it was listed in
    Felix's talk at Sonoma. This work (design, API, and code changes
    affecting core and placing requirements on iwarp providers) will be
    posted as RFC changes to get everyone's feedback as soon as I get
    something going.

    Does that sound ok?


    Steve.
    --

  4. RE: [ofa-general] [PATCH 2.6.26 3/3] RDMA/cxgb3: Support peer-2-peer connection setup.

    I expect it to be tested at the Sept interop event.
    If it works, then I will send a proposal to the IETF for the MPA enhancement.
    Thanks,

    Arkady Kanevsky email: arkady@netapp.com
    Network Appliance Inc. phone: 781-768-5395
    1601 Trapelo Rd. - Suite 16. Fax: 781-895-1195
    Waltham, MA 02451 central phone: 781-768-5300


    > -----Original Message-----
    > From: Steve Wise [mailto:swise@opengridcomputing.com]
    > Sent: Sunday, April 27, 2008 12:45 PM
    > To: Roland Dreier
    > Cc: netdev@vger.kernel.org; general@lists.openfabrics.org;
    > linux-kernel@vger.kernel.org; divy@chelsio.com
    > Subject: Re: [ofa-general] [PATCH 2.6.26 3/3] RDMA/cxgb3:
    > Support peer-2-peer connection setup.
    >
    >
    >
    > Roland Dreier wrote:
    > > What are the interoperability implications of this?
    > >
    > > Looking closer I see that iw_nes has the send_first module
    > > parameter.
    > > How does this interact with that?
    > >

    >
    > It doesn't...yet. But we wanted to enable these applications
    > for chelsio now and get the low level fw and driver changes
    > done first and tested.
    >
    > > I guess it's fine to apply this, but do we have a plan for
    > > how we want
    > > to handle this issue in the long-term?
    > >

    >
    > Yes! If you'll recall, we had a thread on the ofa general
    > list discussing how to enhance the MPA negotiation so peers
    > can indicate whether they want/need the RTR and what type of
    > RTR (0B read, 0B write, or 0B send) should be sent. This
    > will be done by standardizing a few bits of the private data
    > in order to negotiate all this. The rdma-cma API will be
    > extended so applications will have to request this
    > peer-2-peer model since it adds overhead to the connection setup.
    >
    > I plan to do this work for 2.6.27/ofed-1.4. I think it was
    > listed in Felix's talk at Sonoma. This work (design, API,
    > and code changes affecting core and placing requirements on
    > iwarp providers) will be posted as RFC changes to get
    > everyone's feedback as soon as I get something going.
    >
    > Does that sound ok?
    >
    >
    > Steve.

    --

  5. Re: [ofa-general] [PATCH 2.6.26 1/3] RDMA/cxgb3: Correctly serialize peer abort path.

    Oh yeah, and I deleted an unused "out" label.
    --

  6. Re: [ofa-general] [PATCH 2.6.26 1/3] RDMA/cxgb3: Correctly serialize peer abort path.

    OK, applied, with a few fixups based on checkpatch output -- mostly
    __FUNCTION__ -> __func__ (__FUNCTION__ is a deprecated gcc-specific
    extension, __func__ is standard), and also a couple "abort=0" -> "abort
    = 0".
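
    For reference, a minimal illustration of the substitution checkpatch
    flagged: __func__ is standard C99, while __FUNCTION__ is the older
    GCC-specific spelling of the same thing. The who_am_i() function is
    just an example name:

    ```c
    #include <stdio.h>

    /* __func__ (standard since C99) expands to the name of the
     * enclosing function; __FUNCTION__ is the deprecated GCC alias. */
    static const char *who_am_i(void)
    {
        return __func__;
    }

    int main(void)
    {
        printf("in %s\n", who_am_i());  /* prints "in who_am_i" */
        return 0;
    }
    ```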

    - R.
    --

  7. Re: [PATCH 2.6.26 2/3] RDMA/cxgb3: Correctly set the max_mr_size device attribute.

    thanks, applied
    --
