DMA descriptor alignment - Kernel


Thread: DMA descriptor alignment

  1. DMA descriptor alignment

    For those variants of BCM43xx cards that use 64-bit DMA, there is a requirement that all descriptor
    rings must be aligned on an 8K boundary and must fit within an 8K page. On the x86_64 architecture
    where the page size is 4K, I was getting addresses like 0x67AF000 when using dma_alloc_coherent
    calls. From the description of the dma_pool_create and dma_pool_alloc routines, I thought they
    would fix my problems; however, even with a dma_pool_create(name, dev, 8192, 8192, 8192) call, I'm
    still getting 4K rather than 8K alignment, which results in DMA errors.
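
    In code, the attempt looks roughly like the sketch below (only a sketch; names such as
    alloc_ring and "bcm43xx_ring" are illustrative):

    #include <linux/dmapool.h>
    #include <linux/dma-mapping.h>

    static struct dma_pool *ring_pool;

    /* Sketch of the pool-based attempt: size 8192, alignment 8192, boundary 8192. */
    static void *alloc_ring(struct device *dev, dma_addr_t *ring_dma)
    {
        ring_pool = dma_pool_create("bcm43xx_ring", dev, 8192, 8192, 8192);
        if (!ring_pool)
            return NULL;

        /* An 8K-aligned bus address is expected here, but only 4K alignment comes back. */
        return dma_pool_alloc(ring_pool, GFP_KERNEL, ring_dma);
    }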

    Is there a bug in these routines, am I using them incorrectly, or do I have a misunderstanding of
    what it takes to get this kind of alignment?

    Thanks,

    Larry

  2. Re: DMA descriptor alignment

    Larry Finger writes:

    > For those variants of BCM43xx cards that use 64-bit DMA, there is a requirement that all descriptor
    > rings must be aligned on an 8K boundary and must fit within an 8K page. On the x86_64 architecture
    > where the page size is 4K, I was getting addresses like 0x67AF000 when using dma_alloc_coherent
    > calls.


    Normally x86-64 dma_alloc_coherent calls the buddy allocator, which always gives
    you naturally aligned blocks. But there is a fallback path into swiotlb, and
    swiotlb uses best-fit allocation, which only guarantees single-page alignment.
    That is probably what you're seeing.

    My dma zone rework would remove that fallback case and should make it work.

    > From the description of the dma_pool_create and dma_pool_alloc routines, I thought they
    > would fix my problems; however, even with a dma_pool_create(name, dev, 8192, 8192, 8192) call, I'm
    > still getting 4K rather than 8K alignment, which results in DMA errors.


    They cannot give you more alignment than the underlying allocator.

    > Is there a bug in these routines, am I using them incorrectly, or do I have a misunderstanding of
    > what it takes to get this kind of alignment?


    My suggestion as a short-term workaround would be to first allocate
    8K using dma_alloc_coherent and, if that has the wrong alignment, allocate 16K
    and align it yourself. If the driver is loaded later that might be unreliable,
    but near boot, or with enough free memory, an order-2 allocation should usually work.

    With the dma zone rework that could be removed later.
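
    In rough code, that workaround might look like the sketch below (only a sketch; the
    caller keeps the raw pointer, handle, and size so the region can later be freed with
    dma_free_coherent):

    #include <linux/dma-mapping.h>

    /* Try an 8K coherent allocation first; if it is not 8K aligned, over-allocate
     * 16K (order 2) and round the bus address up to the next 8K boundary. */
    static void *alloc_ring_8k_aligned(struct device *dev, dma_addr_t *dma,
                                       void **raw_cpu, dma_addr_t *raw_dma,
                                       size_t *raw_size)
    {
        void *cpu = dma_alloc_coherent(dev, 8192, dma, GFP_KERNEL);

        if (cpu && !(*dma & 0x1fff)) {
            *raw_cpu = cpu;
            *raw_dma = *dma;
            *raw_size = 8192;
            return cpu;
        }
        if (cpu)
            dma_free_coherent(dev, 8192, cpu, *dma);

        cpu = dma_alloc_coherent(dev, 16384, dma, GFP_KERNEL);
        if (!cpu)
            return NULL;
        *raw_cpu = cpu;
        *raw_dma = *dma;
        *raw_size = 16384;

        cpu += (-*dma) & 0x1fff;        /* gcc void * arithmetic, as used in the kernel */
        *dma = (*dma + 0x1fff) & ~(dma_addr_t)0x1fff;
        return cpu;
    }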

    -Andi

  3. Re: DMA descriptor alignment

    Andi Kleen wrote:
    > Normally x86-64 dma_alloc_coherent calls the buddy allocator, which always gives
    > you naturally aligned blocks. But there is a fallback path into swiotlb, and
    > swiotlb uses best-fit allocation, which only guarantees single-page alignment.
    > That is probably what you're seeing.
    >
    > My dma zone rework would remove that fallback case and should make it work.
    >
    > My suggestion as a short-term workaround would be to first allocate
    > 8K using dma_alloc_coherent and, if that has the wrong alignment, allocate 16K
    > and align it yourself.
    >
    > With the dma zone rework that could be removed later.

    I will be most interested in the dma zone rework; however, I now have the descriptors properly
    allocated merely by asking for 8K, just as I had thought it should work. I'm not quite sure what was
    wrong before, but I added a test to make certain that the alignment is OK just in case the problem
    comes back.
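
    The test is roughly along these lines (just a sketch; ring_dma stands for the
    dma_addr_t returned by dma_alloc_coherent for the ring):

    /* Sanity check: an 8K-aligned bus address has its low 13 bits clear. */
    if (ring_dma & 0x1fff) {
        printk(KERN_ERR "bcm43xx: descriptor ring not 8K aligned (0x%llx)\n",
               (unsigned long long)ring_dma);
        return -ENOMEM;
    }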

    Thanks for your response,

    Larry

  4. Re: DMA descriptor alignment

    Hi,

    I am working on adding Advanced DMA (ADMA) support for the SDHC controller on the
    MXC91341 (Freescale board). ADMA requires 4K-aligned DMA buffers; for example, the
    DMA address should be something like 0x8e210000. But I am not getting a 4K-aligned
    address; instead I am getting addresses like 0x8e1168dc.

    I am using the DMA API below:

    dma_map_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
               (data->flags & MMC_DATA_READ) ? DMA_FROM_DEVICE : DMA_TO_DEVICE);

    Let me know if you require any additional information. Please help me with this issue.

    Regards,
    Hema

  5. Re: DMA descriptor alignment

    Hi,

    I am working on adding Advanced DMA (ADMA) support for the MXC91341 (Freescale)
    board. ADMA requires a 4K-aligned DMA address; for example, the DMA address should
    be something like 0x8e210000. Instead I am getting a DMA address such as 0x8e1168dc.

    I am using dma_alloc_coherent to allocate memory for the ADMA descriptor. Inside the
    ADMA descriptor I store the DMA address taken from the scatter-gather list. I am
    using the DMA API below:

    dma_map_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
               (data->flags & MMC_DATA_READ) ? DMA_FROM_DEVICE : DMA_TO_DEVICE);

    tsg = data->sg;

    where tsg and sg are pointers to the scatter-gather list. tsg->dma_address contains
    the DMA address, and this address is not 4K aligned. If anyone knows how to always
    get a 4K-aligned DMA address, please help me.
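
    For reference, a rough sketch of the kind of check that shows the problem (only a
    sketch; the variable mapped and the warning text are illustrative):

    /* Map the scatterlist, then warn about any segment whose bus address is not
     * 4K aligned.  Assumes an array-based scatterlist, as in the code above. */
    int i, mapped;

    mapped = dma_map_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
                        (data->flags & MMC_DATA_READ) ? DMA_FROM_DEVICE :
                                                        DMA_TO_DEVICE);

    for (i = 0; i < mapped; i++) {
        dma_addr_t addr = sg_dma_address(&data->sg[i]);

        if (addr & 0xfff)
            printk(KERN_WARNING "sg[%d] not 4K aligned: 0x%llx\n",
                   i, (unsigned long long)addr);
    }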

    Let me know if any additional information is needed.

    Regards,
    Hema Prathyusha.K
