Allocating a large chunk of memory in the kernel (2MB) - Linux



Thread: Allocating a large chunk of memory in the kernel (2MB)

  1. Allocating a large chunk of memory in the kernel (2MB)


    I need to allocate 2MB for a brain-dead driver. The existing driver likes to get its memory by using mem=, then ioremapping the top of physical memory to get raw access to it.

    Basically, the area is for DMA / shared memory for a PCI card.

    So, I'm trying to remove the mem= dependency and just allocate a large buffer to use with the driver. But kmalloc doesn't like to give out chunks that size.

    So, how do I malloc 2MB of contiguous space?

    -Dave

    --
    David Frascone

    Oxymoron: Safe Sex.

  2. Re: Allocating a large chunk of memory in the kernel (2MB)

    ?? man malloc ??


    On Mon, 03 Jan 2005 01:33:08 +0000, David Frascone wrote:

    >
    > I need to allocate 2MB for a brain-dead driver. The existing driver likes to get its memory by using mem=, then ioremapping the top of physical memory to get raw access to it.
    >
    > Basically, the area is for DMA / shared memory for a PCI card.
    >
    > So, I'm trying to remove the mem= dependency and just allocate a large buffer to use with the driver. But kmalloc doesn't like to give out chunks that size.
    >
    > So, how do I malloc 2MB of contiguous space?
    >
    > -Dave



  3. Re: Allocating a large chunk of memory in the kernel (2MB)


    I can only assume that this is a troll. Otherwise, it would take a pretty big idiot to ignore the word "kernel" in the subject and to ignore the sentence in the body where I said that kmalloc will not return me 2MB of memory.

    Unless I'm the moron and malloc was added to kernel space... let me know.

    -Dave

    On Sun, 02 Jan 2005 23:31:13 -0600
    root wrote:

    > ?? man malloc ??
    >
    >
    > On Mon, 03 Jan 2005 01:33:08 +0000, David Frascone wrote:
    >
    > >
    > > I need to allocate 2MB for a brain-dead driver. The existing driver likes to get its memory by using mem=, then ioremapping the top of physical memory to get raw access to it.
    > >
    > > Basically, the area is for DMA / shared memory for a PCI card.
    > >
    > > So, I'm trying to remove the mem= dependency and just allocate a large buffer to use with the driver. But kmalloc doesn't like to give out chunks that size.
    > >
    > > So, how do I malloc 2MB of contiguous space?
    > >
    > > -Dave




    --
    David Frascone

    Psychiatrists stay on your mind.

  4. Re: Allocating a large chunk of memory in the kernel (2MB)

    David Frascone wrote:

    >I need to allocate 2MB for a brain-dead driver. The existing driver likes to get its memory by using mem=, then ioremapping the top of physical memory to get raw access to it.
    >
    >Basically, the area is for DMA / shared memory for a PCI card.
    >
    >So, I'm trying to remove the mem= dependency and just allocate a large buffer to use with the driver. But kmalloc doesn't like to give out chunks that size.
    >
    >So, how do I malloc 2MB of contiguous space?
    >
    >-Dave

    Hi Dave,

    Perhaps __get_dma_pages? I've never used this function myself, but it's
    described in http://www.xml.com/ldd/chapter/book/ch07.html
    Anyway, linux.kernel might be a better newsgroup to ask in.

    Mihai
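
    [Editor's note: __get_dma_pages takes an allocation *order* (log2 of the
    number of pages), not a byte count. A standalone userspace sketch of that
    arithmetic, assuming 4KB pages; the get_order() name mirrors the kernel
    helper, but this is only an illustration, not kernel code:]

    ```c
    #include <assert.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    /* Mimics the kernel's get_order(): smallest order such that
     * (PAGE_SIZE << order) >= size. */
    static unsigned int get_order(unsigned long size)
    {
        unsigned int order = 0;
        unsigned long chunk = PAGE_SIZE;

        while (chunk < size) {
            chunk <<= 1;
            order++;
        }
        return order;
    }

    int main(void)
    {
        unsigned long size = 2UL * 1024 * 1024;   /* the 2MB buffer */
        unsigned int order = get_order(size);

        /* order 9 => 2^9 = 512 pages of 4KB = 2MB */
        printf("order=%u pages=%lu\n", order, 1UL << order);
        assert(order == 9);
        return 0;
    }
    ```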

  5. Re: Allocating a large chunk of memory in the kernel (2MB)

    On Mon, 03 Jan 2005 15:58:17 +0100
    Mihai Osian wrote:

    > David Frascone wrote:
    >
    > >I need to allocate 2MB for a brain-dead driver. The existing driver likes to get its memory by using mem=, then ioremapping the top of physical memory to get raw access to it.
    > >
    > >Basically, the area is for DMA / shared memory for a PCI card.
    > >
    > >So, I'm trying to remove the mem= dependency and just allocate a large buffer to use with the driver. But kmalloc doesn't like to give out chunks that size.
    > >
    > >So, how do I malloc 2MB of contiguous space?
    > >
    > >-Dave

    > Hi Dave,
    >
    > Perhaps __get_dma_pages? I've never used this function myself, but it's
    > described in http://www.xml.com/ldd/chapter/book/ch07.html
    > Anyway, linux.kernel might be a better newsgroup to ask in.
    >


    Thanks, that's exactly the direction I'm looking in.

    I still don't think that's going to be enough, though. It looks like it maxes out at 2MB, and I might need two 2MB chunks. I'll give it a try, though. I'm leaning toward the bigphysmem patch now. It might give me what I need.

    I'll post when I find a solution.

    -Dave


    --
    David Frascone

    Don't ask me, I'm making this up as I go!

  6. Re: Allocating a large chunk of memory in the kernel (2MB)

    On Mon, 03 Jan 2005 15:02:08 GMT
    David Frascone wrote:

    > On Mon, 03 Jan 2005 15:58:17 +0100
    > Mihai Osian wrote:
    >
    > > David Frascone wrote:
    > >
    > > >I need to allocate 2MB for a brain-dead driver. The existing driver likes to get its memory by using mem=, then ioremapping the top of physical memory to get raw access to it.
    > > >
    > > >Basically, the area is for DMA / shared memory for a PCI card.
    > > >
    > > >So, I'm trying to remove the mem= dependency and just allocate a large buffer to use with the driver. But kmalloc doesn't like to give out chunks that size.
    > > >
    > > >So, how do I malloc 2MB of contiguous space?
    > > >
    > > >-Dave

    > > Hi Dave,
    > >
    > > Perhaps __get_dma_pages? I've never used this function myself, but it's
    > > described in http://www.xml.com/ldd/chapter/book/ch07.html
    > > Anyway, linux.kernel might be a better newsgroup to ask in.
    > >

    >
    > Thanks, that's exactly the direction I'm looking in.
    >
    > I still don't think that's going to be enough, though. It looks like it maxes out at 2MB, and I might need two 2MB chunks. I'll give it a try, though. I'm leaning toward the bigphysmem patch now. It might give me what I need.
    >
    > I'll post when I find a solution.
    >


    __get_dma_pages worked! It let me use an order of 9, for 512 4KB pages (2MB). And I was able to allocate both of the chunks I needed.

    Thanks so much for your help!

    -Dave

    --
    David Frascone

    Computers are not intelligent. They only think they are.
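
    [Editor's note: for later readers, the call pattern Dave describes might
    look roughly like this. It is an untested sketch against the 2.4-era API
    documented in LDD, not a complete driver; the alloc_bufs/free_bufs helper
    names are made up for illustration:]

    ```c
    /* Kernel-space sketch (2.4-era API) -- not a complete driver.
     * Allocates two physically contiguous 2MB buffers suitable for DMA;
     * order 9 means 2^9 = 512 pages of 4KB each. */
    #include <linux/mm.h>
    #include <linux/errno.h>

    #define BUF_ORDER 9  /* 512 * 4KB = 2MB */

    static unsigned long buf1, buf2;

    static int alloc_bufs(void)
    {
        buf1 = __get_dma_pages(GFP_KERNEL, BUF_ORDER);
        buf2 = __get_dma_pages(GFP_KERNEL, BUF_ORDER);
        if (!buf1 || !buf2) {
            /* Roll back whichever allocation succeeded. */
            if (buf1)
                free_pages(buf1, BUF_ORDER);
            if (buf2)
                free_pages(buf2, BUF_ORDER);
            return -ENOMEM;
        }
        return 0;
    }

    static void free_bufs(void)
    {
        /* Must free with the same order used to allocate. */
        free_pages(buf1, BUF_ORDER);
        free_pages(buf2, BUF_ORDER);
    }
    ```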
