
Thread: driver's knowledge of total RAM - max_pfn ...

  1. driver's knowledge of total RAM - max_pfn ...

    I'm working on a driver for a PCI device whose DMA engines are limited to
    32-bit addresses. Because of the addressing limitation I'm implementing one
    procedure for installations with more than 4GB of memory and another for
    installations with at most 4GB. If physical addresses are limited to 32
    bits, then the driver can use a more efficient method of performing DMA.
    However, if an application's I/O buffers might reside beyond the 4GB
    boundary, then the driver must resort to a less efficient method. The
    criteria for choosing between the two may change depending on the responses
    I get.
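
    To illustrate the two paths, here is roughly the mask setup I have in mind
    (a minimal sketch using the 2.6-era PCI DMA API; pdev is the struct pci_dev
    from probe, and the flag and helper names are just placeholders for my real
    bookkeeping):

        /* The device can only generate 32-bit bus addresses. */
        if (pci_set_dma_mask(pdev, 0xffffffffULL))
                return -EIO;    /* platform cannot support a 32-bit-only device */

        /*
         * What I want to decide once, at load time: take the direct
         * (efficient) path only if no RAM lives above 4GB.
         */
        use_direct_dma = !ram_above_4gb();    /* hypothetical helper */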

    From searching the internet, the only thing I've found is that the driver
    can estimate the amount of RAM by looking at max_pfn or num_physpages. I've
    played with these on a 2.6.9, 32-bit kernel and have found odd results.

    RAM < 4GB: With less than 4GB installed both max_pfn and num_physpages
    reflect the installed amount of memory minus some reserved memory. I've
    tried 512MB, 1GB and 2GB configurations.

    RAM >= 4GB: With 4GB or more installed both max_pfn and num_physpages
    reflect 512MB more than is installed. I've tried 4GB and 8GB configurations.
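
    For reference, here is roughly how I've been reading the two symbols (a
    throwaway test module, just a sketch -- whether max_pfn is exported to
    modules may depend on the kernel configuration):

        #include <linux/module.h>
        #include <linux/mm.h>       /* num_physpages */
        #include <linux/bootmem.h>  /* max_pfn */

        static int __init raminfo_init(void)
        {
                printk(KERN_INFO "max_pfn=%lu num_physpages=%lu (~%lu MB)\n",
                       max_pfn, num_physpages,
                       num_physpages >> (20 - PAGE_SHIFT));
                return 0;
        }

        static void __exit raminfo_exit(void)
        {
        }

        module_init(raminfo_init);
        module_exit(raminfo_exit);
        MODULE_LICENSE("GPL");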

    How can a driver identify at load time the amount of memory installed?

    Alternatively, how can a driver identify at load time whether the system is
    using memory beyond the 4GB mark?

    Don



  2. Re: driver's knowledge of total RAM - max_pfn ...

    "Don" wrote in
    news:8aednfJTU6PnKr3VnZ2dnUVZ_hKdnZ2d@hiwaay1:

    [snip; sections rearranged -- see below]

    > RAM >= 4GB: With 4GB or more installed both max_pfn and num_physpages
    > reflect 512MB more than is installed. I've tried 4GB and 8GB
    > configurations.
    >
    > How can a driver identify at load time the amount of memory installed?


    [Caveat: Most of this is i386-specific info and applies specifically to the
    most common model (flatmem / 4K pages)]

    More memory is indicated than is actually installed because the BIOS
    rearranges the physical addresses of the memory to leave a "hole" for
    various devices on the motherboard as well as any add-in PCI devices.

    The true physical address layout is typically retrieved from the BIOS with
    "int 15H" while still in real mode (it's referred to as "e820" information
    in the kernel because e820h is the value you place into the EAX register
    before executing "int 15H" to request this info from the BIOS). The kernel
    prints this info early in the boot process. For example, on one system
    with 4GB, dmesg shows this:
    BIOS-provided physical RAM map:
    BIOS-e820: 0000000000000000 - 000000000009d000 (usable)
    BIOS-e820: 000000000009d000 - 00000000000a0000 (reserved)
    BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)
    BIOS-e820: 0000000000100000 - 00000000cffbce00 (usable)
    BIOS-e820: 00000000cffbce00 - 00000000cffd0000 (ACPI data)
    BIOS-e820: 00000000cffd0000 - 00000000d0000000 (reserved)
    BIOS-e820: 00000000e0000000 - 00000000f0000000 (reserved)
    BIOS-e820: 00000000fec00000 - 0000000100000000 (reserved)
    BIOS-e820: 0000000100000000 - 0000000130000000 (usable)
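
    Adding up just the "usable" ranges from that map gives 0x9d000 +
    (0xcffbce00 - 0x100000) + (0x130000000 - 0x100000000) = 0xfff59e00 bytes,
    i.e. just under the 4GB that is physically installed, even though the map
    extends to 0x130000000. The 0x30000000 (768MB) region between 0xd0000000
    and 0x100000000 is the device hole, and the RAM displaced by it is what
    shows up as the usable range above 0x100000000.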

    Both max_pfn and num_physpages are set to encompass all of the memory
    above: this also turns out to be the number of entries in the mem_map array
    (an array of 'struct page' that contains an entry for every page of
    physical RAM). If there is a hole, the array will also contain entries for
    the pages corresponding to the hole (they're simply marked "Reserved"). So
    for the system above, max_pfn would be set to (0x130000000 / 4096 [page
    size]) = 1245184 and there are that many entries in the mem_map array.
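
    If you want to see that from a module, something along these lines would
    show it (just a sketch, assuming flatmem so that mem_map[pfn] is valid for
    every pfn below max_pfn; pages in the hole come back marked Reserved):

        #include <linux/mm.h>       /* pfn_to_page(), PageReserved() */
        #include <linux/bootmem.h>  /* max_pfn */

        unsigned long pfn, reserved = 0;

        for (pfn = 0; pfn < max_pfn; pfn++)
                if (PageReserved(pfn_to_page(pfn)))
                        reserved++;    /* BIOS areas, the device hole, ... */

        printk(KERN_INFO "%lu pfns total, %lu reserved\n", max_pfn, reserved);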

    > I'm working on a driver for a PCI device whose DMA engines are limited
    > to 32-bit addresses. Because of the addressing limitation I'm
    > implementing one procedure for installations with more than 4GB of
    > memory and another for installations with at most 4GB. If physical
    > addresses are limited to 32 bits, then the driver can use a more
    > efficient method of performing DMA. However, if an application's I/O
    > buffers might reside beyond the 4GB boundary, then the driver must
    > resort to a less efficient method. The criteria for choosing between
    > the two may change depending on the responses I get.


    As you can see from the above, if max_pfn is above (0x100000000 / 4096) =
    1048576, that indicates that there *is* memory in the system whose physical
    address is not representable in 32 bits. What you need to do about it
    depends on what kind of device you're driving and how the memory area is
    given to the driver.
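
    In other words, a load-time check could be as simple as this (a sketch, not
    code from your driver -- whether max_pfn is directly visible to a module
    depends on the kernel, and the usual alternative is to just set a 32-bit
    mask with pci_set_dma_mask() and let the DMA mapping layer deal with
    buffers that happen to live above 4GB):

        #include <linux/bootmem.h>   /* max_pfn */
        #include <asm/page.h>        /* PAGE_SHIFT */

        static inline int ram_above_4gb(void)
        {
                /* pfn of the 4GB boundary: 0x100000000 >> PAGE_SHIFT = 1048576 */
                return max_pfn > (1UL << (32 - PAGE_SHIFT));
        }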

    GH
