[PATCH 0 of 8] x86: use PTE_MASK consistently



Thread: [PATCH 0 of 8] x86: use PTE_MASK consistently

  1. [PATCH 0 of 8] x86: use PTE_MASK consistently

    Hi all,

    Here's a series to rationalize the use of PTE_MASK and remove some
    amount of ad-hocery.

    The gist of the series is:
    1. Fix the definition of PTE_MASK so that it's equally applicable in
    all pagetable modes
    2. Use it consistently

    I haven't tried to address the *_bad() stuff, other than to convert
    pmd_bad_* to use PTE_MASK.

    I've compile tested it a bit and run it on 32-bit PAE (native and
    Xen), but I haven't tested it with >4G memory, non-PAE or 64-bit. In
    other words, it needs some time in Ingo's torture machine.

    J

    --
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  2. [PATCH 4 of 8] x86: use PTE_MASK in 32-bit PAE

    Use PTE_MASK in 3-level pagetables (ie, 32-bit PAE).

    Signed-off-by: Jeremy Fitzhardinge
    ---
    include/asm-x86/pgtable-3level.h | 6 +++---
    1 file changed, 3 insertions(+), 3 deletions(-)

    diff --git a/include/asm-x86/pgtable-3level.h b/include/asm-x86/pgtable-3level.h
    --- a/include/asm-x86/pgtable-3level.h
    +++ b/include/asm-x86/pgtable-3level.h
    @@ -120,9 +120,9 @@
    write_cr3(pgd);
    }

    -#define pud_page(pud) ((struct page *) __va(pud_val(pud) & PAGE_MASK))
    +#define pud_page(pud) ((struct page *) __va(pud_val(pud) & PTE_MASK))

    -#define pud_page_vaddr(pud) ((unsigned long) __va(pud_val(pud) & PAGE_MASK))
    +#define pud_page_vaddr(pud) ((unsigned long) __va(pud_val(pud) & PTE_MASK))


    /* Find an entry in the second-level page table.. */
    @@ -160,7 +160,7 @@

    static inline unsigned long pte_pfn(pte_t pte)
    {
    - return (pte_val(pte) & ~_PAGE_NX) >> PAGE_SHIFT;
    + return (pte_val(pte) & PTE_MASK) >> PAGE_SHIFT;
    }

    /*



  3. [PATCH 6 of 8] x86: clarify use of _PAGE_CHG_MASK

    _PAGE_CHG_MASK is defined as the set of bits not updated by
    pte_modify(); specifically, the pfn itself, and the Accessed and Dirty
    bits (which are updated by hardware).

    Signed-off-by: Jeremy Fitzhardinge
    ---
    include/asm-x86/pgtable.h | 8 +++-----
    1 file changed, 3 insertions(+), 5 deletions(-)

    diff --git a/include/asm-x86/pgtable.h b/include/asm-x86/pgtable.h
    --- a/include/asm-x86/pgtable.h
    +++ b/include/asm-x86/pgtable.h
    @@ -58,6 +58,7 @@
    #define _KERNPG_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | \
    _PAGE_DIRTY)

    +/* Set of bits not changed in pte_modify */
    #define _PAGE_CHG_MASK (PTE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)

    #define _PAGE_CACHE_MASK (_PAGE_PCD | _PAGE_PWT)
    @@ -285,11 +286,8 @@
    {
    pteval_t val = pte_val(pte);

    - /*
    - * Chop off the NX bit (if present), and add the NX portion of
    - * the newprot (if present):
    - */
    - val &= _PAGE_CHG_MASK & ~_PAGE_NX;
    + /* Extract unchanged bits from pte */
    + val &= _PAGE_CHG_MASK;
    val |= pgprot_val(newprot) & __supported_pte_mask;

    return __pte(val);



  4. [PATCH 8 of 8] xen: use PTE_MASK in pte_mfn()

    Use PTE_MASK to extract mfn from pte.

    Signed-off-by: Jeremy Fitzhardinge
    ---
    include/asm-x86/xen/page.h | 2 +-
    1 file changed, 1 insertion(+), 1 deletion(-)

    diff --git a/include/asm-x86/xen/page.h b/include/asm-x86/xen/page.h
    --- a/include/asm-x86/xen/page.h
    +++ b/include/asm-x86/xen/page.h
    @@ -127,7 +127,7 @@

    static inline unsigned long pte_mfn(pte_t pte)
    {
    - return (pte.pte & ~_PAGE_NX) >> PAGE_SHIFT;
    + return (pte.pte & PTE_MASK) >> PAGE_SHIFT;
    }

    static inline pte_t mfn_pte(unsigned long page_nr, pgprot_t pgprot)



  5. [PATCH 3 of 8] x86: rearrange __(VIRTUAL|PHYSICAL)_MASK

    Put the definitions of __(VIRTUAL|PHYSICAL)_MASK before their uses.

    Signed-off-by: Jeremy Fitzhardinge
    ---
    include/asm-x86/page.h | 6 +++---
    1 file changed, 3 insertions(+), 3 deletions(-)

    diff --git a/include/asm-x86/page.h b/include/asm-x86/page.h
    --- a/include/asm-x86/page.h
    +++ b/include/asm-x86/page.h
    @@ -9,6 +9,9 @@
    #define PAGE_MASK (~(PAGE_SIZE-1))

    #ifdef __KERNEL__
    +
    +#define __PHYSICAL_MASK ((phys_addr_t)(1ULL << __PHYSICAL_MASK_SHIFT) - 1)
    +#define __VIRTUAL_MASK ((1UL << __VIRTUAL_MASK_SHIFT) - 1)

    /* Cast PAGE_MASK to a signed type so that it is sign-extended if
    virtual addresses are 32-bits but physical addresses are larger
    @@ -28,9 +31,6 @@

    /* to align the pointer to the (next) page boundary */
    #define PAGE_ALIGN(addr) (((addr)+PAGE_SIZE-1)&PAGE_MASK)
    -
    -#define __PHYSICAL_MASK ((phys_addr_t)(1ULL << __PHYSICAL_MASK_SHIFT) - 1)
    -#define __VIRTUAL_MASK ((1UL << __VIRTUAL_MASK_SHIFT) - 1)

    #ifndef __ASSEMBLY__
    #include



  6. [PATCH 1 of 8] x86: define PTE_MASK in a universally useful way

    Define PTE_MASK so that it contains a meaningful value for all x86
    pagetable configurations. Previously it was defined as a "long" which
    means that it was too short to cover a 32-bit PAE pte entry.

    It is now defined as a pteval_t, which is an integer type long enough
    to contain a full pte (or pmd, pud, pgd).

    Signed-off-by: Jeremy Fitzhardinge
    ---
    include/asm-x86/page.h | 13 +++++++++----
    1 file changed, 9 insertions(+), 4 deletions(-)

    diff --git a/include/asm-x86/page.h b/include/asm-x86/page.h
    --- a/include/asm-x86/page.h
    +++ b/include/asm-x86/page.h
    @@ -10,8 +10,13 @@

    #ifdef __KERNEL__

    -#define PHYSICAL_PAGE_MASK (PAGE_MASK & __PHYSICAL_MASK)
    -#define PTE_MASK (_AT(long, PHYSICAL_PAGE_MASK))
    +/* Cast PAGE_MASK to a signed type so that it is sign-extended if
    + virtual addresses are 32-bits but physical addresses are larger
    + (ie, 32-bit PAE). */
    +#define PHYSICAL_PAGE_MASK (((signed long)PAGE_MASK) & __PHYSICAL_MASK)
    +
    +/* PTE_MASK extracts the PFN from a (pte|pmd|pud|pgd)val_t */
    +#define PTE_MASK ((pteval_t)PHYSICAL_PAGE_MASK)

    #define PMD_PAGE_SIZE (_AC(1, UL) << PMD_SHIFT)
    #define PMD_PAGE_MASK (~(PMD_PAGE_SIZE-1))
    @@ -24,8 +29,8 @@
    /* to align the pointer to the (next) page boundary */
    #define PAGE_ALIGN(addr) (((addr)+PAGE_SIZE-1)&PAGE_MASK)

    -#define __PHYSICAL_MASK _AT(phys_addr_t, (_AC(1,ULL) << __PHYSICAL_MASK_SHIFT) - 1)
    -#define __VIRTUAL_MASK ((_AC(1,UL) << __VIRTUAL_MASK_SHIFT) - 1)
    +#define __PHYSICAL_MASK ((((phys_addr_t)1) << __PHYSICAL_MASK_SHIFT) - 1)
    +#define __VIRTUAL_MASK ((1UL << __VIRTUAL_MASK_SHIFT) - 1)

    #ifndef __ASSEMBLY__
    #include



  7. [PATCH 5 of 8] x86: use PTE_MASK in pgtable_32.h

    ---
    include/asm-x86/pgtable_32.h | 6 +++---
    1 file changed, 3 insertions(+), 3 deletions(-)

    diff --git a/include/asm-x86/pgtable_32.h b/include/asm-x86/pgtable_32.h
    --- a/include/asm-x86/pgtable_32.h
    +++ b/include/asm-x86/pgtable_32.h
    @@ -98,9 +98,9 @@
    extern int pmd_bad(pmd_t pmd);

    #define pmd_bad_v1(x) \
    - (_KERNPG_TABLE != (pmd_val((x)) & ~(PAGE_MASK | _PAGE_USER)))
    + (_KERNPG_TABLE != (pmd_val((x)) & ~(PTE_MASK | _PAGE_USER)))
    #define pmd_bad_v2(x) \
    - (_KERNPG_TABLE != (pmd_val((x)) & ~(PAGE_MASK | _PAGE_USER | \
    + (_KERNPG_TABLE != (pmd_val((x)) & ~(PTE_MASK | _PAGE_USER | \
    _PAGE_PSE | _PAGE_NX)))

    #define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
    @@ -172,7 +172,7 @@
    #define pmd_page(pmd) (pfn_to_page(pmd_val((pmd)) >> PAGE_SHIFT))

    #define pmd_page_vaddr(pmd) \
    - ((unsigned long)__va(pmd_val((pmd)) & PAGE_MASK))
    + ((unsigned long)__va(pmd_val((pmd)) & PTE_MASK))

    #if defined(CONFIG_HIGHPTE)
    #define pte_offset_map(dir, address) \



  8. [PATCH 2 of 8] x86: fix warning on 32-bit non-PAE

    Fix the warning:

    include2/asm/pgtable.h: In function `pte_modify':
    include2/asm/pgtable.h:290: warning: left shift count >= width of type

    On 32-bit non-PAE the virtual and physical addresses are both 32 bits,
    so it ends up evaluating 1<<32. Do the shift as a 64-bit shift, then
    cast to the appropriate size. This should all be done at compile time,
    and so has no effect on the generated code.

    Signed-off-by: Jeremy Fitzhardinge
    ---
    include/asm-x86/page.h | 2 +-
    1 file changed, 1 insertion(+), 1 deletion(-)

    diff --git a/include/asm-x86/page.h b/include/asm-x86/page.h
    --- a/include/asm-x86/page.h
    +++ b/include/asm-x86/page.h
    @@ -29,7 +29,7 @@
    /* to align the pointer to the (next) page boundary */
    #define PAGE_ALIGN(addr) (((addr)+PAGE_SIZE-1)&PAGE_MASK)

    -#define __PHYSICAL_MASK ((((phys_addr_t)1) << __PHYSICAL_MASK_SHIFT) - 1)
    +#define __PHYSICAL_MASK ((phys_addr_t)(1ULL << __PHYSICAL_MASK_SHIFT) - 1)
    #define __VIRTUAL_MASK ((1UL << __VIRTUAL_MASK_SHIFT) - 1)

    #ifndef __ASSEMBLY__



  9. [PATCH 7 of 8] x86: use PTE_MASK rather than ad-hoc mask

    Use ~PTE_MASK to extract the non-pfn parts of the pte (ie, the pte
    flags), rather than constructing an ad-hoc mask.

    Signed-off-by: Jeremy Fitzhardinge
    ---
    include/asm-x86/pgtable.h | 2 +-
    1 file changed, 1 insertion(+), 1 deletion(-)

    diff --git a/include/asm-x86/pgtable.h b/include/asm-x86/pgtable.h
    --- a/include/asm-x86/pgtable.h
    +++ b/include/asm-x86/pgtable.h
    @@ -293,7 +293,7 @@
    return __pte(val);
    }

    -#define pte_pgprot(x) __pgprot(pte_val(x) & (0xfff | _PAGE_NX))
    +#define pte_pgprot(x) __pgprot(pte_val(x) & ~PTE_MASK)

    #define canon_pgprot(p) __pgprot(pgprot_val(p) & __supported_pte_mask)




  10. Re: [PATCH 5 of 8] x86: use PTE_MASK in pgtable_32.h

    On Fri, 9 May 2008, Jeremy Fitzhardinge wrote:

    > ---
    > include/asm-x86/pgtable_32.h | 6 +++---
    > 1 file changed, 3 insertions(+), 3 deletions(-)
    >
    > diff --git a/include/asm-x86/pgtable_32.h b/include/asm-x86/pgtable_32.h
    > --- a/include/asm-x86/pgtable_32.h
    > +++ b/include/asm-x86/pgtable_32.h
    > @@ -98,9 +98,9 @@
    > extern int pmd_bad(pmd_t pmd);
    >
    > #define pmd_bad_v1(x) \
    > - (_KERNPG_TABLE != (pmd_val((x)) & ~(PAGE_MASK | _PAGE_USER)))
    > + (_KERNPG_TABLE != (pmd_val((x)) & ~(PTE_MASK | _PAGE_USER)))
    > #define pmd_bad_v2(x) \
    > - (_KERNPG_TABLE != (pmd_val((x)) & ~(PAGE_MASK | _PAGE_USER | \
    > + (_KERNPG_TABLE != (pmd_val((x)) & ~(PTE_MASK | _PAGE_USER | \


    that's gone from mainline already. Hugh's patch restored the old pmd_bad check.

    Thanks,
    tglx

  11. Re: [PATCH 5 of 8] x86: use PTE_MASK in pgtable_32.h

    Thomas Gleixner wrote:
    > that's gone from mainline already. Hugh's patch restored the old pmd_bad check.
    >


    Here's the rebased patch.

    ---
    include/asm-x86/pgtable_32.h | 4 ++--
    1 file changed, 2 insertions(+), 2 deletions(-)

    ===================================================================
    --- a/include/asm-x86/pgtable_32.h
    +++ b/include/asm-x86/pgtable_32.h
    @@ -94,7 +94,7 @@
    /* To avoid harmful races, pmd_none(x) should check only the lower when PAE */
    #define pmd_none(x) (!(unsigned long)pmd_val((x)))
    #define pmd_present(x) (pmd_val((x)) & _PAGE_PRESENT)
    -#define pmd_bad(x) ((pmd_val(x) & (~PAGE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
    +#define pmd_bad(x) ((pmd_val(x) & (~PTE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)

    #define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))

    @@ -165,7 +165,7 @@
    #define pmd_page(pmd) (pfn_to_page(pmd_val((pmd)) >> PAGE_SHIFT))

    #define pmd_page_vaddr(pmd) \
    - ((unsigned long)__va(pmd_val((pmd)) & PAGE_MASK))
    + ((unsigned long)__va(pmd_val((pmd)) & PTE_MASK))

    #if defined(CONFIG_HIGHPTE)
    #define pte_offset_map(dir, address) \



  12. Re: [PATCH 0 of 8] x86: use PTE_MASK consistently


    * Jeremy Fitzhardinge wrote:

    > Here's a series to rationalize the use of PTE_MASK and remove some
    > amount of ad-hocery.
    >
    > The gist of the series is:
    > 1. Fix the definition of PTE_MASK so that it's equally applicable in
    > all pagetable modes
    > 2. Use it consistently
    >
    > I haven't tried to address the *_bad() stuff, other than to convert
    > pmd_bad_* to use PTE_MASK.
    >
    > I've compile tested it a bit and run it on 32-bit PAE (native and
    > Xen), but I haven't tested it with >4G memory, non-PAE or 64-bit. In
    > other words, it needs some time in Ingo's torture machine.


    applied, thanks. This patchset has held up fine so far in overnight
    testing, nice work.

    Ingo
