What is in ubifs-2.6.git - Kernel


  1. What is in ubifs-2.6.git

    Hi,

    here is the stuff we have scheduled for 2.6.28. The patches
    contain various small fixes and cleanups, but there are some
    interesting things as well.

    The first interesting thing is the new "bulk read" functionality.
    The idea is that many NAND flashes support bulk reading in some
    form. For example, OneNAND has a "read-while-load" feature, which
    allows reading consecutive NAND pages faster than reading them
    one-by-one.

    So we have made UBIFS benefit from this feature and introduced a
    new "bulk_read" mount option. With this option enabled, UBIFS reads
    files a little ahead if the file data sits at consecutive physical
    addresses. For example, if user-space asks to read page 0 of a
    file, and pages 0-4 are at consecutive flash addresses, UBIFS reads
    pages 0-4 and populates the page cache with them.

    Note, this is disabled by default and UBIFS has to be explicitly
    mounted with the "bulk_read" option. The reason for this is that
    we still consider this feature experimental.

    Note, UBIFS does not use VFS read-ahead and actually explicitly
    disables it. This is because MTD is synchronous and all I/O is
    done synchronously, so read-ahead actually slows things down for
    UBIFS instead of improving them. The "bulk read" feature is thus
    basically an internal UBIFS read-ahead implementation.

    We are able to gain 4-5MiB/s of read speed on OneNAND with bulk
    read enabled.

    The second interesting thing is the new "no_chk_data_crc" mount
    option, which disables CRC32 checking of data.

    By default, UBIFS always checks the CRC of everything it reads from
    the flash. On ARM platforms this accounts for ~30% of total CPU
    usage in profiles, which is quite high. But many modern flashes are
    very reliable (e.g., OneNAND), and one does not need that level of
    protection. So it is now possible to disable CRC checking for
    _data_. However:

    * internal indexing information CRC is always checked;
    * when replaying the journal, data CRC is always checked;
    * on write, CRC is always calculated.

    With this mount option we gain another 4-5MiB/s of read speed on
    OneNAND. Together with bulk-read, reads become ~10MiB/s faster.

    Adrian Hunter (11):
    UBIFS: add bulk-read facility
    UBIFS: add no_chk_data_crc mount option
    UBIFS: improve znode splitting rules
    UBIFS: correct key comparison
    UBIFS: ensure data read beyond i_size is zeroed out correctly
    UBIFS: allow for sync_fs when read-only
    UBIFS: improve garbage collection
    UBIFS: fix bulk-read handling uptodate pages
    UBIFS: add more debugging messages for LPT
    UBIFS: correct condition to eliminate unnecessary assignment
    UBIFS: check buffer length when scanning for LPT nodes

    Artem Bityutskiy (9):
    UBIFS: add a print, fix comments and more minor stuff
    UBIFS: inline one-line functions
    UBIFS: check data CRC when in error state
    UBIFS: use bit-fields when possible
    UBIFS: fix races in bit-fields
    UBIFS: fix commentary
    UBIFS: update dbg_dump_inode
    UBIFS: correct comment for commit_on_unmount
    UBIFS: commit on sync_fs

    Hirofumi Nakagawa (1):
    UBIFS: remove unneeded unlikely()

    Julien Brunel (1):
    UBIFS: use an IS_ERR test rather than a NULL test

    Documentation/filesystems/ubifs.txt | 9 +
    fs/ubifs/budget.c | 26 ++--
    fs/ubifs/debug.c | 79 +++++++--
    fs/ubifs/debug.h | 6 +
    fs/ubifs/file.c | 260 ++++++++++++++++++++++++++
    fs/ubifs/find.c | 4 +-
    fs/ubifs/gc.c | 90 ++++++++--
    fs/ubifs/io.c | 12 +-
    fs/ubifs/key.h | 22 ++-
    fs/ubifs/lprops.c | 34 +----
    fs/ubifs/lpt.c | 3 +-
    fs/ubifs/lpt_commit.c | 187 ++++++++++++++++++-
    fs/ubifs/misc.h | 27 +++
    fs/ubifs/scan.c | 2 +-
    fs/ubifs/super.c | 109 +++++++++--
    fs/ubifs/tnc.c | 345 ++++++++++++++++++++++++++++++++---
    fs/ubifs/tnc_misc.c | 4 +-
    fs/ubifs/ubifs-media.h | 1 -
    fs/ubifs/ubifs.h | 85 +++++++--
    fs/ubifs/xattr.c | 2 +-
    20 files changed, 1149 insertions(+), 158 deletions(-)

    --
    Best regards,
    Artem Bityutskiy (Битюцкий Артём)
    --
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  2. [PATCH] UBIFS: correct condition to eliminate unnecessary assignment

    From: Adrian Hunter

    Signed-off-by: Adrian Hunter
    ---
    fs/ubifs/tnc.c | 2 +-
    1 files changed, 1 insertions(+), 1 deletions(-)

    diff --git a/fs/ubifs/tnc.c b/fs/ubifs/tnc.c
    index e0878a4..d27fd91 100644
    --- a/fs/ubifs/tnc.c
    +++ b/fs/ubifs/tnc.c
    @@ -1600,7 +1600,7 @@ out:
    * An enormous hole could cause bulk-read to encompass too many
    * page cache pages, so limit the number here.
    */
    - if (bu->blk_cnt >= UBIFS_MAX_BULK_READ)
    + if (bu->blk_cnt > UBIFS_MAX_BULK_READ)
    bu->blk_cnt = UBIFS_MAX_BULK_READ;
    /*
    * Ensure that bulk-read covers a whole number of page cache
    --
    1.5.4.1


  3. [PATCH] UBIFS: inline one-line functions

    From: Artem Bityutskiy

    'ubifs_get_lprops()' and 'ubifs_release_lprops()' basically wrap
    mutex lock and unlock. We have them because we want the lprops
    subsystem to be separate and as independent as possible, and we
    planned better locking rules for lprops.

    Anyway, because they are short, it is better to inline them.

    Signed-off-by: Artem Bityutskiy
    ---
    fs/ubifs/lprops.c | 28 ----------------------------
    fs/ubifs/misc.h | 27 +++++++++++++++++++++++++++
    fs/ubifs/ubifs.h | 2 --
    3 files changed, 27 insertions(+), 30 deletions(-)

    diff --git a/fs/ubifs/lprops.c b/fs/ubifs/lprops.c
    index 3659b88..f27176e 100644
    --- a/fs/ubifs/lprops.c
    +++ b/fs/ubifs/lprops.c
    @@ -461,18 +461,6 @@ static void change_category(struct ubifs_info *c, struct ubifs_lprops *lprops)
    }

    /**
    - * ubifs_get_lprops - get reference to LEB properties.
    - * @c: the UBIFS file-system description object
    - *
    - * This function locks lprops. Lprops have to be unlocked by
    - * 'ubifs_release_lprops()'.
    - */
    -void ubifs_get_lprops(struct ubifs_info *c)
    -{
    - mutex_lock(&c->lp_mutex);
    -}
    -
    -/**
    * calc_dark - calculate LEB dark space size.
    * @c: the UBIFS file-system description object
    * @spc: amount of free and dirty space in the LEB
    @@ -643,22 +631,6 @@ const struct ubifs_lprops *ubifs_change_lp(struct ubifs_info *c,
    }

    /**
    - * ubifs_release_lprops - release lprops lock.
    - * @c: the UBIFS file-system description object
    - *
    - * This function has to be called after each 'ubifs_get_lprops()' call to
    - * unlock lprops.
    - */
    -void ubifs_release_lprops(struct ubifs_info *c)
    -{
    - ubifs_assert(mutex_is_locked(&c->lp_mutex));
    - ubifs_assert(c->lst.empty_lebs >= 0 &&
    - c->lst.empty_lebs <= c->main_lebs);
    -
    - mutex_unlock(&c->lp_mutex);
    -}
    -
    -/**
    * ubifs_get_lp_stats - get lprops statistics.
    * @c: UBIFS file-system description object
    * @st: return statistics
    diff --git a/fs/ubifs/misc.h b/fs/ubifs/misc.h
    index 4c12a92..4fa81d8 100644
    --- a/fs/ubifs/misc.h
    +++ b/fs/ubifs/misc.h
    @@ -310,4 +310,31 @@ static inline int ubifs_tnc_lookup(struct ubifs_info *c,
    return ubifs_tnc_locate(c, key, node, NULL, NULL);
    }

    +/**
    + * ubifs_get_lprops - get reference to LEB properties.
    + * @c: the UBIFS file-system description object
    + *
    + * This function locks lprops. Lprops have to be unlocked by
    + * 'ubifs_release_lprops()'.
    + */
    +static inline void ubifs_get_lprops(struct ubifs_info *c)
    +{
    + mutex_lock(&c->lp_mutex);
    +}
    +
    +/**
    + * ubifs_release_lprops - release lprops lock.
    + * @c: the UBIFS file-system description object
    + *
    + * This function has to be called after each 'ubifs_get_lprops()' call to
    + * unlock lprops.
    + */
    +static inline void ubifs_release_lprops(struct ubifs_info *c)
    +{
    + ubifs_assert(mutex_is_locked(&c->lp_mutex));
    + ubifs_assert(c->lst.empty_lebs >= 0 &&
    + c->lst.empty_lebs <= c->main_lebs);
    + mutex_unlock(&c->lp_mutex);
    +}
    +
    #endif /* __UBIFS_MISC_H__ */
    diff --git a/fs/ubifs/ubifs.h b/fs/ubifs/ubifs.h
    index 17c620b..ce86549 100644
    --- a/fs/ubifs/ubifs.h
    +++ b/fs/ubifs/ubifs.h
    @@ -1586,12 +1586,10 @@ int ubifs_lpt_post_commit(struct ubifs_info *c);
    void ubifs_lpt_free(struct ubifs_info *c, int wr_only);

    /* lprops.c */
    -void ubifs_get_lprops(struct ubifs_info *c);
    const struct ubifs_lprops *ubifs_change_lp(struct ubifs_info *c,
    const struct ubifs_lprops *lp,
    int free, int dirty, int flags,
    int idx_gc_cnt);
    -void ubifs_release_lprops(struct ubifs_info *c);
    void ubifs_get_lp_stats(struct ubifs_info *c, struct ubifs_lp_stats *stats);
    void ubifs_add_to_cat(struct ubifs_info *c, struct ubifs_lprops *lprops,
    int cat);
    --
    1.5.4.1


  4. [PATCH] UBIFS: ensure data read beyond i_size is zeroed out correctly

    From: Adrian Hunter

    Signed-off-by: Adrian Hunter
    ---
    fs/ubifs/file.c | 10 ++++++++--
    fs/ubifs/ubifs-media.h | 1 -
    2 files changed, 8 insertions(+), 3 deletions(-)

    diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
    index cdcfe95..2f20a49 100644
    --- a/fs/ubifs/file.c
    +++ b/fs/ubifs/file.c
    @@ -147,6 +147,12 @@ static int do_readpage(struct page *page)
    err = ret;
    if (err != -ENOENT)
    break;
    + } else if (block + 1 == beyond) {
    + int dlen = le32_to_cpu(dn->size);
    + int ilen = i_size & (UBIFS_BLOCK_SIZE - 1);
    +
    + if (ilen && ilen < dlen)
    + memset(addr + ilen, 0, dlen - ilen);
    }
    }
    if (++i >= UBIFS_BLOCKS_PER_PAGE)
    @@ -601,7 +607,7 @@ static int populate_page(struct ubifs_info *c, struct page *page,

    addr = zaddr = kmap(page);

    - end_index = (i_size + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
    + end_index = (i_size - 1) >> PAGE_CACHE_SHIFT;
    if (!i_size || page->index > end_index) {
    memset(addr, 0, PAGE_CACHE_SIZE);
    goto out_hole;
    @@ -649,7 +655,7 @@ static int populate_page(struct ubifs_info *c, struct page *page,
    if (end_index == page->index) {
    int len = i_size & (PAGE_CACHE_SIZE - 1);

    - if (len < read)
    + if (len && len < read)
    memset(zaddr + len, 0, read - len);
    }

    diff --git a/fs/ubifs/ubifs-media.h b/fs/ubifs/ubifs-media.h
    index a9ecbd9..0b37804 100644
    --- a/fs/ubifs/ubifs-media.h
    +++ b/fs/ubifs/ubifs-media.h
    @@ -75,7 +75,6 @@
    */
    #define UBIFS_BLOCK_SIZE 4096
    #define UBIFS_BLOCK_SHIFT 12
    -#define UBIFS_BLOCK_MASK 0x00000FFF

    /* UBIFS padding byte pattern (must not be first or last byte of node magic) */
    #define UBIFS_PADDING_BYTE 0xCE
    --
    1.5.4.1


  5. [PATCH] UBIFS: correct comment for commit_on_unmount

    From: Artem Bityutskiy

    Signed-off-by: Artem Bityutskiy
    ---
    fs/ubifs/super.c | 9 +++------
    1 files changed, 3 insertions(+), 6 deletions(-)

    diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
    index cf078b5..dae1c62 100644
    --- a/fs/ubifs/super.c
    +++ b/fs/ubifs/super.c
    @@ -1465,12 +1465,9 @@ out:
    * commit_on_unmount - commit the journal when un-mounting.
    * @c: UBIFS file-system description object
    *
    - * This function is called during un-mounting and it commits the journal unless
    - * the "fast unmount" mode is enabled. It also avoids committing the journal if
    - * it contains too few data.
    - *
    - * Sometimes recovery requires the journal to be committed at least once, and
    - * this function takes care about this.
    + * This function is called during un-mounting and re-mounting, and it commits
    + * the journal unless the "fast unmount" mode is enabled. It also avoids
    + * committing the journal if it contains too few data.
    */
    static void commit_on_unmount(struct ubifs_info *c)
    {
    --
    1.5.4.1


  6. [PATCH] UBIFS: commit on sync_fs

    From: Artem Bityutskiy

    Commit the journal when the FS is sync'ed. This makes statfs
    provide a better free space report, and we advise our users to
    sync the FS anyway if they want a more accurate statfs report.

    Signed-off-by: Artem Bityutskiy
    ---
    fs/ubifs/super.c | 12 ++++++++++++
    1 files changed, 12 insertions(+), 0 deletions(-)

    diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
    index dae1c62..7e1f3ef 100644
    --- a/fs/ubifs/super.c
    +++ b/fs/ubifs/super.c
    @@ -418,6 +418,7 @@ static int ubifs_sync_fs(struct super_block *sb, int wait)
    {
    struct ubifs_info *c = sb->s_fs_info;
    int i, ret = 0, err;
    + long long bud_bytes;

    if (c->jheads)
    for (i = 0; i < c->jhead_cnt; i++) {
    @@ -425,6 +426,17 @@ static int ubifs_sync_fs(struct super_block *sb, int wait)
    if (err && !ret)
    ret = err;
    }
    +
    + /* Commit the journal unless it has too few data */
    + spin_lock(&c->buds_lock);
    + bud_bytes = c->bud_bytes;
    + spin_unlock(&c->buds_lock);
    + if (bud_bytes > c->leb_size) {
    + err = ubifs_run_commit(c);
    + if (err)
    + return err;
    + }
    +
    /*
    * We ought to call sync for c->ubi but it does not have one. If it had
    * it would in turn call mtd->sync, however mtd operations are
    --
    1.5.4.1


  7. [PATCH] UBIFS: add no_chk_data_crc mount option

    From: Adrian Hunter

    UBIFS read performance can be improved by skipping the CRC
    check when data nodes are read. This option can be used if
    the underlying media is considered highly reliable. Note that
    CRCs are always checked for metadata.

    Read speed on an ARM platform with OneNAND goes from 19 MiB/s
    to 27 MiB/s with data CRC checking disabled.

    Signed-off-by: Adrian Hunter
    ---
    Documentation/filesystems/ubifs.txt | 6 ++++++
    fs/ubifs/io.c | 11 ++++++++---
    fs/ubifs/scan.c | 2 +-
    fs/ubifs/super.c | 34 +++++++++++++++++++++++++++++++---
    fs/ubifs/tnc.c | 6 +++++-
    fs/ubifs/ubifs.h | 11 ++++++++++-
    6 files changed, 61 insertions(+), 9 deletions(-)

    diff --git a/Documentation/filesystems/ubifs.txt b/Documentation/filesystems/ubifs.txt
    index 340512c..dd84ea3 100644
    --- a/Documentation/filesystems/ubifs.txt
    +++ b/Documentation/filesystems/ubifs.txt
    @@ -89,6 +89,12 @@ fast_unmount do not commit on unmount; this option makes
    bulk_read read more in one go to take advantage of flash
    media that read faster sequentially
    no_bulk_read (*) do not bulk-read
    +no_chk_data_crc skip checking of CRCs on data nodes in order to
    + improve read performance. Use this option only
    + if the flash media is highly reliable. The effect
    + of this option is that corruption of the contents
    + of a file can go unnoticed.
    +chk_data_crc (*) do not skip checking CRCs on data nodes


    Quick usage instructions
    diff --git a/fs/ubifs/io.c b/fs/ubifs/io.c
    index 054363f..40e2790 100644
    --- a/fs/ubifs/io.c
    +++ b/fs/ubifs/io.c
    @@ -74,6 +74,7 @@ void ubifs_ro_mode(struct ubifs_info *c, int err)
    * @lnum: logical eraseblock number
    * @offs: offset within the logical eraseblock
    * @quiet: print no messages
    + * @chk_crc: indicates whether to always check the CRC
    *
    * This function checks node magic number and CRC checksum. This function also
    * validates node length to prevent UBIFS from becoming crazy when an attacker
    @@ -85,7 +86,7 @@ void ubifs_ro_mode(struct ubifs_info *c, int err)
    * or magic.
    */
    int ubifs_check_node(const struct ubifs_info *c, const void *buf, int lnum,
    - int offs, int quiet)
    + int offs, int quiet, int chk_crc)
    {
    int err = -EINVAL, type, node_len;
    uint32_t crc, node_crc, magic;
    @@ -121,6 +122,10 @@ int ubifs_check_node(const struct ubifs_info *c, const void *buf, int lnum,
    node_len > c->ranges[type].max_len)
    goto out_len;

    + if (!chk_crc && type == UBIFS_DATA_NODE && !c->always_chk_crc)
    + if (c->no_chk_data_crc)
    + return 0;
    +
    crc = crc32(UBIFS_CRC32_INIT, buf + 8, node_len - 8);
    node_crc = le32_to_cpu(ch->crc);
    if (crc != node_crc) {
    @@ -722,7 +727,7 @@ int ubifs_read_node_wbuf(struct ubifs_wbuf *wbuf, void *buf, int type, int len,
    goto out;
    }

    - err = ubifs_check_node(c, buf, lnum, offs, 0);
    + err = ubifs_check_node(c, buf, lnum, offs, 0, 0);
    if (err) {
    ubifs_err("expected node type %d", type);
    return err;
    @@ -781,7 +786,7 @@ int ubifs_read_node(const struct ubifs_info *c, void *buf, int type, int len,
    goto out;
    }

    - err = ubifs_check_node(c, buf, lnum, offs, 0);
    + err = ubifs_check_node(c, buf, lnum, offs, 0, 0);
    if (err) {
    ubifs_err("expected node type %d", type);
    return err;
    diff --git a/fs/ubifs/scan.c b/fs/ubifs/scan.c
    index acf5c5f..0ed8247 100644
    --- a/fs/ubifs/scan.c
    +++ b/fs/ubifs/scan.c
    @@ -87,7 +87,7 @@ int ubifs_scan_a_node(const struct ubifs_info *c, void *buf, int len, int lnum,

    dbg_scan("scanning %s", dbg_ntype(ch->node_type));

    - if (ubifs_check_node(c, buf, lnum, offs, quiet))
    + if (ubifs_check_node(c, buf, lnum, offs, quiet, 1))
    return SCANNED_A_CORRUPT_NODE;

    if (ch->node_type == UBIFS_PAD_NODE) {
    diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
    index b1c57e8..cf078b5 100644
    --- a/fs/ubifs/super.c
    +++ b/fs/ubifs/super.c
    @@ -406,6 +406,11 @@ static int ubifs_show_options(struct seq_file *s, struct vfsmount *mnt)
    else if (c->mount_opts.bulk_read == 1)
    seq_printf(s, ",no_bulk_read");

    + if (c->mount_opts.chk_data_crc == 2)
    + seq_printf(s, ",chk_data_crc");
    + else if (c->mount_opts.chk_data_crc == 1)
    + seq_printf(s, ",no_chk_data_crc");
    +
    return 0;
    }

    @@ -859,6 +864,8 @@ static int check_volume_empty(struct ubifs_info *c)
    * Opt_norm_unmount: run a journal commit before un-mounting
    * Opt_bulk_read: enable bulk-reads
    * Opt_no_bulk_read: disable bulk-reads
    + * Opt_chk_data_crc: check CRCs when reading data nodes
    + * Opt_no_chk_data_crc: do not check CRCs when reading data nodes
    * Opt_err: just end of array marker
    */
    enum {
    @@ -866,6 +873,8 @@ enum {
    Opt_norm_unmount,
    Opt_bulk_read,
    Opt_no_bulk_read,
    + Opt_chk_data_crc,
    + Opt_no_chk_data_crc,
    Opt_err,
    };

    @@ -874,6 +883,8 @@ static match_table_t tokens = {
    {Opt_norm_unmount, "norm_unmount"},
    {Opt_bulk_read, "bulk_read"},
    {Opt_no_bulk_read, "no_bulk_read"},
    + {Opt_chk_data_crc, "chk_data_crc"},
    + {Opt_no_chk_data_crc, "no_chk_data_crc"},
    {Opt_err, NULL},
    };

    @@ -919,6 +930,14 @@ static int ubifs_parse_options(struct ubifs_info *c, char *options,
    c->mount_opts.bulk_read = 1;
    c->bulk_read = 0;
    break;
    + case Opt_chk_data_crc:
    + c->mount_opts.chk_data_crc = 2;
    + c->no_chk_data_crc = 0;
    + break;
    + case Opt_no_chk_data_crc:
    + c->mount_opts.chk_data_crc = 1;
    + c->no_chk_data_crc = 1;
    + break;
    default:
    ubifs_err("unrecognized mount option \"%s\" "
    "or missing value", p);
    @@ -1027,6 +1046,8 @@ static int mount_ubifs(struct ubifs_info *c)
    goto out_free;
    }

    + c->always_chk_crc = 1;
    +
    err = ubifs_read_superblock(c);
    if (err)
    goto out_free;
    @@ -1168,6 +1189,8 @@ static int mount_ubifs(struct ubifs_info *c)
    if (err)
    goto out_infos;

    + c->always_chk_crc = 0;
    +
    ubifs_msg("mounted UBI device %d, volume %d, name \"%s\"",
    c->vi.ubi_num, c->vi.vol_id, c->vi.name);
    if (mounted_read_only)
    @@ -1313,6 +1336,7 @@ static int ubifs_remount_rw(struct ubifs_info *c)

    mutex_lock(&c->umount_mutex);
    c->remounting_rw = 1;
    + c->always_chk_crc = 1;

    /* Check for enough free space */
    if (ubifs_calc_available(c, c->min_idx_lebs) <= 0) {
    @@ -1381,13 +1405,15 @@ static int ubifs_remount_rw(struct ubifs_info *c)
    c->bgt = NULL;
    ubifs_err("cannot spawn \"%s\", error %d",
    c->bgt_name, err);
    - return err;
    + goto out;
    }
    wake_up_process(c->bgt);

    c->orph_buf = vmalloc(c->leb_size);
    - if (!c->orph_buf)
    - return -ENOMEM;
    + if (!c->orph_buf) {
    + err = -ENOMEM;
    + goto out;
    + }

    /* Check for enough log space */
    lnum = c->lhead_lnum + 1;
    @@ -1414,6 +1440,7 @@ static int ubifs_remount_rw(struct ubifs_info *c)
    dbg_gen("re-mounted read-write");
    c->vfs_sb->s_flags &= ~MS_RDONLY;
    c->remounting_rw = 0;
    + c->always_chk_crc = 0;
    mutex_unlock(&c->umount_mutex);
    return 0;

    @@ -1429,6 +1456,7 @@ out:
    c->ileb_buf = NULL;
    ubifs_lpt_free(c, 1);
    c->remounting_rw = 0;
    + c->always_chk_crc = 0;
    mutex_unlock(&c->umount_mutex);
    return err;
    }
    diff --git a/fs/ubifs/tnc.c b/fs/ubifs/tnc.c
    index d279012..66dc571 100644
    --- a/fs/ubifs/tnc.c
    +++ b/fs/ubifs/tnc.c
    @@ -470,6 +470,10 @@ static int try_read_node(const struct ubifs_info *c, void *buf, int type,
    if (node_len != len)
    return 0;

    + if (type == UBIFS_DATA_NODE && !c->always_chk_crc)
    + if (c->no_chk_data_crc)
    + return 0;
    +
    crc = crc32(UBIFS_CRC32_INIT, buf + 8, node_len - 8);
    node_crc = le32_to_cpu(ch->crc);
    if (crc != node_crc)
    @@ -1687,7 +1691,7 @@ static int validate_data_node(struct ubifs_info *c, void *buf,
    goto out_err;
    }

    - err = ubifs_check_node(c, buf, zbr->lnum, zbr->offs, 0);
    + err = ubifs_check_node(c, buf, zbr->lnum, zbr->offs, 0, 0);
    if (err) {
    ubifs_err("expected node type %d", UBIFS_DATA_NODE);
    goto out;
    diff --git a/fs/ubifs/ubifs.h b/fs/ubifs/ubifs.h
    index 8513239..d6ae3f7 100644
    --- a/fs/ubifs/ubifs.h
    +++ b/fs/ubifs/ubifs.h
    @@ -894,10 +894,12 @@ struct ubifs_orphan {
    * struct ubifs_mount_opts - UBIFS-specific mount options information.
    * @unmount_mode: selected unmount mode (%0 default, %1 normal, %2 fast)
    * @bulk_read: enable bulk-reads
    + * @chk_data_crc: check CRCs when reading data nodes
    */
    struct ubifs_mount_opts {
    unsigned int unmount_mode:2;
    unsigned int bulk_read:2;
    + unsigned int chk_data_crc:2;
    };

    /**
    @@ -1001,6 +1003,9 @@ struct ubifs_mount_opts {
    * @bulk_read: enable bulk-reads
    * @bulk_read_buf_size: buffer size for bulk-reads
    *
    + * @no_chk_data_crc: do not check CRCs when reading data nodes (except during
    + * recovery)
    + *
    * @dirty_pg_cnt: number of dirty pages (not used)
    * @dirty_zn_cnt: number of dirty znodes
    * @clean_zn_cnt: number of clean znodes
    @@ -1138,6 +1143,7 @@ struct ubifs_mount_opts {
    * @rcvrd_mst_node: recovered master node to write when mounting ro to rw
    * @size_tree: inode size information for recovery
    * @remounting_rw: set while remounting from ro to rw (sb flags have MS_RDONLY)
    + * @always_chk_crc: always check CRCs (while mounting and remounting rw)
    * @mount_opts: UBIFS-specific mount options
    *
    * @dbg_buf: a buffer of LEB size used for debugging purposes
    @@ -1244,6 +1250,8 @@ struct ubifs_info {
    int bulk_read;
    int bulk_read_buf_size;

    + int no_chk_data_crc;
    +
    atomic_long_t dirty_pg_cnt;
    atomic_long_t dirty_zn_cnt;
    atomic_long_t clean_zn_cnt;
    @@ -1374,6 +1382,7 @@ struct ubifs_info {
    struct ubifs_mst_node *rcvrd_mst_node;
    struct rb_root size_tree;
    int remounting_rw;
    + int always_chk_crc;
    struct ubifs_mount_opts mount_opts;

    #ifdef CONFIG_UBIFS_FS_DEBUG
    @@ -1416,7 +1425,7 @@ int ubifs_read_node_wbuf(struct ubifs_wbuf *wbuf, void *buf, int type, int len,
    int ubifs_write_node(struct ubifs_info *c, void *node, int len, int lnum,
    int offs, int dtype);
    int ubifs_check_node(const struct ubifs_info *c, const void *buf, int lnum,
    - int offs, int quiet);
    + int offs, int quiet, int chk_crc);
    void ubifs_prepare_node(struct ubifs_info *c, void *buf, int len, int pad);
    void ubifs_prep_grp_node(struct ubifs_info *c, void *node, int len, int last);
    int ubifs_io_init(struct ubifs_info *c);
    --
    1.5.4.1


  8. [PATCH] UBIFS: improve znode splitting rules

    From: Adrian Hunter

    When a key is inserted into a full znode, the znode is split
    into two. Because data node keys are usually consecutive, it is
    better to try to keep them together. This patch does a better
    job of that.

    Signed-off-by: Adrian Hunter
    ---
    fs/ubifs/tnc.c | 54 +++++++++++++++++++++++++++++++++---------------------
    1 files changed, 33 insertions(+), 21 deletions(-)

    diff --git a/fs/ubifs/tnc.c b/fs/ubifs/tnc.c
    index 66dc571..e0878a4 100644
    --- a/fs/ubifs/tnc.c
    +++ b/fs/ubifs/tnc.c
    @@ -1962,7 +1962,7 @@ static int tnc_insert(struct ubifs_info *c, struct ubifs_znode *znode,
    {
    struct ubifs_znode *zn, *zi, *zp;
    int i, keep, move, appending = 0;
    - union ubifs_key *key = &zbr->key;
    + union ubifs_key *key = &zbr->key, *key1;

    ubifs_assert(n >= 0 && n <= c->fanout);

    @@ -2003,20 +2003,33 @@ again:
    zn->level = znode->level;

    /* Decide where to split */
    - if (znode->level == 0 && n == c->fanout &&
    - key_type(c, key) == UBIFS_DATA_KEY) {
    - union ubifs_key *key1;
    -
    - /*
    - * If this is an inode which is being appended - do not split
    - * it because no other zbranches can be inserted between
    - * zbranches of consecutive data nodes anyway.
    - */
    - key1 = &znode->zbranch[n - 1].key;
    - if (key_inum(c, key1) == key_inum(c, key) &&
    - key_type(c, key1) == UBIFS_DATA_KEY &&
    - key_block(c, key1) == key_block(c, key) - 1)
    - appending = 1;
    + if (znode->level == 0 && key_type(c, key) == UBIFS_DATA_KEY) {
    + /* Try not to split consecutive data keys */
    + if (n == c->fanout) {
    + key1 = &znode->zbranch[n - 1].key;
    + if (key_inum(c, key1) == key_inum(c, key) &&
    + key_type(c, key1) == UBIFS_DATA_KEY)
    + appending = 1;
    + } else
    + goto check_split;
    + } else if (appending && n != c->fanout) {
    + /* Try not to split consecutive data keys */
    + appending = 0;
    +check_split:
    + if (n >= (c->fanout + 1) / 2) {
    + key1 = &znode->zbranch[0].key;
    + if (key_inum(c, key1) == key_inum(c, key) &&
    + key_type(c, key1) == UBIFS_DATA_KEY) {
    + key1 = &znode->zbranch[n].key;
    + if (key_inum(c, key1) != key_inum(c, key) ||
    + key_type(c, key1) != UBIFS_DATA_KEY) {
    + keep = n;
    + move = c->fanout - keep;
    + zi = znode;
    + goto do_split;
    + }
    + }
    + }
    }

    if (appending) {
    @@ -2046,6 +2059,8 @@ again:
    zbr->znode->parent = zn;
    }

    +do_split:
    +
    __set_bit(DIRTY_ZNODE, &zn->flags);
    atomic_long_inc(&c->dirty_zn_cnt);

    @@ -2072,14 +2087,11 @@ again:

    /* Insert new znode (produced by spitting) into the parent */
    if (zp) {
    - i = n;
    + if (n == 0 && zi == znode && znode->iip == 0)
    + correct_parent_keys(c, znode);
    +
    /* Locate insertion point */
    n = znode->iip + 1;
    - if (appending && n != c->fanout)
    - appending = 0;
    -
    - if (i == 0 && zi == znode && znode->iip == 0)
    - correct_parent_keys(c, znode);

    /* Tail recursion */
    zbr->key = zn->zbranch[0].key;
    --
    1.5.4.1


  9. [PATCH] UBIFS: fix races in bit-fields

    From: Artem Bityutskiy

    We cannot store bit-fields together if the processes which
    change them may race, unless we serialize them.

    Thus, move the @nospace and @nospace_rp bit-fields away from
    the mount option/constant bit-fields, to avoid races.

    Signed-off-by: Artem Bityutskiy
    ---
    fs/ubifs/ubifs.h | 17 +++++++++--------
    1 files changed, 9 insertions(+), 8 deletions(-)

    diff --git a/fs/ubifs/ubifs.h b/fs/ubifs/ubifs.h
    index 542cbaf..c3ac5a8 100644
    --- a/fs/ubifs/ubifs.h
    +++ b/fs/ubifs/ubifs.h
    @@ -334,7 +334,7 @@ struct ubifs_gced_idx_leb {
    * @bulk_read: non-zero if bulk-read should be used
    * @ui_mutex: serializes inode write-back with the rest of VFS operations,
    * serializes "clean <-> dirty" state changes, serializes bulk-read,
    - * protects @dirty, @ui_size, and @xattr_size
    + * protects @dirty, @bulk_read, @ui_size, and @xattr_size
    * @ui_lock: protects @synced_i_size
    * @synced_i_size: synchronized size of inode, i.e. the value of inode size
    * currently stored on the flash; used only for regular file
    @@ -944,10 +944,6 @@ struct ubifs_mount_opts {
    * @fast_unmount: do not run journal commit before un-mounting
    * @big_lpt: flag that LPT is too big to write whole during commit
    * @check_lpt_free: flag that indicates LPT GC may be needed
    - * @nospace: non-zero if the file-system does not have flash space (used as
    - * optimization)
    - * @nospace_rp: the same as @nospace, but additionally means that even reserved
    - * pool is full
    * @no_chk_data_crc: do not check CRCs when reading data nodes (except during
    * recovery)
    * @bulk_read: enable bulk-reads
    @@ -1017,12 +1013,17 @@ struct ubifs_mount_opts {
    * but which still have to be taken into account because
    * the index has not been committed so far
    * @space_lock: protects @budg_idx_growth, @budg_data_growth, @budg_dd_growth,
    - * @budg_uncommited_idx, @min_idx_lebs, @old_idx_sz, and @lst;
    + * @budg_uncommited_idx, @min_idx_lebs, @old_idx_sz, @lst,
    + * @nospace, and @nospace_rp;
    * @min_idx_lebs: minimum number of LEBs required for the index
    * @old_idx_sz: size of index on flash
    * @calc_idx_sz: temporary variable which is used to calculate new index size
    * (contains accurate new index size at end of TNC commit start)
    * @lst: lprops statistics
    + * @nospace: non-zero if the file-system does not have flash space (used as
    + * optimization)
    + * @nospace_rp: the same as @nospace, but additionally means that even reserved
    + * pool is full
    *
    * @page_budget: budget for a page
    * @inode_budget: budget for an inode
    @@ -1191,8 +1192,6 @@ struct ubifs_info {
    unsigned int fast_unmount:1;
    unsigned int big_lpt:1;
    unsigned int check_lpt_free:1;
    - unsigned int nospace:1;
    - unsigned int nospace_rp:1;
    unsigned int no_chk_data_crc:1;
    unsigned int bulk_read:1;

    @@ -1263,6 +1262,8 @@ struct ubifs_info {
    unsigned long long old_idx_sz;
    unsigned long long calc_idx_sz;
    struct ubifs_lp_stats lst;
    + unsigned int nospace:1;
    + unsigned int nospace_rp:1;

    int page_budget;
    int inode_budget;
    --
    1.5.4.1

    --
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  10. [PATCH] UBIFS: fix commentary

    From: Artem Bityutskiy

    A znode may refer to both data nodes and indexing nodes.

    Signed-off-by: Artem Bityutskiy
    ---
    fs/ubifs/ubifs.h | 4 ++--
    1 files changed, 2 insertions(+), 2 deletions(-)

    diff --git a/fs/ubifs/ubifs.h b/fs/ubifs/ubifs.h
    index c3ac5a8..49b06c9 100644
    --- a/fs/ubifs/ubifs.h
    +++ b/fs/ubifs/ubifs.h
    @@ -707,8 +707,8 @@ struct ubifs_jhead {
    * struct ubifs_zbranch - key/coordinate/length branch stored in znodes.
    * @key: key
    * @znode: znode address in memory
    - * @lnum: LEB number of the indexing node
    - * @offs: offset of the indexing node within @lnum
    + * @lnum: LEB number of the target node (indexing node or data node)
    + * @offs: target node offset within @lnum
    * @len: target node length
    */
    struct ubifs_zbranch {
    --
    1.5.4.1


  11. [PATCH] UBIFS: correct key comparison

    From: Adrian Hunter

    The comparison was working, but more by accident than design.

    Signed-off-by: Adrian Hunter
    ---
    fs/ubifs/tnc_misc.c | 4 ++--
    1 files changed, 2 insertions(+), 2 deletions(-)

    diff --git a/fs/ubifs/tnc_misc.c b/fs/ubifs/tnc_misc.c
    index a25c1cc..b48db99 100644
    --- a/fs/ubifs/tnc_misc.c
    +++ b/fs/ubifs/tnc_misc.c
    @@ -480,8 +480,8 @@ int ubifs_tnc_read_node(struct ubifs_info *c, struct ubifs_zbranch *zbr,
    }

    /* Make sure the key of the read node is correct */
    - key_read(c, key, &key1);
    - if (memcmp(node + UBIFS_KEY_OFFSET, &key1, c->key_len)) {
    + key_read(c, node + UBIFS_KEY_OFFSET, &key1);
    + if (!keys_eq(c, key, &key1)) {
    ubifs_err("bad key in node at LEB %d:%d",
    zbr->lnum, zbr->offs);
    dbg_tnc("looked for key %s found node's key %s",
    --
    1.5.4.1


  12. [PATCH] UBIFS: improve garbage collection

    From: Adrian Hunter

    Make garbage collection try to keep data nodes from the same
    inode together and in ascending order. This improves
    performance when reading those nodes especially when bulk-read
    is used.
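
    The target ordering can be illustrated with a toy userspace model: data
    nodes keyed by (inode number, block number) end up grouped by inode and in
    ascending block order. The real code builds ordered lists while scanning a
    LEB rather than sorting, so this is only a sketch of the goal, not the
    actual GC algorithm:

    ```c
    #include <assert.h>
    #include <stdlib.h>

    /* Toy model of the GC ordering goal: nodes from the same inode end
     * up together, and within an inode in ascending block order. */
    struct node { unsigned long inum; unsigned int blk; };

    static int cmp(const void *a, const void *b)
    {
            const struct node *x = a, *y = b;

            if (x->inum != y->inum)
                    return x->inum < y->inum ? -1 : 1;
            return x->blk < y->blk ? -1 : (x->blk > y->blk);
    }

    int main(void)
    {
            struct node n[] = { {2, 7}, {1, 3}, {2, 1}, {1, 0} };

            qsort(n, 4, sizeof(n[0]), cmp);
            /* inode 1's blocks 0,3 come first, then inode 2's blocks 1,7 */
            assert(n[0].inum == 1 && n[0].blk == 0);
            assert(n[1].inum == 1 && n[1].blk == 3);
            assert(n[2].inum == 2 && n[2].blk == 1);
            assert(n[3].inum == 2 && n[3].blk == 7);
            return 0;
    }
    ```

    Reading nodes laid out this way means a bulk-read can fetch a file's
    consecutive blocks in a single sequential pass over the LEB.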

    Signed-off-by: Adrian Hunter
    ---
    fs/ubifs/gc.c | 82 ++++++++++++++++++++++++++++++++++++++++++++++++++-------
    1 files changed, 72 insertions(+), 10 deletions(-)

    diff --git a/fs/ubifs/gc.c b/fs/ubifs/gc.c
    index a6b633a..0bef650 100644
    --- a/fs/ubifs/gc.c
    +++ b/fs/ubifs/gc.c
    @@ -96,6 +96,48 @@ static int switch_gc_head(struct ubifs_info *c)
    }

    /**
    + * joinup - bring data nodes for an inode together.
    + * @c: UBIFS file-system description object
    + * @sleb: describes scanned LEB
    + * @inum: inode number
    + * @blk: block number
    + * @data: list to which to add data nodes
    + *
    + * This function looks at the first few nodes in the scanned LEB @sleb and adds
    + * them to @data if they are data nodes from @inum and have a larger block
    + * number than @blk. This function returns %0 on success and a negative error
    + * code on failure.
    + */
    +static int joinup(struct ubifs_info *c, struct ubifs_scan_leb *sleb, ino_t inum,
    + unsigned int blk, struct list_head *data)
    +{
    + int err, cnt = 6, lnum = sleb->lnum, offs;
    + struct ubifs_scan_node *snod, *tmp;
    + union ubifs_key *key;
    +
    + list_for_each_entry_safe(snod, tmp, &sleb->nodes, list) {
    + key = &snod->key;
    + if (key_inum(c, key) == inum &&
    + key_type(c, key) == UBIFS_DATA_KEY &&
    + key_block(c, key) > blk) {
    + offs = snod->offs;
    + err = ubifs_tnc_has_node(c, key, 0, lnum, offs, 0);
    + if (err < 0)
    + return err;
    + list_del(&snod->list);
    + if (err) {
    + list_add_tail(&snod->list, data);
    + blk = key_block(c, key);
    + } else
    + kfree(snod);
    + cnt = 6;
    + } else if (--cnt == 0)
    + break;
    + }
    + return 0;
    +}
    +
    +/**
    * move_nodes - move nodes.
    * @c: UBIFS file-system description object
    * @sleb: describes nodes to move
    @@ -116,16 +158,21 @@ static int switch_gc_head(struct ubifs_info *c)
    static int move_nodes(struct ubifs_info *c, struct ubifs_scan_leb *sleb)
    {
    struct ubifs_scan_node *snod, *tmp;
    - struct list_head large, medium, small;
    + struct list_head data, large, medium, small;
    struct ubifs_wbuf *wbuf = &c->jheads[GCHD].wbuf;
    int avail, err, min = INT_MAX;
    + unsigned int blk = 0;
    + ino_t inum = 0;

    + INIT_LIST_HEAD(&data);
    INIT_LIST_HEAD(&large);
    INIT_LIST_HEAD(&medium);
    INIT_LIST_HEAD(&small);

    - list_for_each_entry_safe(snod, tmp, &sleb->nodes, list) {
    - struct list_head *lst;
    + while (!list_empty(&sleb->nodes)) {
    + struct list_head *lst = sleb->nodes.next;
    +
    + snod = list_entry(lst, struct ubifs_scan_node, list);

    ubifs_assert(snod->type != UBIFS_IDX_NODE);
    ubifs_assert(snod->type != UBIFS_REF_NODE);
    @@ -136,7 +183,6 @@ static int move_nodes(struct ubifs_info *c, struct ubifs_scan_leb *sleb)
    if (err < 0)
    goto out;

    - lst = &snod->list;
    list_del(lst);
    if (!err) {
    /* The node is obsolete, remove it from the list */
    @@ -145,15 +191,30 @@ static int move_nodes(struct ubifs_info *c, struct ubifs_scan_leb *sleb)
    }

    /*
    - * Sort the list of nodes so that large nodes go first, and
    - * small nodes go last.
    + * Sort the list of nodes so that data nodes go first, large
    + * nodes go second, and small nodes go last.
    */
    - if (snod->len > MEDIUM_NODE_WM)
    - list_add(lst, &large);
    + if (key_type(c, &snod->key) == UBIFS_DATA_KEY) {
    + if (inum != key_inum(c, &snod->key)) {
    + if (inum) {
    + /*
    + * Try to move data nodes from the same
    + * inode together.
    + */
    + err = joinup(c, sleb, inum, blk, &data);
    + if (err)
    + goto out;
    + }
    + inum = key_inum(c, &snod->key);
    + blk = key_block(c, &snod->key);
    + }
    + list_add_tail(lst, &data);
    + } else if (snod->len > MEDIUM_NODE_WM)
    + list_add_tail(lst, &large);
    else if (snod->len > SMALL_NODE_WM)
    - list_add(lst, &medium);
    + list_add_tail(lst, &medium);
    else
    - list_add(lst, &small);
    + list_add_tail(lst, &small);

    /* And find the smallest node */
    if (snod->len < min)
    @@ -164,6 +225,7 @@ static int move_nodes(struct ubifs_info *c, struct ubifs_scan_leb *sleb)
    * Join the tree lists so that we'd have one roughly sorted list
    * ('large' will be the head of the joined list).
    */
    + list_splice(&data, &large);
    list_splice(&medium, large.prev);
    list_splice(&small, large.prev);

    --
    1.5.4.1


  13. [PATCH] UBIFS: check buffer length when scanning for LPT nodes

    From: Adrian Hunter

    The 'is_a_node()' function was reading from the buffer before
    checking the buffer length, resulting in an oops as
    follows:

    BUG: unable to handle kernel paging request at f8f74002
    IP: [] :ubifs:ubifs_unpack_bits+0xca/0x233
    *pde = 19e95067 *pte = 00000000
    Oops: 0000 [#1] PREEMPT SMP
    Modules linked in: ubifs ubi mtdchar bio2mtd mtd brd video output
    [last unloaded: mtd]

    Pid: 6414, comm: integck Not tainted (2.6.27-rc6ubifs34 #23)
    EIP: 0060:[] EFLAGS: 00010246 CPU: 0
    EIP is at ubifs_unpack_bits+0xca/0x233 [ubifs]
    EAX: 00000000 EBX: f6090630 ECX: d9badcfc EDX: 00000000
    ESI: 00000004 EDI: f8f74002 EBP: d9badcec ESP: d9badcc0
    DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
    Process integck (pid: 6414, ti=d9bac000 task=f727dae0 task.ti=d9bac000)
    Stack: 00000006 f7306240 00000002 00000000 d9badcfc d9badd00 0000001c 00000000
    f6090630 f6090630 f8f74000 d9badd10 f8fa1cc9 00000000 f8f74002 00000000
    f8f74002 f60fe128 f6090630 f8f74000 d9badd68 f8fa1e46 00000000 0001e000
    Call Trace:
    [] ? is_a_node+0x30/0x90 [ubifs]
    [] ? dbg_check_ltab+0x11d/0x5bd [ubifs]
    [] ? ubifs_lpt_start_commit+0x42/0xed3 [ubifs]
    [] ? mutex_unlock+0x8/0xa
    [] ? ubifs_tnc_start_commit+0x1c8/0xedb [ubifs]
    [] ? do_commit+0x187/0x523 [ubifs]
    [] ? mutex_unlock+0x8/0xa
    [] ? bud_wbuf_callback+0x22/0x28 [ubifs]
    [] ? ubifs_run_commit+0x76/0xc0 [ubifs]
    [] ? ubifs_sync_fs+0xd2/0xe6 [ubifs]
    [] ? vfs_quota_sync+0x0/0x17e
    [] ? quota_sync_sb+0x26/0xbb
    [] ? vfs_quota_sync+0x0/0x17e
    [] ? sync_dquots+0x22/0x12c
    [] ? __fsync_super+0x19/0x68
    [] ? fsync_super+0xb/0x19
    [] ? generic_shutdown_super+0x22/0xe7
    [] ? vfs_quota_off+0x0/0x5fd
    [] ? ubifs_kill_sb+0x31/0x35 [ubifs]
    [] ? deactivate_super+0x5e/0x71
    [] ? mntput_no_expire+0x82/0xe4
    [] ? sys_umount+0x4c/0x2f6
    [] ? sys_oldumount+0x19/0x1b
    [] ? sysenter_do_call+0x12/0x25
    =======================
    Code: c1 f8 03 8d 04 07 8b 4d e8 89 01 8b 45 e4 89 10 89 d8 89 f1 d3 e8 85 c0
    74 07 29 d6 83 fe 20 75 2a 89 d8 83 c4 20 5b 5e 5f 5d
    EIP: [] ubifs_unpack_bits+0xca/0x233 [ubifs] SS:ESP 0068:d9badcc0
    ---[ end trace 1f02572436518c13 ]---
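
    The underlying pattern of the fix is the standard one: validate that the
    buffer holds at least the smallest possible node header before unpacking
    any field from it. A userspace sketch of that guard follows; the sizes and
    the type check are illustrative only, not UBIFS's real constants:

    ```c
    #include <assert.h>

    /* Illustrative sizes -- NOT the real UBIFS constants. */
    #define CRC_BYTES 2
    #define TYPE_BITS 4

    static int might_be_node(const unsigned char *buf, int len)
    {
            /* Reject before touching buf: the smallest header needs the
             * CRC plus enough bytes to hold the packed node-type field. */
            if (len < CRC_BYTES + (TYPE_BITS + 7) / 8)
                    return 0;
            return buf[CRC_BYTES] != 0xff;  /* stand-in for the real type check */
    }

    int main(void)
    {
            unsigned char node[8] = {0, 0, 0x5a};

            assert(might_be_node(node, sizeof(node)) == 1);
            assert(might_be_node(node, 1) == 0);  /* short buffer rejected early */
            return 0;
    }
    ```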

    Signed-off-by: Adrian Hunter
    ---
    fs/ubifs/lpt_commit.c | 2 ++
    1 files changed, 2 insertions(+), 0 deletions(-)

    diff --git a/fs/ubifs/lpt_commit.c b/fs/ubifs/lpt_commit.c
    index 8546865..eed5a00 100644
    --- a/fs/ubifs/lpt_commit.c
    +++ b/fs/ubifs/lpt_commit.c
    @@ -1089,6 +1089,8 @@ static int is_a_node(struct ubifs_info *c, uint8_t *buf, int len)
    int pos = 0, node_type, node_len;
    uint16_t crc, calc_crc;

    + if (len < UBIFS_LPT_CRC_BYTES + (UBIFS_LPT_TYPE_BITS + 7) / 8)
    + return 0;
    node_type = ubifs_unpack_bits(&addr, &pos, UBIFS_LPT_TYPE_BITS);
    if (node_type == UBIFS_LPT_NOT_A_NODE)
    return 0;
    --
    1.5.4.1


  14. [PATCH] UBIFS: use an IS_ERR test rather than a NULL test

    From: Julien Brunel

    In case of error, the function kthread_create returns an ERR pointer,
    but never a NULL pointer. So a NULL test that comes before an
    IS_ERR test is dead code and should be deleted.

    The semantic match that finds this problem is as follows:
    (http://www.emn.fr/x-info/coccinelle/)

    //
    @match_bad_null_test@
    expression x, E;
    statement S1, S2;
    @@
    x = kthread_create(...)
    ... when != x = E
    * if (x == NULL)
    S1 else S2
    //
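
    For reference, the error-pointer convention the patch relies on can be
    reproduced in user space. This is a simplified sketch of the helpers from
    include/linux/err.h, together with a hypothetical stand-in for
    kthread_create(); fake_kthread_create() and its failure flag are
    illustrative only:

    ```c
    #include <assert.h>

    /* Userspace re-creation of the kernel's error-pointer helpers
     * (simplified version of include/linux/err.h). */
    #define MAX_ERRNO 4095UL

    static inline void *ERR_PTR(long error) { return (void *)error; }
    static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
    static inline int IS_ERR(const void *ptr)
    {
            return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
    }

    /* Stand-in for kthread_create(): on failure it returns an ERR_PTR
     * encoding a negative errno, never NULL -- which is why the deleted
     * NULL test could never fire. */
    static int dummy;
    static void *fake_kthread_create(int fail)
    {
            return fail ? ERR_PTR(-12) /* -ENOMEM */ : (void *)&dummy;
    }

    int main(void)
    {
            void *ok = fake_kthread_create(0);
            void *bad = fake_kthread_create(1);

            assert(!IS_ERR(ok) && ok != NULL);
            assert(IS_ERR(bad) && bad != NULL);  /* failure is not NULL either */
            assert(PTR_ERR(bad) == -12);
            return 0;
    }
    ```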


    Signed-off-by: Julien Brunel
    Signed-off-by: Julia Lawall
    Signed-off-by: Artem Bityutskiy
    ---
    fs/ubifs/super.c | 4 ----
    1 files changed, 0 insertions(+), 4 deletions(-)

    diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
    index 667c72d..d87b0cf 100644
    --- a/fs/ubifs/super.c
    +++ b/fs/ubifs/super.c
    @@ -1032,8 +1032,6 @@ static int mount_ubifs(struct ubifs_info *c)

    /* Create background thread */
    c->bgt = kthread_create(ubifs_bg_thread, c, c->bgt_name);
    - if (!c->bgt)
    - c->bgt = ERR_PTR(-EINVAL);
    if (IS_ERR(c->bgt)) {
    err = PTR_ERR(c->bgt);
    c->bgt = NULL;
    @@ -1347,8 +1345,6 @@ static int ubifs_remount_rw(struct ubifs_info *c)

    /* Create background thread */
    c->bgt = kthread_create(ubifs_bg_thread, c, c->bgt_name);
    - if (!c->bgt)
    - c->bgt = ERR_PTR(-EINVAL);
    if (IS_ERR(c->bgt)) {
    err = PTR_ERR(c->bgt);
    c->bgt = NULL;
    --
    1.5.4.1


  15. [PATCH] UBIFS: fix bulk-read handling uptodate pages

    From: Adrian Hunter

    Bulk-read skips uptodate pages, but doing so was putting its
    array index out of step and causing subsequent pages to be
    treated as holes.
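
    The shape of the fix can be modelled in user space: when the next cached
    node's block number lags behind the block being filled (because an
    uptodate page was skipped), the loop must advance its index past the stale
    entries instead of declaring a hole. This toy function is illustrative
    only and does not mirror UBIFS's actual data structures:

    ```c
    #include <assert.h>

    /* node_blks: sorted block numbers for which data nodes were found.
     * Count how many of the nblks blocks starting at first_blk have no
     * matching node, advancing past entries that fall behind. */
    static int count_holes(const int *node_blks, int cnt, int first_blk,
                           int nblks)
    {
            int nn = 0, holes = 0;

            for (int blk = first_blk; blk < first_blk + nblks; blk++) {
                    while (nn < cnt && node_blks[nn] < blk)
                            nn++;            /* skip stale entries, keep index in step */
                    if (nn >= cnt || node_blks[nn] != blk)
                            holes++;         /* genuinely missing block */
                    else
                            nn++;
            }
            return holes;
    }

    int main(void)
    {
            /* Nodes exist for blocks 0,1,4,5; we fill blocks 4..5 after
             * skipping an uptodate page that covered blocks 0..3. */
            int blks[] = {0, 1, 4, 5};

            assert(count_holes(blks, 4, 4, 2) == 0);  /* no spurious holes */
            assert(count_holes(blks, 4, 2, 2) == 2);  /* blocks 2,3 are real holes */
            return 0;
    }
    ```

    Without the skip-ahead step, the stale entries for blocks 0 and 1 would
    never match, and blocks 4 and 5 would wrongly be zero-filled as holes.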

    Signed-off-by: Adrian Hunter
    ---
    fs/ubifs/file.c | 16 +++++++++++-----
    1 files changed, 11 insertions(+), 5 deletions(-)

    diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
    index 2f20a49..51cf511 100644
    --- a/fs/ubifs/file.c
    +++ b/fs/ubifs/file.c
    @@ -595,7 +595,7 @@ out:
    static int populate_page(struct ubifs_info *c, struct page *page,
    struct bu_info *bu, int *n)
    {
    - int i = 0, nn = *n, offs = bu->zbranch[0].offs, hole = 1, read = 0;
    + int i = 0, nn = *n, offs = bu->zbranch[0].offs, hole = 0, read = 0;
    struct inode *inode = page->mapping->host;
    loff_t i_size = i_size_read(inode);
    unsigned int page_block;
    @@ -609,6 +609,7 @@ static int populate_page(struct ubifs_info *c, struct page *page,

    end_index = (i_size - 1) >> PAGE_CACHE_SHIFT;
    if (!i_size || page->index > end_index) {
    + hole = 1;
    memset(addr, 0, PAGE_CACHE_SIZE);
    goto out_hole;
    }
    @@ -617,10 +618,10 @@ static int populate_page(struct ubifs_info *c, struct page *page,
    while (1) {
    int err, len, out_len, dlen;

    - if (nn >= bu->cnt ||
    - key_block(c, &bu->zbranch[nn].key) != page_block)
    + if (nn >= bu->cnt) {
    + hole = 1;
    memset(addr, 0, UBIFS_BLOCK_SIZE);
    - else {
    + } else if (key_block(c, &bu->zbranch[nn].key) == page_block) {
    struct ubifs_data_node *dn;

    dn = bu->buf + (bu->zbranch[nn].offs - offs);
    @@ -643,8 +644,13 @@ static int populate_page(struct ubifs_info *c, struct page *page,
    memset(addr + len, 0, UBIFS_BLOCK_SIZE - len);

    nn += 1;
    - hole = 0;
    read = (i << UBIFS_BLOCK_SHIFT) + len;
    + } else if (key_block(c, &bu->zbranch[nn].key) < page_block) {
    + nn += 1;
    + continue;
    + } else {
    + hole = 1;
    + memset(addr, 0, UBIFS_BLOCK_SIZE);
    }
    if (++i >= UBIFS_BLOCKS_PER_PAGE)
    break;
    --
    1.5.4.1


  16. [PATCH] UBIFS: remove unneeded unlikely()

    From: Hirofumi Nakagawa

    The IS_ERR() macro already contains unlikely(), so do not use
    constructions like 'if (unlikely(IS_ERR(...)))'.
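
    The redundancy is visible in the macro itself. A userspace sketch follows,
    using __builtin_expect, which is what the kernel's unlikely() expands to
    under GCC; the constant and the comparison are simplified from
    include/linux/err.h:

    ```c
    #include <assert.h>

    #define unlikely(x) __builtin_expect(!!(x), 0)
    #define MAX_ERRNO 4095UL

    /* IS_ERR() already carries the branch hint internally ... */
    static inline int IS_ERR(const void *ptr)
    {
            return unlikely((unsigned long)ptr >= (unsigned long)-MAX_ERRNO);
    }

    int main(void)
    {
            void *err = (void *)-5L;
            void *ok = (void *)4096L;

            /* ... so wrapping the call in unlikely() again adds nothing: */
            assert(unlikely(IS_ERR(err)) == IS_ERR(err));
            assert(IS_ERR(err) == 1);
            assert(IS_ERR(ok) == 0);
            return 0;
    }
    ```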

    Signed-off-by: Hirofumi Nakagawa
    Signed-off-by: Artem Bityutskiy
    ---
    fs/ubifs/find.c | 4 ++--
    fs/ubifs/gc.c | 8 ++++----
    fs/ubifs/tnc.c | 4 ++--
    fs/ubifs/xattr.c | 2 +-
    4 files changed, 9 insertions(+), 9 deletions(-)

    diff --git a/fs/ubifs/find.c b/fs/ubifs/find.c
    index 47814cd..717d79c 100644
    --- a/fs/ubifs/find.c
    +++ b/fs/ubifs/find.c
    @@ -901,11 +901,11 @@ static int get_idx_gc_leb(struct ubifs_info *c)
    * it is needed now for this commit.
    */
    lp = ubifs_lpt_lookup_dirty(c, lnum);
    - if (unlikely(IS_ERR(lp)))
    + if (IS_ERR(lp))
    return PTR_ERR(lp);
    lp = ubifs_change_lp(c, lp, LPROPS_NC, LPROPS_NC,
    lp->flags | LPROPS_INDEX, -1);
    - if (unlikely(IS_ERR(lp)))
    + if (IS_ERR(lp))
    return PTR_ERR(lp);
    dbg_find("LEB %d, dirty %d and free %d flags %#x",
    lp->lnum, lp->dirty, lp->free, lp->flags);
    diff --git a/fs/ubifs/gc.c b/fs/ubifs/gc.c
    index 02aba36..a6b633a 100644
    --- a/fs/ubifs/gc.c
    +++ b/fs/ubifs/gc.c
    @@ -653,7 +653,7 @@ int ubifs_gc_start_commit(struct ubifs_info *c)
    */
    while (1) {
    lp = ubifs_fast_find_freeable(c);
    - if (unlikely(IS_ERR(lp))) {
    + if (IS_ERR(lp)) {
    err = PTR_ERR(lp);
    goto out;
    }
    @@ -665,7 +665,7 @@ int ubifs_gc_start_commit(struct ubifs_info *c)
    if (err)
    goto out;
    lp = ubifs_change_lp(c, lp, c->leb_size, 0, lp->flags, 0);
    - if (unlikely(IS_ERR(lp))) {
    + if (IS_ERR(lp)) {
    err = PTR_ERR(lp);
    goto out;
    }
    @@ -680,7 +680,7 @@ int ubifs_gc_start_commit(struct ubifs_info *c)
    /* Record index freeable LEBs for unmapping after commit */
    while (1) {
    lp = ubifs_fast_find_frdi_idx(c);
    - if (unlikely(IS_ERR(lp))) {
    + if (IS_ERR(lp)) {
    err = PTR_ERR(lp);
    goto out;
    }
    @@ -696,7 +696,7 @@ int ubifs_gc_start_commit(struct ubifs_info *c)
    /* Don't release the LEB until after the next commit */
    flags = (lp->flags | LPROPS_TAKEN) ^ LPROPS_INDEX;
    lp = ubifs_change_lp(c, lp, c->leb_size, 0, flags, 1);
    - if (unlikely(IS_ERR(lp))) {
    + if (IS_ERR(lp)) {
    err = PTR_ERR(lp);
    kfree(idx_gc);
    goto out;
    diff --git a/fs/ubifs/tnc.c b/fs/ubifs/tnc.c
    index 7634c59..ba13c92 100644
    --- a/fs/ubifs/tnc.c
    +++ b/fs/ubifs/tnc.c
    @@ -284,7 +284,7 @@ static struct ubifs_znode *dirty_cow_znode(struct ubifs_info *c,
    }

    zn = copy_znode(c, znode);
    - if (unlikely(IS_ERR(zn)))
    + if (IS_ERR(zn))
    return zn;

    if (zbr->len) {
    @@ -1128,7 +1128,7 @@ static struct ubifs_znode *dirty_cow_bottom_up(struct ubifs_info *c,
    ubifs_assert(znode == c->zroot.znode);
    znode = dirty_cow_znode(c, &c->zroot);
    }
    - if (unlikely(IS_ERR(znode)) || !p)
    + if (IS_ERR(znode) || !p)
    break;
    ubifs_assert(path[p - 1] >= 0);
    ubifs_assert(path[p - 1] < znode->child_cnt);
    diff --git a/fs/ubifs/xattr.c b/fs/ubifs/xattr.c
    index 649bec7..cfd31e2 100644
    --- a/fs/ubifs/xattr.c
    +++ b/fs/ubifs/xattr.c
    @@ -446,7 +446,7 @@ ssize_t ubifs_listxattr(struct dentry *dentry, char *buffer, size_t size)
    int type;

    xent = ubifs_tnc_next_ent(c, &key, &nm);
    - if (unlikely(IS_ERR(xent))) {
    + if (IS_ERR(xent)) {
    err = PTR_ERR(xent);
    break;
    }
    --
    1.5.4.1


  17. [PATCH] UBIFS: add a print, fix comments and more minor stuff

    From: Artem Bityutskiy

    This commit adds a reserved pool size print and tweaks the
    prints to make them look nicer.

    It also fixes and cleans up some comments.

    Additionally, it deletes some blank lines to make the code look
    a little nicer.

    In other words, nothing essential.

    Signed-off-by: Artem Bityutskiy
    ---
    fs/ubifs/budget.c | 26 ++++++++++++++------------
    fs/ubifs/lprops.c | 6 +-----
    fs/ubifs/super.c | 16 +++++++++-------
    3 files changed, 24 insertions(+), 24 deletions(-)

    diff --git a/fs/ubifs/budget.c b/fs/ubifs/budget.c
    index 73db464..1a4973e 100644
    --- a/fs/ubifs/budget.c
    +++ b/fs/ubifs/budget.c
    @@ -414,19 +414,21 @@ static int do_budget_space(struct ubifs_info *c)
    * @c->lst.empty_lebs + @c->freeable_cnt + @c->idx_gc_cnt -
    * @c->lst.taken_empty_lebs
    *
    - * @empty_lebs are available because they are empty. @freeable_cnt are
    - * available because they contain only free and dirty space and the
    - * index allocation always occurs after wbufs are synch'ed.
    - * @idx_gc_cnt are available because they are index LEBs that have been
    - * garbage collected (including trivial GC) and are awaiting the commit
    - * before they can be unmapped - note that the in-the-gaps method will
    - * grab these if it needs them. @taken_empty_lebs are empty_lebs that
    - * have already been allocated for some purpose (also includes those
    - * LEBs on the @idx_gc list).
    + * @c->lst.empty_lebs are available because they are empty.
    + * @c->freeable_cnt are available because they contain only free and
    + * dirty space, @c->idx_gc_cnt are available because they are index
    + * LEBs that have been garbage collected and are awaiting the commit
    + * before they can be used. And the in-the-gaps method will grab these
    + * if it needs them. @c->lst.taken_empty_lebs are empty LEBs that have
    + * already been allocated for some purpose.
    *
    - * Note, @taken_empty_lebs may temporarily be higher by one because of
    - * the way we serialize LEB allocations and budgeting. See a comment in
    - * 'ubifs_find_free_space()'.
    + * Note, @c->idx_gc_cnt is included to both @c->lst.empty_lebs (because
    + * these LEBs are empty) and to @c->lst.taken_empty_lebs (because they
    + * are taken until after the commit).
    + *
    + * Note, @c->lst.taken_empty_lebs may temporarily be higher by one
    + * because of the way we serialize LEB allocations and budgeting. See a
    + * comment in 'ubifs_find_free_space()'.
    */
    lebs = c->lst.empty_lebs + c->freeable_cnt + c->idx_gc_cnt -
    c->lst.taken_empty_lebs;
    diff --git a/fs/ubifs/lprops.c b/fs/ubifs/lprops.c
    index 2ba93da..3659b88 100644
    --- a/fs/ubifs/lprops.c
    +++ b/fs/ubifs/lprops.c
    @@ -125,6 +125,7 @@ static void adjust_lpt_heap(struct ubifs_info *c, struct ubifs_lpt_heap *heap,
    }
    }
    }
    +
    /* Not greater than parent, so compare to children */
    while (1) {
    /* Compare to left child */
    @@ -576,7 +577,6 @@ const struct ubifs_lprops *ubifs_change_lp(struct ubifs_info *c,
    ubifs_assert(!(lprops->free & 7) && !(lprops->dirty & 7));

    spin_lock(&c->space_lock);
    -
    if ((lprops->flags & LPROPS_TAKEN) && lprops->free == c->leb_size)
    c->lst.taken_empty_lebs -= 1;

    @@ -637,11 +637,8 @@ const struct ubifs_lprops *ubifs_change_lp(struct ubifs_info *c,
    c->lst.taken_empty_lebs += 1;

    change_category(c, lprops);
    -
    c->idx_gc_cnt += idx_gc_cnt;
    -
    spin_unlock(&c->space_lock);
    -
    return lprops;
    }

    @@ -1262,7 +1259,6 @@ static int scan_check_cb(struct ubifs_info *c,
    }

    ubifs_scan_destroy(sleb);
    -
    return LPT_SCAN_CONTINUE;

    out_print:
    diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
    index 3f49020..667c72d 100644
    --- a/fs/ubifs/super.c
    +++ b/fs/ubifs/super.c
    @@ -1144,19 +1144,21 @@ static int mount_ubifs(struct ubifs_info *c)
    if (mounted_read_only)
    ubifs_msg("mounted read-only");
    x = (long long)c->main_lebs * c->leb_size;
    - ubifs_msg("file system size: %lld bytes (%lld KiB, %lld MiB, %d LEBs)",
    - x, x >> 10, x >> 20, c->main_lebs);
    + ubifs_msg("file system size: %lld bytes (%lld KiB, %lld MiB, %d "
    + "LEBs)", x, x >> 10, x >> 20, c->main_lebs);
    x = (long long)c->log_lebs * c->leb_size + c->max_bud_bytes;
    - ubifs_msg("journal size: %lld bytes (%lld KiB, %lld MiB, %d LEBs)",
    - x, x >> 10, x >> 20, c->log_lebs + c->max_bud_cnt);
    - ubifs_msg("default compressor: %s", ubifs_compr_name(c->default_compr));
    - ubifs_msg("media format %d, latest format %d",
    + ubifs_msg("journal size: %lld bytes (%lld KiB, %lld MiB, %d "
    + "LEBs)", x, x >> 10, x >> 20, c->log_lebs + c->max_bud_cnt);
    + ubifs_msg("media format: %d (latest is %d)",
    c->fmt_version, UBIFS_FORMAT_VERSION);
    + ubifs_msg("default compressor: %s", ubifs_compr_name(c->default_compr));
    + ubifs_msg("reserved pool size: %llu bytes (%llu KiB)",
    + c->report_rp_size, c->report_rp_size >> 10);

    dbg_msg("compiled on: " __DATE__ " at " __TIME__);
    dbg_msg("min. I/O unit size: %d bytes", c->min_io_size);
    dbg_msg("LEB size: %d bytes (%d KiB)",
    - c->leb_size, c->leb_size / 1024);
    + c->leb_size, c->leb_size >> 10);
    dbg_msg("data journal heads: %d",
    c->jhead_cnt - NONDATA_JHEADS_CNT);
    dbg_msg("UUID: %02X%02X%02X%02X-%02X%02X"
    --
    1.5.4.1


  18. [PATCH] UBIFS: add bulk-read facility

    From: Adrian Hunter

    Some flash media are capable of reading sequentially at faster rates.
    The UBIFS bulk-read facility is designed to take advantage of that by
    reading, in one go, consecutive data nodes that are also located
    consecutively in the same LEB.

    Read speed on an ARM platform with OneNAND goes from 17 MiB/s to
    19 MiB/s.

    Signed-off-by: Adrian Hunter
    ---
    Documentation/filesystems/ubifs.txt | 3 +
    fs/ubifs/file.c | 248 ++++++++++++++++++++++++++++++
    fs/ubifs/key.h | 22 +++-
    fs/ubifs/super.c | 31 ++++
    fs/ubifs/tnc.c | 283 +++++++++++++++++++++++++++++++++++
    fs/ubifs/ubifs.h | 45 ++++++-
    6 files changed, 629 insertions(+), 3 deletions(-)

    diff --git a/Documentation/filesystems/ubifs.txt b/Documentation/filesystems/ubifs.txt
    index 6a0d70a..340512c 100644
    --- a/Documentation/filesystems/ubifs.txt
    +++ b/Documentation/filesystems/ubifs.txt
    @@ -86,6 +86,9 @@ norm_unmount (*) commit on unmount; the journal is committed
    fast_unmount do not commit on unmount; this option makes
    unmount faster, but the next mount slower
    because of the need to replay the journal.
    +bulk_read read more in one go to take advantage of flash
    + media that read faster sequentially
    +no_bulk_read (*) do not bulk-read


    Quick usage instructions
    diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
    index 3d698e2..cdcfe95 100644
    --- a/fs/ubifs/file.c
    +++ b/fs/ubifs/file.c
    @@ -577,8 +577,256 @@ out:
    return copied;
    }

    +/**
    + * populate_page - copy data nodes into a page for bulk-read.
    + * @c: UBIFS file-system description object
    + * @page: page
    + * @bu: bulk-read information
    + * @n: next zbranch slot
    + *
    + * This function returns %0 on success and a negative error code on failure.
    + */
    +static int populate_page(struct ubifs_info *c, struct page *page,
    + struct bu_info *bu, int *n)
    +{
    + int i = 0, nn = *n, offs = bu->zbranch[0].offs, hole = 1, read = 0;
    + struct inode *inode = page->mapping->host;
    + loff_t i_size = i_size_read(inode);
    + unsigned int page_block;
    + void *addr, *zaddr;
    + pgoff_t end_index;
    +
    + dbg_gen("ino %lu, pg %lu, i_size %lld, flags %#lx",
    + inode->i_ino, page->index, i_size, page->flags);
    +
    + addr = zaddr = kmap(page);
    +
    + end_index = (i_size + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
    + if (!i_size || page->index > end_index) {
    + memset(addr, 0, PAGE_CACHE_SIZE);
    + goto out_hole;
    + }
    +
    + page_block = page->index << UBIFS_BLOCKS_PER_PAGE_SHIFT;
    + while (1) {
    + int err, len, out_len, dlen;
    +
    + if (nn >= bu->cnt ||
    + key_block(c, &bu->zbranch[nn].key) != page_block)
    + memset(addr, 0, UBIFS_BLOCK_SIZE);
    + else {
    + struct ubifs_data_node *dn;
    +
    + dn = bu->buf + (bu->zbranch[nn].offs - offs);
    +
    + ubifs_assert(dn->ch.sqnum >
    + ubifs_inode(inode)->creat_sqnum);
    +
    + len = le32_to_cpu(dn->size);
    + if (len <= 0 || len > UBIFS_BLOCK_SIZE)
    + goto out_err;
    +
    + dlen = le32_to_cpu(dn->ch.len) - UBIFS_DATA_NODE_SZ;
    + out_len = UBIFS_BLOCK_SIZE;
    + err = ubifs_decompress(&dn->data, dlen, addr, &out_len,
    + le16_to_cpu(dn->compr_type));
    + if (err || len != out_len)
    + goto out_err;
    +
    + if (len < UBIFS_BLOCK_SIZE)
    + memset(addr + len, 0, UBIFS_BLOCK_SIZE - len);
    +
    + nn += 1;
    + hole = 0;
    + read = (i << UBIFS_BLOCK_SHIFT) + len;
    + }
    + if (++i >= UBIFS_BLOCKS_PER_PAGE)
    + break;
    + addr += UBIFS_BLOCK_SIZE;
    + page_block += 1;
    + }
    +
    + if (end_index == page->index) {
    + int len = i_size & (PAGE_CACHE_SIZE - 1);
    +
    + if (len < read)
    + memset(zaddr + len, 0, read - len);
    + }
    +
    +out_hole:
    + if (hole) {
    + SetPageChecked(page);
    + dbg_gen("hole");
    + }
    +
    + SetPageUptodate(page);
    + ClearPageError(page);
    + flush_dcache_page(page);
    + kunmap(page);
    + *n = nn;
    + return 0;
    +
    +out_err:
    + ClearPageUptodate(page);
    + SetPageError(page);
    + flush_dcache_page(page);
    + kunmap(page);
    + ubifs_err("bad data node (block %u, inode %lu)",
    + page_block, inode->i_ino);
    + return -EINVAL;
    +}
    +
    +/**
    + * ubifs_do_bulk_read - do bulk-read.
    + * @c: UBIFS file-system description object
    + * @page1: first page
    + *
    + * This function returns %1 if the bulk-read is done, otherwise %0 is returned.
    + */
    +static int ubifs_do_bulk_read(struct ubifs_info *c, struct page *page1)
    +{
    + pgoff_t offset = page1->index, end_index;
    + struct address_space *mapping = page1->mapping;
    + struct inode *inode = mapping->host;
    + struct ubifs_inode *ui = ubifs_inode(inode);
    + struct bu_info *bu;
    + int err, page_idx, page_cnt, ret = 0, n = 0;
    + loff_t isize;
    +
    + bu = kmalloc(sizeof(struct bu_info), GFP_NOFS);
    + if (!bu)
    + return 0;
    +
    + bu->buf_len = c->bulk_read_buf_size;
    + bu->buf = kmalloc(bu->buf_len, GFP_NOFS);
    + if (!bu->buf)
    + goto out_free;
    +
    + data_key_init(c, &bu->key, inode->i_ino,
    + offset << UBIFS_BLOCKS_PER_PAGE_SHIFT);
    +
    + err = ubifs_tnc_get_bu_keys(c, bu);
    + if (err)
    + goto out_warn;
    +
    + if (bu->eof) {
    + /* Turn off bulk-read at the end of the file */
    + ui->read_in_a_row = 1;
    + ui->bulk_read = 0;
    + }
    +
    + page_cnt = bu->blk_cnt >> UBIFS_BLOCKS_PER_PAGE_SHIFT;
    + if (!page_cnt) {
    + /*
    + * This happens when there are multiple blocks per page and the
    + * blocks for the first page we are looking for, are not
    + * together. If all the pages were like this, bulk-read would
    + * reduce performance, so we turn it off for a while.
    + */
    + ui->read_in_a_row = 0;
    + ui->bulk_read = 0;
    + goto out_free;
    + }
    +
    + if (bu->cnt) {
    + err = ubifs_tnc_bulk_read(c, bu);
    + if (err)
    + goto out_warn;
    + }
    +
    + err = populate_page(c, page1, bu, &n);
    + if (err)
    + goto out_warn;
    +
    + unlock_page(page1);
    + ret = 1;
    +
    + isize = i_size_read(inode);
    + if (isize == 0)
    + goto out_free;
    + end_index = ((isize - 1) >> PAGE_CACHE_SHIFT);
    +
    + for (page_idx = 1; page_idx < page_cnt; page_idx++) {
    + pgoff_t page_offset = offset + page_idx;
    + struct page *page;
    +
    + if (page_offset > end_index)
    + break;
    + page = find_or_create_page(mapping, page_offset,
    + GFP_NOFS | __GFP_COLD);
    + if (!page)
    + break;
    + if (!PageUptodate(page))
    + err = populate_page(c, page, bu, &n);
    + unlock_page(page);
    + page_cache_release(page);
    + if (err)
    + break;
    + }
    +
    + ui->last_page_read = offset + page_idx - 1;
    +
    +out_free:
    + kfree(bu->buf);
    + kfree(bu);
    + return ret;
    +
    +out_warn:
    + ubifs_warn("ignoring error %d and skipping bulk-read", err);
    + goto out_free;
    +}
    +
    +/**
    + * ubifs_bulk_read - determine whether to bulk-read and, if so, do it.
    + * @page: page from which to start bulk-read.
    + *
    + * Some flash media are capable of reading sequentially at faster rates. UBIFS
    + * bulk-read facility is designed to take advantage of that, by reading in one
    + * go consecutive data nodes that are also located consecutively in the same
    + * LEB. This function returns %1 if a bulk-read is done and %0 otherwise.
    + */
    +static int ubifs_bulk_read(struct page *page)
    +{
    + struct inode *inode = page->mapping->host;
    + struct ubifs_info *c = inode->i_sb->s_fs_info;
    + struct ubifs_inode *ui = ubifs_inode(inode);
    + pgoff_t index = page->index, last_page_read = ui->last_page_read;
    + int ret = 0;
    +
    + ui->last_page_read = index;
    +
    + if (!c->bulk_read)
    + return 0;
    + /*
    + * Bulk-read is protected by ui_mutex, but it is an optimization, so
    + * don't bother if we cannot lock the mutex.
    + */
    + if (!mutex_trylock(&ui->ui_mutex))
    + return 0;
    + if (index != last_page_read + 1) {
    + /* Turn off bulk-read if we stop reading sequentially */
    + ui->read_in_a_row = 1;
    + if (ui->bulk_read)
    + ui->bulk_read = 0;
    + goto out_unlock;
    + }
    + if (!ui->bulk_read) {
    + ui->read_in_a_row += 1;
    + if (ui->read_in_a_row < 3)
    + goto out_unlock;
    + /* Three reads in a row, so switch on bulk-read */
    + ui->bulk_read = 1;
    + }
    + ret = ubifs_do_bulk_read(c, page);
    +out_unlock:
    + mutex_unlock(&ui->ui_mutex);
    + return ret;
    +}
    +
    static int ubifs_readpage(struct file *file, struct page *page)
    {
    + if (ubifs_bulk_read(page))
    + return 0;
    do_readpage(page);
    unlock_page(page);
    return 0;
    diff --git a/fs/ubifs/key.h b/fs/ubifs/key.h
    index 8f74760..9ee6508 100644
    --- a/fs/ubifs/key.h
    +++ b/fs/ubifs/key.h
    @@ -484,7 +484,7 @@ static inline void key_copy(const struct ubifs_info *c,
    * @key2: the second key to compare
    *
    * This function compares 2 keys and returns %-1 if @key1 is less than
    - * @key2, 0 if the keys are equivalent and %1 if @key1 is greater than @key2.
    + * @key2, %0 if the keys are equivalent and %1 if @key1 is greater than @key2.
    */
    static inline int keys_cmp(const struct ubifs_info *c,
    const union ubifs_key *key1,
    @@ -503,6 +503,26 @@ static inline int keys_cmp(const struct ubifs_info *c,
    }

    /**
    + * keys_eq - determine if keys are equivalent.
    + * @c: UBIFS file-system description object
    + * @key1: the first key to compare
    + * @key2: the second key to compare
    + *
    + * This function compares 2 keys and returns %1 if @key1 is equal to @key2 and
    + * %0 if not.
    + */
    +static inline int keys_eq(const struct ubifs_info *c,
    + const union ubifs_key *key1,
    + const union ubifs_key *key2)
    +{
    + if (key1->u32[0] != key2->u32[0])
    + return 0;
    + if (key1->u32[1] != key2->u32[1])
    + return 0;
    + return 1;
    +}
    +
    +/**
    * is_hash_key - is a key vulnerable to hash collisions.
    * @c: UBIFS file-system description object
    * @key: key
    diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
    index d87b0cf..b1c57e8 100644
    --- a/fs/ubifs/super.c
    +++ b/fs/ubifs/super.c
    @@ -401,6 +401,11 @@ static int ubifs_show_options(struct seq_file *s, struct vfsmount *mnt)
    else if (c->mount_opts.unmount_mode == 1)
    seq_printf(s, ",norm_unmount");

    + if (c->mount_opts.bulk_read == 2)
    + seq_printf(s, ",bulk_read");
    + else if (c->mount_opts.bulk_read == 1)
    + seq_printf(s, ",no_bulk_read");
    +
    return 0;
    }

    @@ -538,6 +543,18 @@ static int init_constants_early(struct ubifs_info *c)
    * calculations when reporting free space.
    */
    c->leb_overhead = c->leb_size % UBIFS_MAX_DATA_NODE_SZ;
    + /* Buffer size for bulk-reads */
    + c->bulk_read_buf_size = UBIFS_MAX_BULK_READ * UBIFS_MAX_DATA_NODE_SZ;
    + if (c->bulk_read_buf_size > c->leb_size)
    + c->bulk_read_buf_size = c->leb_size;
    + if (c->bulk_read_buf_size > 128 * 1024) {
    + /* Check if we can kmalloc more than 128KiB */
    + void *try = kmalloc(c->bulk_read_buf_size, GFP_KERNEL);
    +
    + kfree(try);
    + if (!try)
    + c->bulk_read_buf_size = 128 * 1024;
    + }
    return 0;
    }

    @@ -840,17 +857,23 @@ static int check_volume_empty(struct ubifs_info *c)
    *
    * Opt_fast_unmount: do not run a journal commit before un-mounting
    * Opt_norm_unmount: run a journal commit before un-mounting
    + * Opt_bulk_read: enable bulk-reads
    + * Opt_no_bulk_read: disable bulk-reads
    * Opt_err: just end of array marker
    */
    enum {
    Opt_fast_unmount,
    Opt_norm_unmount,
    + Opt_bulk_read,
    + Opt_no_bulk_read,
    Opt_err,
    };

    static match_table_t tokens = {
    {Opt_fast_unmount, "fast_unmount"},
    {Opt_norm_unmount, "norm_unmount"},
    + {Opt_bulk_read, "bulk_read"},
    + {Opt_no_bulk_read, "no_bulk_read"},
    {Opt_err, NULL},
    };

    @@ -888,6 +911,14 @@ static int ubifs_parse_options(struct ubifs_info *c, char *options,
    c->mount_opts.unmount_mode = 1;
    c->fast_unmount = 0;
    break;
    + case Opt_bulk_read:
    + c->mount_opts.bulk_read = 2;
    + c->bulk_read = 1;
    + break;
    + case Opt_no_bulk_read:
    + c->mount_opts.bulk_read = 1;
    + c->bulk_read = 0;
    + break;
    default:
    ubifs_err("unrecognized mount option \"%s\" "
    "or missing value", p);
    diff --git a/fs/ubifs/tnc.c b/fs/ubifs/tnc.c
    index ba13c92..d279012 100644
    --- a/fs/ubifs/tnc.c
    +++ b/fs/ubifs/tnc.c
    @@ -1492,6 +1492,289 @@ out:
    }

    /**
    + * ubifs_tnc_get_bu_keys - lookup keys for bulk-read.
    + * @c: UBIFS file-system description object
    + * @bu: bulk-read parameters and results
    + *
    + * Lookup consecutive data node keys for the same inode that reside
    + * consecutively in the same LEB.
    + */
    +int ubifs_tnc_get_bu_keys(struct ubifs_info *c, struct bu_info *bu)
    +{
    + int n, err = 0, lnum = -1, uninitialized_var(offs);
    + int uninitialized_var(len);
    + unsigned int block = key_block(c, &bu->key);
    + struct ubifs_znode *znode;
    +
    + bu->cnt = 0;
    + bu->blk_cnt = 0;
    + bu->eof = 0;
    +
    + mutex_lock(&c->tnc_mutex);
    + /* Find first key */
    + err = ubifs_lookup_level0(c, &bu->key, &znode, &n);
    + if (err < 0)
    + goto out;
    + if (err) {
    + /* Key found */
    + len = znode->zbranch[n].len;
    + /* The buffer must be big enough for at least 1 node */
    + if (len > bu->buf_len) {
    + err = -EINVAL;
    + goto out;
    + }
    + /* Add this key */
    + bu->zbranch[bu->cnt++] = znode->zbranch[n];
    + bu->blk_cnt += 1;
    + lnum = znode->zbranch[n].lnum;
    + offs = ALIGN(znode->zbranch[n].offs + len, 8);
    + }
    + while (1) {
    + struct ubifs_zbranch *zbr;
    + union ubifs_key *key;
    + unsigned int next_block;
    +
    + /* Find next key */
    + err = tnc_next(c, &znode, &n);
    + if (err)
    + goto out;
    + zbr = &znode->zbranch[n];
    + key = &zbr->key;
    + /* See if there is another data key for this file */
    + if (key_inum(c, key) != key_inum(c, &bu->key) ||
    + key_type(c, key) != UBIFS_DATA_KEY) {
    + err = -ENOENT;
    + goto out;
    + }
    + if (lnum < 0) {
    + /* First key found */
    + lnum = zbr->lnum;
    + offs = ALIGN(zbr->offs + zbr->len, 8);
    + len = zbr->len;
    + if (len > bu->buf_len) {
    + err = -EINVAL;
    + goto out;
    + }
    + } else {
    + /*
    + * The data nodes must be in consecutive positions in
    + * the same LEB.
    + */
    + if (zbr->lnum != lnum || zbr->offs != offs)
    + goto out;
    + offs += ALIGN(zbr->len, 8);
    + len = ALIGN(len, 8) + zbr->len;
    + /* Must not exceed buffer length */
    + if (len > bu->buf_len)
    + goto out;
    + }
    + /* Allow for holes */
    + next_block = key_block(c, key);
    + bu->blk_cnt += (next_block - block - 1);
    + if (bu->blk_cnt >= UBIFS_MAX_BULK_READ)
    + goto out;
    + block = next_block;
    + /* Add this key */
    + bu->zbranch[bu->cnt++] = *zbr;
    + bu->blk_cnt += 1;
    + /* See if we have room for more */
    + if (bu->cnt >= UBIFS_MAX_BULK_READ)
    + goto out;
    + if (bu->blk_cnt >= UBIFS_MAX_BULK_READ)
    + goto out;
    + }
    +out:
    + if (err == -ENOENT) {
    + bu->eof = 1;
    + err = 0;
    + }
    + bu->gc_seq = c->gc_seq;
    + mutex_unlock(&c->tnc_mutex);
    + if (err)
    + return err;
    + /*
    + * An enormous hole could cause bulk-read to encompass too many
    + * page cache pages, so limit the number here.
    + */
    + if (bu->blk_cnt >= UBIFS_MAX_BULK_READ)
    + bu->blk_cnt = UBIFS_MAX_BULK_READ;
    + /*
    + * Ensure that bulk-read covers a whole number of page cache
    + * pages.
    + */
    + if (UBIFS_BLOCKS_PER_PAGE == 1 ||
    + !(bu->blk_cnt & (UBIFS_BLOCKS_PER_PAGE - 1)))
    + return 0;
    + if (bu->eof) {
    + /* At the end of file we can round up */
    + bu->blk_cnt += UBIFS_BLOCKS_PER_PAGE - 1;
    + return 0;
    + }
    + /* Exclude data nodes that do not make up a whole page cache page */
    + block = key_block(c, &bu->key) + bu->blk_cnt;
    + block &= ~(UBIFS_BLOCKS_PER_PAGE - 1);
    + while (bu->cnt) {
    + if (key_block(c, &bu->zbranch[bu->cnt - 1].key) < block)
    + break;
    + bu->cnt -= 1;
    + }
    + return 0;
    +}
    +
    +/**
    + * read_wbuf - bulk-read from a LEB with a wbuf.
    + * @wbuf: wbuf that may overlap the read
    + * @buf: buffer into which to read
    + * @len: read length
    + * @lnum: LEB number from which to read
    + * @offs: offset from which to read
    + *
+ * This function returns %0 on success or a negative error code on failure.
    + */
    +static int read_wbuf(struct ubifs_wbuf *wbuf, void *buf, int len, int lnum,
    + int offs)
    +{
    + const struct ubifs_info *c = wbuf->c;
    + int rlen, overlap;
    +
    + dbg_io("LEB %d:%d, length %d", lnum, offs, len);
    + ubifs_assert(wbuf && lnum >= 0 && lnum < c->leb_cnt && offs >= 0);
    + ubifs_assert(!(offs & 7) && offs < c->leb_size);
    + ubifs_assert(offs + len <= c->leb_size);
    +
    + spin_lock(&wbuf->lock);
    + overlap = (lnum == wbuf->lnum && offs + len > wbuf->offs);
    + if (!overlap) {
    + /* We may safely unlock the write-buffer and read the data */
    + spin_unlock(&wbuf->lock);
    + return ubi_read(c->ubi, lnum, buf, offs, len);
    + }
    +
    + /* Don't read under wbuf */
    + rlen = wbuf->offs - offs;
    + if (rlen < 0)
    + rlen = 0;
    +
    + /* Copy the rest from the write-buffer */
    + memcpy(buf + rlen, wbuf->buf + offs + rlen - wbuf->offs, len - rlen);
    + spin_unlock(&wbuf->lock);
    +
    + if (rlen > 0)
    + /* Read everything that goes before write-buffer */
    + return ubi_read(c->ubi, lnum, buf, offs, rlen);
    +
    + return 0;
    +}
    +
    +/**
    + * validate_data_node - validate data nodes for bulk-read.
    + * @c: UBIFS file-system description object
    + * @buf: buffer containing data node to validate
    + * @zbr: zbranch of data node to validate
    + *
+ * This function returns %0 on success or a negative error code on failure.
    + */
    +static int validate_data_node(struct ubifs_info *c, void *buf,
    + struct ubifs_zbranch *zbr)
    +{
    + union ubifs_key key1;
    + struct ubifs_ch *ch = buf;
    + int err, len;
    +
    + if (ch->node_type != UBIFS_DATA_NODE) {
    + ubifs_err("bad node type (%d but expected %d)",
    + ch->node_type, UBIFS_DATA_NODE);
    + goto out_err;
    + }
    +
    + err = ubifs_check_node(c, buf, zbr->lnum, zbr->offs, 0);
    + if (err) {
    + ubifs_err("expected node type %d", UBIFS_DATA_NODE);
    + goto out;
    + }
    +
    + len = le32_to_cpu(ch->len);
    + if (len != zbr->len) {
    + ubifs_err("bad node length %d, expected %d", len, zbr->len);
    + goto out_err;
    + }
    +
    + /* Make sure the key of the read node is correct */
    + key_read(c, buf + UBIFS_KEY_OFFSET, &key1);
    + if (!keys_eq(c, &zbr->key, &key1)) {
    + ubifs_err("bad key in node at LEB %d:%d",
    + zbr->lnum, zbr->offs);
    + dbg_tnc("looked for key %s found node's key %s",
    + DBGKEY(&zbr->key), DBGKEY1(&key1));
    + goto out_err;
    + }
    +
    + return 0;
    +
    +out_err:
    + err = -EINVAL;
    +out:
    + ubifs_err("bad node at LEB %d:%d", zbr->lnum, zbr->offs);
    + dbg_dump_node(c, buf);
    + dbg_dump_stack();
    + return err;
    +}
    +
    +/**
    + * ubifs_tnc_bulk_read - read a number of data nodes in one go.
    + * @c: UBIFS file-system description object
    + * @bu: bulk-read parameters and results
    + *
+ * This function reads and validates the data nodes that were identified by the
+ * 'ubifs_tnc_get_bu_keys()' function. This function returns %0 on success,
    + * -EAGAIN to indicate a race with GC, or another negative error code on
    + * failure.
    + */
    +int ubifs_tnc_bulk_read(struct ubifs_info *c, struct bu_info *bu)
    +{
    + int lnum = bu->zbranch[0].lnum, offs = bu->zbranch[0].offs, len, err, i;
    + struct ubifs_wbuf *wbuf;
    + void *buf;
    +
    + len = bu->zbranch[bu->cnt - 1].offs;
    + len += bu->zbranch[bu->cnt - 1].len - offs;
    + if (len > bu->buf_len) {
    + ubifs_err("buffer too small %d vs %d", bu->buf_len, len);
    + return -EINVAL;
    + }
    +
    + /* Do the read */
    + wbuf = ubifs_get_wbuf(c, lnum);
    + if (wbuf)
    + err = read_wbuf(wbuf, bu->buf, len, lnum, offs);
    + else
    + err = ubi_read(c->ubi, lnum, bu->buf, offs, len);
    +
    + /* Check for a race with GC */
    + if (maybe_leb_gced(c, lnum, bu->gc_seq))
    + return -EAGAIN;
    +
    + if (err && err != -EBADMSG) {
    + ubifs_err("failed to read from LEB %d:%d, error %d",
    + lnum, offs, err);
    + dbg_dump_stack();
    + dbg_tnc("key %s", DBGKEY(&bu->key));
    + return err;
    + }
    +
    + /* Validate the nodes read */
    + buf = bu->buf;
    + for (i = 0; i < bu->cnt; i++) {
    + err = validate_data_node(c, buf, &bu->zbranch[i]);
    + if (err)
    + return err;
    + buf = buf + ALIGN(bu->zbranch[i].len, 8);
    + }
    +
    + return 0;
    +}
    +
    +/**
* do_lookup_nm - look up a "hashed" node.
    * @c: UBIFS file-system description object
    * @key: node key to lookup
    diff --git a/fs/ubifs/ubifs.h b/fs/ubifs/ubifs.h
    index ce86549..8513239 100644
    --- a/fs/ubifs/ubifs.h
    +++ b/fs/ubifs/ubifs.h
    @@ -142,6 +142,9 @@
    /* Maximum expected tree height for use by bottom_up_buf */
    #define BOTTOM_UP_HEIGHT 64

    +/* Maximum number of data nodes to bulk-read */
    +#define UBIFS_MAX_BULK_READ 32
    +
    /*
    * Lockdep classes for UBIFS inode @ui_mutex.
    */
    @@ -329,8 +332,8 @@ struct ubifs_gced_idx_leb {
    * @dirty: non-zero if the inode is dirty
    * @xattr: non-zero if this is an extended attribute inode
    * @ui_mutex: serializes inode write-back with the rest of VFS operations,
    - * serializes "clean <-> dirty" state changes, protects @dirty,
    - * @ui_size, and @xattr_size
    + * serializes "clean <-> dirty" state changes, serializes bulk-read,
    + * protects @dirty, @ui_size, and @xattr_size
    * @ui_lock: protects @synced_i_size
    * @synced_i_size: synchronized size of inode, i.e. the value of inode size
    * currently stored on the flash; used only for regular file
    @@ -338,6 +341,9 @@ struct ubifs_gced_idx_leb {
    * @ui_size: inode size used by UBIFS when writing to flash
    * @flags: inode flags (@UBIFS_COMPR_FL, etc)
    * @compr_type: default compression type used for this inode
    + * @last_page_read: page number of last page read (for bulk read)
    + * @read_in_a_row: number of consecutive pages read in a row (for bulk read)
    + * @bulk_read: indicates whether bulk-read should be used
    * @data_len: length of the data attached to the inode
    * @data: inode's data
    *
    @@ -385,6 +391,9 @@ struct ubifs_inode {
    loff_t ui_size;
    int flags;
    int compr_type;
    + pgoff_t last_page_read;
    + pgoff_t read_in_a_row;
    + int bulk_read;
    int data_len;
    void *data;
    };
    @@ -744,6 +753,28 @@ struct ubifs_znode {
    };

    /**
    + * struct bu_info - bulk-read information
    + * @key: first data node key
    + * @zbranch: zbranches of data nodes to bulk read
    + * @buf: buffer to read into
    + * @buf_len: buffer length
    + * @gc_seq: GC sequence number to detect races with GC
    + * @cnt: number of data nodes for bulk read
    + * @blk_cnt: number of data blocks including holes
+ * @eof: end of file reached
    + */
    +struct bu_info {
    + union ubifs_key key;
    + struct ubifs_zbranch zbranch[UBIFS_MAX_BULK_READ];
    + void *buf;
    + int buf_len;
    + int gc_seq;
    + int cnt;
    + int blk_cnt;
    + int eof;
    +};
    +
    +/**
    * struct ubifs_node_range - node length range description data structure.
    * @len: fixed node length
    * @min_len: minimum possible node length
    @@ -862,9 +893,11 @@ struct ubifs_orphan {
    /**
    * struct ubifs_mount_opts - UBIFS-specific mount options information.
    * @unmount_mode: selected unmount mode (%0 default, %1 normal, %2 fast)
    + * @bulk_read: enable bulk-reads
    */
    struct ubifs_mount_opts {
    unsigned int unmount_mode:2;
    + unsigned int bulk_read:2;
    };

    /**
    @@ -965,6 +998,9 @@ struct ubifs_mount_opts {
    * @old_leb_cnt: count of logical eraseblocks before re-size
    * @ro_media: the underlying UBI volume is read-only
    *
    + * @bulk_read: enable bulk-reads
    + * @bulk_read_buf_size: buffer size for bulk-reads
    + *
    * @dirty_pg_cnt: number of dirty pages (not used)
    * @dirty_zn_cnt: number of dirty znodes
    * @clean_zn_cnt: number of clean znodes
    @@ -1205,6 +1241,9 @@ struct ubifs_info {
    int old_leb_cnt;
    int ro_media;

    + int bulk_read;
    + int bulk_read_buf_size;
    +
    atomic_long_t dirty_pg_cnt;
    atomic_long_t dirty_zn_cnt;
    atomic_long_t clean_zn_cnt;
    @@ -1490,6 +1529,8 @@ void destroy_old_idx(struct ubifs_info *c);
    int is_idx_node_in_tnc(struct ubifs_info *c, union ubifs_key *key, int level,
    int lnum, int offs);
    int insert_old_idx_znode(struct ubifs_info *c, struct ubifs_znode *znode);
    +int ubifs_tnc_get_bu_keys(struct ubifs_info *c, struct bu_info *bu);
    +int ubifs_tnc_bulk_read(struct ubifs_info *c, struct bu_info *bu);

    /* tnc_misc.c */
    struct ubifs_znode *ubifs_tnc_levelorder_next(struct ubifs_znode *zr,
    --
    1.5.4.1

    --
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  19. Re: What is in ubifs-2.6.git


    On Tue, 2008-09-30 at 12:18 +0300, Artem Bityutskiy wrote:
    > addresses. For example, if user-space asks to read page zero of a
    > file, and page 0-4 are in consecutive flash addressed, UBIFS reads
    > pages 0-4 and populates them to the Page Cache.


Actually this example is a little wrong, because UBIFS starts
doing bulk-read only if a few pages were read sequentially before.
So, the user would have to read pages 0-3 sequentially, and then
UBIFS would start doing bulk-read for this inode.

    --
    Best regards,
    Artem Bityutskiy (Битюцкий Артём)


  20. [PATCH] UBIFS: allow for sync_fs when read-only

    From: Adrian Hunter

    sync_fs can be called even if the file system is mounted
    read-only. Ensure the commit is not run in that case.

    Reported-by: Zoltan Sogor
    Signed-off-by: Adrian Hunter
    ---
    fs/ubifs/super.c | 19 ++++++++++---------
    1 files changed, 10 insertions(+), 9 deletions(-)

    diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
    index 7e1f3ef..7fd759d 100644
    --- a/fs/ubifs/super.c
    +++ b/fs/ubifs/super.c
    @@ -420,21 +420,22 @@ static int ubifs_sync_fs(struct super_block *sb, int wait)
    int i, ret = 0, err;
    long long bud_bytes;

    - if (c->jheads)
    + if (c->jheads) {
    for (i = 0; i < c->jhead_cnt; i++) {
    err = ubifs_wbuf_sync(&c->jheads[i].wbuf);
    if (err && !ret)
    ret = err;
    }

    - /* Commit the journal unless it has too few data */
    - spin_lock(&c->buds_lock);
    - bud_bytes = c->bud_bytes;
    - spin_unlock(&c->buds_lock);
    - if (bud_bytes > c->leb_size) {
    - err = ubifs_run_commit(c);
    - if (err)
    - return err;
    + /* Commit the journal unless it has too little data */
    + spin_lock(&c->buds_lock);
    + bud_bytes = c->bud_bytes;
    + spin_unlock(&c->buds_lock);
    + if (bud_bytes > c->leb_size) {
    + err = ubifs_run_commit(c);
    + if (err)
    + return err;
    + }
    }

    /*
    --
    1.5.4.1

