[PATCH 00/24] Introduce credential record - Kernel



  1. [PATCH 00/24] Introduce credential record



    Hi Al, Christoph, Trond, Stephen, Casey,

    Here's a set of patches that implement a very basic set of COW credentials. It
    compiles, links and runs for x86_64 with EXT3, (V)FAT, NFS, AFS, SELinux and
    keyrings all enabled. I've included a patch that should make most of the other
    archs and filesystems work, but I haven't yet merged it into the primary
    patches.

    A tarball of the patches can be retrieved from:

    http://people.redhat.com/~dhowells/n...che-24.tar.bz2


    The cred struct contains the credentials that the kernel needs to act upon
    something or to create something. Credentials that govern how a task may be
    acted upon remain in the task struct.

    In essence, the introduction of the cred struct separates a task's subjective
    context (the authority with which it acts) from its objective context (the
    authorisation required by others that want to act upon it), and permits
    overriding of the subjective context by a kernel service so that the service
    can act on the task's behalf to do something the task couldn't do on its own
    authority.

    Because keyrings and effective capabilities can be installed or changed in one
    process by another process, they are shadowed by the cred structure rather than
    residing there. Additionally, the session and process keyrings are shared
    between all the threads of a process. The shadowing is performed by
    update_current_cred() which is invoked on entry to any system call that might
    need it.

    A thread's cred struct may be read by that thread without any RCU precautions
    as only that thread may replace its own cred struct. To change a thread's
    credentials, dup_cred() should be called to create a new copy, the copy should
    be changed, and then set_current_cred() should be called to make it live. Once
    live, it may not be changed as it may then be shared with file descriptors, RPC
    calls and other threads. RCU will be used to dispose of the old structure.


    The six patches are:

    (1) Introduce struct cred and migrate fsuid, fsgid, the groups list and the
    keyrings pointer to it.

    (2) Introduce a security pointer into the cred struct and add LSM hooks to
    duplicate the information pointed to thereby and to free it.

    Make SELinux implement the hooks, splitting out some of the task security
    data to be associated with struct cred instead.

    (3) Make the security functions that permit task SID retrieval return both the
    objective and subjective SIDs as required.

    (4) Migrate the effective capabilities mask into the cred struct.

    (5) Fix up all the other archs and filesystems that I can manage to compile.
    This should be merged into the preceding patches at some point.

    (6) Provide a pair of LSM hooks so that a kernel service can (a) get a
    credential record representing the authority with which it is permitted to
    act, and (b) alter the file creation context in a credential record.

    In addition, as this works with cachefiles, I've included all the FS-Cache,
    CacheFiles, NFS and AFS patches.

    To substitute a temporary set of credentials, the cred struct attached to the
    task should be altered, like so:

    int get_privileged_creds(...)
    {
            /* get special privileged creds */
            my_special_cred = get_kernel_cred("cachefiles", current);
            change_create_files_as(my_special_cred, my_cache_dir);
    }

    int do_stuff(...)
    {
            struct cred *cred;

            /* rotate in the new creds, saving the old */
            cred = __set_current_cred(get_cred(my_special_cred));

            do_privileged_stuff();

            /* restore the old creds */
            set_current_cred(cred);
    }

    One thing I'm not certain about is how this should interact with /proc, which
    can display some of the stuff in the cred struct. I think it may be necessary
    to have a real cred pointer and an effective cred pointer, with the contents of
    /proc coming from the real, but the effective governing what actually goes on.

    Furthermore, I was thinking that it was a good idea to move the setting of i_uid
    and i_gid to current->cred->i_[ug]id into new_inode(), but now I'm not so sure,
    since the kernel special filesystems may assume that the i_uid and i_gid
    default to 0. Any thoughts on this?

    The NFS FS-Cache sharing patch still needs fixing up to correctly do the
    sharing thing when local caching is enabled.

    David
    -
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  2. [PATCH 02/24] CRED: Split the task security data and move part of it into struct cred

    Move into the cred struct the part of the task security data that defines how a
    task acts upon an object. The part that defines how something acts upon a task
    remains attached to the task.

    For SELinux this requires some of task_security_struct to be split off into
    cred_security_struct which is then attached to struct cred. Note that the
    contents of cred_security_struct may not be changed without the generation of a
    new struct cred.

    The split is as follows:

    (*) create_sid, keycreate_sid and sockcreate_sid just move across.

    (*) sid is split into victim_sid - which remains - and action_sid - which
    migrates.

    (*) osid, exec_sid and ptrace_sid remain.

    victim_sid is the SID used to govern actions upon the task. action_sid is used
    to govern actions made by the task.

    When accessing the cred_security_struct of another process, RCU read procedures
    must be observed.

    Signed-off-by: David Howells
    ---

    include/linux/cred.h              |    1
    include/linux/security.h          |   33 ++
    kernel/cred.c                     |    7 +
    security/dummy.c                  |   11 +
    security/selinux/exports.c        |    6
    security/selinux/hooks.c          |  497 +++++++++++++++++++++++--------------
    security/selinux/include/objsec.h |   16 +
    security/selinux/selinuxfs.c      |    8 -
    security/selinux/xfrm.c           |    6
    9 files changed, 379 insertions(+), 206 deletions(-)

    diff --git a/include/linux/cred.h b/include/linux/cred.h
    index 0cc4400..7e35b2f 100644
    --- a/include/linux/cred.h
    +++ b/include/linux/cred.h
    @@ -26,6 +26,7 @@ struct cred {
    gid_t gid; /* fsgid as was */
    struct rcu_head exterminate; /* cred destroyer */
    struct group_info *group_info;
    + void *security;

    /* caches for references to the three task keyrings
    * - note that key_ref_t isn't typedef'd at this point, hence the odd
    diff --git a/include/linux/security.h b/include/linux/security.h
    index 1a15526..74cc204 100644
    --- a/include/linux/security.h
    +++ b/include/linux/security.h
    @@ -504,6 +504,17 @@ struct request_sock;
    * @file contains the file structure being received.
    * Return 0 if permission is granted.
    *
    + * Security hooks for credential structure operations.
    + *
    + * @cred_dup:
    + * Duplicate the credentials onto a duplicated cred structure.
    + * @cred points to the credentials structure. cred->security points to the
    + * security struct that was attached to the original cred struct, but it
    + * lacks a reference for the duplication if reference counting is needed.
    + * @cred_destroy:
    + * Destroy the credentials attached to a cred structure.
    + * @cred points to the credentials structure that is to be destroyed.
    + *
    * Security hooks for task operations.
    *
    * @task_create:
    @@ -1257,6 +1268,9 @@ struct security_operations {
    struct fown_struct * fown, int sig);
    int (*file_receive) (struct file * file);

    + int (*cred_dup)(struct cred *cred);
    + void (*cred_destroy)(struct cred *cred);
    +
    int (*task_create) (unsigned long clone_flags);
    int (*task_alloc_security) (struct task_struct * p);
    void (*task_free_security) (struct task_struct * p);
    @@ -1864,6 +1878,16 @@ static inline int security_file_receive (struct file *file)
    return security_ops->file_receive (file);
    }

    +static inline int security_cred_dup(struct cred *cred)
    +{
    + return security_ops->cred_dup(cred);
    +}
    +
    +static inline void security_cred_destroy(struct cred *cred)
    +{
    + return security_ops->cred_destroy(cred);
    +}
    +
    static inline int security_task_create (unsigned long clone_flags)
    {
    return security_ops->task_create (clone_flags);
    @@ -2546,6 +2570,15 @@ static inline int security_file_receive (struct file *file)
    return 0;
    }

    +static inline int security_cred_dup(struct cred *cred)
    +{
    + return 0;
    +}
    +
    +static inline void security_cred_destroy(struct cred *cred)
    +{
    +}
    +
    static inline int security_task_create (unsigned long clone_flags)
    {
    return 0;
    diff --git a/kernel/cred.c b/kernel/cred.c
    index 5b56b2b..9868eef 100644
    --- a/kernel/cred.c
    +++ b/kernel/cred.c
    @@ -94,6 +94,12 @@ struct cred *dup_cred(const struct cred *pcred)
    if (likely(cred)) {
    *cred = *pcred;
    atomic_set(&cred->usage, 1);
    +
    + if (security_cred_dup(cred) < 0) {
    + kfree(cred);
    + return NULL;
    + }
    +
    get_group_info(cred->group_info);
    #ifdef CONFIG_KEYS
    key_get(key_ref_to_ptr(cred->session_keyring));
    @@ -113,6 +119,7 @@ static void put_cred_rcu(struct rcu_head *rcu)
    {
    struct cred *cred = container_of(rcu, struct cred, exterminate);

    + security_cred_destroy(cred);
    put_group_info(cred->group_info);
    key_ref_put(cred->session_keyring);
    key_ref_put(cred->process_keyring);
    diff --git a/security/dummy.c b/security/dummy.c
    index 62de89c..f535cc6 100644
    --- a/security/dummy.c
    +++ b/security/dummy.c
    @@ -468,6 +468,15 @@ static int dummy_file_receive (struct file *file)
    return 0;
    }

    +static int dummy_cred_dup(struct cred *cred)
    +{
    + return 0;
    +}
    +
    +static void dummy_cred_destroy(struct cred *cred)
    +{
    +}
    +
    static int dummy_task_create (unsigned long clone_flags)
    {
    return 0;
    @@ -1038,6 +1047,8 @@ void security_fixup_ops (struct security_operations *ops)
    set_to_dummy_if_null(ops, file_set_fowner);
    set_to_dummy_if_null(ops, file_send_sigiotask);
    set_to_dummy_if_null(ops, file_receive);
    + set_to_dummy_if_null(ops, cred_dup);
    + set_to_dummy_if_null(ops, cred_destroy);
    set_to_dummy_if_null(ops, task_create);
    set_to_dummy_if_null(ops, task_alloc_security);
    set_to_dummy_if_null(ops, task_free_security);
    diff --git a/security/selinux/exports.c b/security/selinux/exports.c
    index b6f9694..29cb87a 100644
    --- a/security/selinux/exports.c
    +++ b/security/selinux/exports.c
    @@ -57,7 +57,7 @@ void selinux_get_task_sid(struct task_struct *tsk, u32 *sid)
    {
    if (selinux_enabled) {
    struct task_security_struct *tsec = tsk->security;
    - *sid = tsec->sid;
    + *sid = tsec->victim_sid;
    return;
    }
    *sid = 0;
    @@ -77,9 +77,9 @@ EXPORT_SYMBOL_GPL(selinux_string_to_sid);
    int selinux_relabel_packet_permission(u32 sid)
    {
    if (selinux_enabled) {
    - struct task_security_struct *tsec = current->security;
    + struct cred_security_struct *csec = current->cred->security;

    - return avc_has_perm(tsec->sid, sid, SECCLASS_PACKET,
    + return avc_has_perm(csec->action_sid, sid, SECCLASS_PACKET,
    PACKET__RELABELTO, NULL);
    }
    return 0;
    diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
    index 0753b20..4e72dbb 100644
    --- a/security/selinux/hooks.c
    +++ b/security/selinux/hooks.c
    @@ -162,7 +162,8 @@ static int task_alloc_security(struct task_struct *task)
    return -ENOMEM;

    tsec->task = task;
    - tsec->osid = tsec->sid = tsec->ptrace_sid = SECINITSID_UNLABELED;
    + tsec->osid = tsec->victim_sid = tsec->ptrace_sid =
    + SECINITSID_UNLABELED;
    task->security = tsec;

    return 0;
    @@ -177,7 +178,7 @@ static void task_free_security(struct task_struct *task)

    static int inode_alloc_security(struct inode *inode)
    {
    - struct task_security_struct *tsec = current->security;
    + struct cred_security_struct *csec = current->cred->security;
    struct inode_security_struct *isec;

    isec = kmem_cache_zalloc(sel_inode_cache, GFP_KERNEL);
    @@ -189,7 +190,7 @@ static int inode_alloc_security(struct inode *inode)
    isec->inode = inode;
    isec->sid = SECINITSID_UNLABELED;
    isec->sclass = SECCLASS_FILE;
    - isec->task_sid = tsec->sid;
    + isec->task_sid = csec->action_sid;
    inode->i_security = isec;

    return 0;
    @@ -211,7 +212,7 @@ static void inode_free_security(struct inode *inode)

    static int file_alloc_security(struct file *file)
    {
    - struct task_security_struct *tsec = current->security;
    + struct cred_security_struct *csec = current->cred->security;
    struct file_security_struct *fsec;

    fsec = kzalloc(sizeof(struct file_security_struct), GFP_KERNEL);
    @@ -219,8 +220,8 @@ static int file_alloc_security(struct file *file)
    return -ENOMEM;

    fsec->file = file;
    - fsec->sid = tsec->sid;
    - fsec->fown_sid = tsec->sid;
    + fsec->sid = csec->action_sid;
    + fsec->fown_sid = csec->action_sid;
    file->f_security = fsec;

    return 0;
    @@ -335,26 +336,26 @@ static match_table_t tokens = {

    static int may_context_mount_sb_relabel(u32 sid,
    struct superblock_security_struct *sbsec,
    - struct task_security_struct *tsec)
    + struct cred_security_struct *csec)
    {
    int rc;

    - rc = avc_has_perm(tsec->sid, sbsec->sid, SECCLASS_FILESYSTEM,
    + rc = avc_has_perm(csec->action_sid, sbsec->sid, SECCLASS_FILESYSTEM,
    FILESYSTEM__RELABELFROM, NULL);
    if (rc)
    return rc;

    - rc = avc_has_perm(tsec->sid, sid, SECCLASS_FILESYSTEM,
    + rc = avc_has_perm(csec->action_sid, sid, SECCLASS_FILESYSTEM,
    FILESYSTEM__RELABELTO, NULL);
    return rc;
    }

    static int may_context_mount_inode_relabel(u32 sid,
    struct superblock_security_struct *sbsec,
    - struct task_security_struct *tsec)
    + struct cred_security_struct *csec)
    {
    int rc;
    - rc = avc_has_perm(tsec->sid, sbsec->sid, SECCLASS_FILESYSTEM,
    + rc = avc_has_perm(csec->action_sid, sbsec->sid, SECCLASS_FILESYSTEM,
    FILESYSTEM__RELABELFROM, NULL);
    if (rc)
    return rc;
    @@ -371,7 +372,7 @@ static int try_context_mount(struct super_block *sb, void *data)
    const char *name;
    u32 sid;
    int alloc = 0, rc = 0, seen = 0;
    - struct task_security_struct *tsec = current->security;
    + struct cred_security_struct *csec = current->cred->security;
    struct superblock_security_struct *sbsec = sb->s_security;

    if (!data)
    @@ -503,7 +504,7 @@ static int try_context_mount(struct super_block *sb, void *data)
    goto out_free;
    }

    - rc = may_context_mount_sb_relabel(sid, sbsec, tsec);
    + rc = may_context_mount_sb_relabel(sid, sbsec, csec);
    if (rc)
    goto out_free;

    @@ -525,12 +526,12 @@ static int try_context_mount(struct super_block *sb, void *data)
    }

    if (!fscontext) {
    - rc = may_context_mount_sb_relabel(sid, sbsec, tsec);
    + rc = may_context_mount_sb_relabel(sid, sbsec, csec);
    if (rc)
    goto out_free;
    sbsec->sid = sid;
    } else {
    - rc = may_context_mount_inode_relabel(sid, sbsec, tsec);
    + rc = may_context_mount_inode_relabel(sid, sbsec, csec);
    if (rc)
    goto out_free;
    }
    @@ -550,7 +551,7 @@ static int try_context_mount(struct super_block *sb, void *data)
    goto out_free;
    }

    - rc = may_context_mount_inode_relabel(sid, sbsec, tsec);
    + rc = may_context_mount_inode_relabel(sid, sbsec, csec);
    if (rc)
    goto out_free;

    @@ -570,7 +571,7 @@ static int try_context_mount(struct super_block *sb, void *data)
    if (sid == sbsec->def_sid)
    goto out_free;

    - rc = may_context_mount_inode_relabel(sid, sbsec, tsec);
    + rc = may_context_mount_inode_relabel(sid, sbsec, csec);
    if (rc)
    goto out_free;

    @@ -1025,15 +1026,22 @@ static inline u32 signal_to_av(int sig)

    /* Check permission betweeen a pair of tasks, e.g. signal checks,
    fork check, ptrace check, etc. */
    -static int task_has_perm(struct task_struct *tsk1,
    - struct task_struct *tsk2,
    +static int task_has_perm(struct task_struct *actor,
    + struct task_struct *victim,
    u32 perms)
    {
    - struct task_security_struct *tsec1, *tsec2;
    + struct cred_security_struct *csec;
    + struct task_security_struct *tsec;
    + u32 action_sid;
    +
    + /* the actor may not be the current task */
    + rcu_read_lock();
    + csec = task_cred(actor)->security;
    + action_sid = csec->action_sid;
    + rcu_read_unlock();

    - tsec1 = tsk1->security;
    - tsec2 = tsk2->security;
    - return avc_has_perm(tsec1->sid, tsec2->sid,
    + tsec = victim->security;
    + return avc_has_perm(action_sid, tsec->victim_sid,
    SECCLASS_PROCESS, perms, NULL);
    }

    @@ -1041,16 +1049,16 @@ static int task_has_perm(struct task_struct *tsk1,
    static int task_has_capability(struct task_struct *tsk,
    int cap)
    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct avc_audit_data ad;

    - tsec = tsk->security;
    + csec = tsk->cred->security;

    AVC_AUDIT_DATA_INIT(&ad,CAP);
    ad.tsk = tsk;
    ad.u.cap = cap;

    - return avc_has_perm(tsec->sid, tsec->sid,
    + return avc_has_perm(csec->action_sid, csec->action_sid,
    SECCLASS_CAPABILITY, CAP_TO_MASK(cap), &ad);
    }

    @@ -1058,11 +1066,11 @@ static int task_has_capability(struct task_struct *tsk,
    static int task_has_system(struct task_struct *tsk,
    u32 perms)
    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;

    - tsec = tsk->security;
    + csec = tsk->cred->security;

    - return avc_has_perm(tsec->sid, SECINITSID_KERNEL,
    + return avc_has_perm(csec->action_sid, SECINITSID_KERNEL,
    SECCLASS_SYSTEM, perms, NULL);
    }

    @@ -1074,14 +1082,14 @@ static int inode_has_perm(struct task_struct *tsk,
    u32 perms,
    struct avc_audit_data *adp)
    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct inode_security_struct *isec;
    struct avc_audit_data ad;

    if (unlikely (IS_PRIVATE (inode)))
    return 0;

    - tsec = tsk->security;
    + csec = tsk->cred->security;
    isec = inode->i_security;

    if (!adp) {
    @@ -1090,7 +1098,8 @@ static int inode_has_perm(struct task_struct *tsk,
    ad.u.fs.inode = inode;
    }

    - return avc_has_perm(tsec->sid, isec->sid, isec->sclass, perms, adp);
    + return avc_has_perm(csec->action_sid, isec->sid, isec->sclass, perms,
    + adp);
    }

    /* Same as inode_has_perm, but pass explicit audit data containing
    @@ -1121,7 +1130,7 @@ static int file_has_perm(struct task_struct *tsk,
    struct file *file,
    u32 av)
    {
    - struct task_security_struct *tsec = tsk->security;
    + struct cred_security_struct *csec = tsk->cred->security;
    struct file_security_struct *fsec = file->f_security;
    struct vfsmount *mnt = file->f_path.mnt;
    struct dentry *dentry = file->f_path.dentry;
    @@ -1133,8 +1142,8 @@ static int file_has_perm(struct task_struct *tsk,
    ad.u.fs.mnt = mnt;
    ad.u.fs.dentry = dentry;

    - if (tsec->sid != fsec->sid) {
    - rc = avc_has_perm(tsec->sid, fsec->sid,
    + if (csec->action_sid != fsec->sid) {
    + rc = avc_has_perm(csec->action_sid, fsec->sid,
    SECCLASS_FD,
    FD__USE,
    &ad);
    @@ -1154,36 +1163,36 @@ static int may_create(struct inode *dir,
    struct dentry *dentry,
    u16 tclass)
    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct inode_security_struct *dsec;
    struct superblock_security_struct *sbsec;
    u32 newsid;
    struct avc_audit_data ad;
    int rc;

    - tsec = current->security;
    + csec = current->cred->security;
    dsec = dir->i_security;
    sbsec = dir->i_sb->s_security;

    AVC_AUDIT_DATA_INIT(&ad, FS);
    ad.u.fs.dentry = dentry;

    - rc = avc_has_perm(tsec->sid, dsec->sid, SECCLASS_DIR,
    + rc = avc_has_perm(csec->action_sid, dsec->sid, SECCLASS_DIR,
    DIR__ADD_NAME | DIR__SEARCH,
    &ad);
    if (rc)
    return rc;

    - if (tsec->create_sid && sbsec->behavior != SECURITY_FS_USE_MNTPOINT) {
    - newsid = tsec->create_sid;
    + if (csec->create_sid && sbsec->behavior != SECURITY_FS_USE_MNTPOINT) {
    + newsid = csec->create_sid;
    } else {
    - rc = security_transition_sid(tsec->sid, dsec->sid, tclass,
    - &newsid);
    + rc = security_transition_sid(csec->action_sid, dsec->sid,
    + tclass, &newsid);
    if (rc)
    return rc;
    }

    - rc = avc_has_perm(tsec->sid, newsid, tclass, FILE__CREATE, &ad);
    + rc = avc_has_perm(csec->action_sid, newsid, tclass, FILE__CREATE, &ad);
    if (rc)
    return rc;

    @@ -1196,11 +1205,12 @@ static int may_create(struct inode *dir,
    static int may_create_key(u32 ksid,
    struct task_struct *ctx)
    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;

    - tsec = ctx->security;
    + csec = ctx->cred->security;

    - return avc_has_perm(tsec->sid, ksid, SECCLASS_KEY, KEY__CREATE, NULL);
    + return avc_has_perm(csec->action_sid, ksid, SECCLASS_KEY, KEY__CREATE,
    + NULL);
    }

    #define MAY_LINK 0
    @@ -1213,13 +1223,13 @@ static int may_link(struct inode *dir,
    int kind)

    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct inode_security_struct *dsec, *isec;
    struct avc_audit_data ad;
    u32 av;
    int rc;

    - tsec = current->security;
    + csec = current->cred->security;
    dsec = dir->i_security;
    isec = dentry->d_inode->i_security;

    @@ -1228,7 +1238,7 @@ static int may_link(struct inode *dir,

    av = DIR__SEARCH;
    av |= (kind ? DIR__REMOVE_NAME : DIR__ADD_NAME);
    - rc = avc_has_perm(tsec->sid, dsec->sid, SECCLASS_DIR, av, &ad);
    + rc = avc_has_perm(csec->action_sid, dsec->sid, SECCLASS_DIR, av, &ad);
    if (rc)
    return rc;

    @@ -1247,7 +1257,7 @@ static int may_link(struct inode *dir,
    return 0;
    }

    - rc = avc_has_perm(tsec->sid, isec->sid, isec->sclass, av, &ad);
    + rc = avc_has_perm(csec->action_sid, isec->sid, isec->sclass, av, &ad);
    return rc;
    }

    @@ -1256,14 +1266,14 @@ static inline int may_rename(struct inode *old_dir,
    struct inode *new_dir,
    struct dentry *new_dentry)
    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct inode_security_struct *old_dsec, *new_dsec, *old_isec, *new_isec;
    struct avc_audit_data ad;
    u32 av;
    int old_is_dir, new_is_dir;
    int rc;

    - tsec = current->security;
    + csec = current->cred->security;
    old_dsec = old_dir->i_security;
    old_isec = old_dentry->d_inode->i_security;
    old_is_dir = S_ISDIR(old_dentry->d_inode->i_mode);
    @@ -1272,16 +1282,16 @@ static inline int may_rename(struct inode *old_dir,
    AVC_AUDIT_DATA_INIT(&ad, FS);

    ad.u.fs.dentry = old_dentry;
    - rc = avc_has_perm(tsec->sid, old_dsec->sid, SECCLASS_DIR,
    + rc = avc_has_perm(csec->action_sid, old_dsec->sid, SECCLASS_DIR,
    DIR__REMOVE_NAME | DIR__SEARCH, &ad);
    if (rc)
    return rc;
    - rc = avc_has_perm(tsec->sid, old_isec->sid,
    + rc = avc_has_perm(csec->action_sid, old_isec->sid,
    old_isec->sclass, FILE__RENAME, &ad);
    if (rc)
    return rc;
    if (old_is_dir && new_dir != old_dir) {
    - rc = avc_has_perm(tsec->sid, old_isec->sid,
    + rc = avc_has_perm(csec->action_sid, old_isec->sid,
    old_isec->sclass, DIR__REPARENT, &ad);
    if (rc)
    return rc;
    @@ -1291,15 +1301,17 @@ static inline int may_rename(struct inode *old_dir,
    av = DIR__ADD_NAME | DIR__SEARCH;
    if (new_dentry->d_inode)
    av |= DIR__REMOVE_NAME;
    - rc = avc_has_perm(tsec->sid, new_dsec->sid, SECCLASS_DIR, av, &ad);
    + rc = avc_has_perm(csec->action_sid, new_dsec->sid, SECCLASS_DIR, av,
    + &ad);
    if (rc)
    return rc;
    if (new_dentry->d_inode) {
    new_isec = new_dentry->d_inode->i_security;
    new_is_dir = S_ISDIR(new_dentry->d_inode->i_mode);
    - rc = avc_has_perm(tsec->sid, new_isec->sid,
    + rc = avc_has_perm(csec->action_sid, new_isec->sid,
    new_isec->sclass,
    - (new_is_dir ? DIR__RMDIR : FILE__UNLINK), &ad);
    + (new_is_dir ? DIR__RMDIR : FILE__UNLINK),
    + &ad);
    if (rc)
    return rc;
    }
    @@ -1313,12 +1325,12 @@ static int superblock_has_perm(struct task_struct *tsk,
    u32 perms,
    struct avc_audit_data *ad)
    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct superblock_security_struct *sbsec;

    - tsec = tsk->security;
    + csec = tsk->cred->security;
    sbsec = sb->s_security;
    - return avc_has_perm(tsec->sid, sbsec->sid, SECCLASS_FILESYSTEM,
    + return avc_has_perm(csec->action_sid, sbsec->sid, SECCLASS_FILESYSTEM,
    perms, ad);
    }

    @@ -1371,7 +1383,7 @@ static inline u32 file_to_av(struct file *file)

    static int selinux_ptrace(struct task_struct *parent, struct task_struct *child)
    {
    - struct task_security_struct *psec = parent->security;
    + struct cred_security_struct *psec;
    struct task_security_struct *csec = child->security;
    int rc;

    @@ -1381,8 +1393,12 @@ static int selinux_ptrace(struct task_struct *parent, struct task_struct *child)

    rc = task_has_perm(parent, child, PROCESS__PTRACE);
    /* Save the SID of the tracing process for later use in apply_creds. */
    - if (!(child->ptrace & PT_PTRACED) && !rc)
    - csec->ptrace_sid = psec->sid;
    + if (!(child->ptrace & PT_PTRACED) && !rc) {
    + rcu_read_lock();
    + psec = task_cred(parent)->security;
    + csec->ptrace_sid = psec->action_sid;
    + rcu_read_unlock();
    + }
    return rc;
    }

    @@ -1472,7 +1488,7 @@ static int selinux_sysctl(ctl_table *table, int op)
    {
    int error = 0;
    u32 av;
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    u32 tsid;
    int rc;

    @@ -1480,7 +1496,7 @@ static int selinux_sysctl(ctl_table *table, int op)
    if (rc)
    return rc;

    - tsec = current->security;
    + csec = current->cred->security;

    rc = selinux_sysctl_get_sid(table, (op == 0001) ?
    SECCLASS_DIR : SECCLASS_FILE, &tsid);
    @@ -1492,7 +1508,7 @@ static int selinux_sysctl(ctl_table *table, int op)
    /* The op values are "defined" in sysctl.c, thereby creating
    * a bad coupling between this module and sysctl.c */
    if(op == 001) {
    - error = avc_has_perm(tsec->sid, tsid,
    + error = avc_has_perm(csec->action_sid, tsid,
    SECCLASS_DIR, DIR__SEARCH, NULL);
    } else {
    av = 0;
    @@ -1501,7 +1517,7 @@ static int selinux_sysctl(ctl_table *table, int op)
    if (op & 002)
    av |= FILE__WRITE;
    if (av)
    - error = avc_has_perm(tsec->sid, tsid,
    + error = avc_has_perm(csec->action_sid, tsid,
    SECCLASS_FILE, av, NULL);
    }

    @@ -1589,11 +1605,11 @@ static int selinux_syslog(int type)
    static int selinux_vm_enough_memory(struct mm_struct *mm, long pages)
    {
    int rc, cap_sys_admin = 0;
    - struct task_security_struct *tsec = current->security;
    + struct cred_security_struct *csec = current->cred->security;

    rc = secondary_ops->capable(current, CAP_SYS_ADMIN);
    if (rc == 0)
    - rc = avc_has_perm_noaudit(tsec->sid, tsec->sid,
    + rc = avc_has_perm_noaudit(csec->action_sid, csec->action_sid,
    SECCLASS_CAPABILITY,
    CAP_TO_MASK(CAP_SYS_ADMIN),
    0,
    @@ -1626,6 +1642,7 @@ static int selinux_bprm_alloc_security(struct linux_binprm *bprm)
    static int selinux_bprm_set_security(struct linux_binprm *bprm)
    {
    struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct inode *inode = bprm->file->f_path.dentry->d_inode;
    struct inode_security_struct *isec;
    struct bprm_security_struct *bsec;
    @@ -1643,15 +1660,16 @@ static int selinux_bprm_set_security(struct linux_binprm *bprm)
    return 0;

    tsec = current->security;
    + csec = bprm->cred->security;
    isec = inode->i_security;

    /* Default to the current task SID. */
    - bsec->sid = tsec->sid;
    + bsec->sid = csec->action_sid;

    /* Reset fs, key, and sock SIDs on execve. */
    - tsec->create_sid = 0;
    - tsec->keycreate_sid = 0;
    - tsec->sockcreate_sid = 0;
    + csec->create_sid = 0;
    + csec->keycreate_sid = 0;
    + csec->sockcreate_sid = 0;

    if (tsec->exec_sid) {
    newsid = tsec->exec_sid;
    @@ -1659,7 +1677,7 @@ static int selinux_bprm_set_security(struct linux_binprm *bprm)
    tsec->exec_sid = 0;
    } else {
    /* Check for a default transition on this program. */
    - rc = security_transition_sid(tsec->sid, isec->sid,
    + rc = security_transition_sid(csec->action_sid, isec->sid,
    SECCLASS_PROCESS, &newsid);
    if (rc)
    return rc;
    @@ -1670,16 +1688,16 @@ static int selinux_bprm_set_security(struct linux_binprm *bprm)
    ad.u.fs.dentry = bprm->file->f_path.dentry;

    if (bprm->file->f_path.mnt->mnt_flags & MNT_NOSUID)
    - newsid = tsec->sid;
    + newsid = csec->action_sid;

    - if (tsec->sid == newsid) {
    - rc = avc_has_perm(tsec->sid, isec->sid,
    + if (csec->action_sid == newsid) {
    + rc = avc_has_perm(csec->action_sid, isec->sid,
    SECCLASS_FILE, FILE__EXECUTE_NO_TRANS, &ad);
    if (rc)
    return rc;
    } else {
    /* Check permissions for the transition. */
    - rc = avc_has_perm(tsec->sid, newsid,
    + rc = avc_has_perm(csec->action_sid, newsid,
    SECCLASS_PROCESS, PROCESS__TRANSITION, &ad);
    if (rc)
    return rc;
    @@ -1711,11 +1729,11 @@ static int selinux_bprm_secureexec (struct linux_binprm *bprm)
    struct task_security_struct *tsec = current->security;
    int atsecure = 0;

    - if (tsec->osid != tsec->sid) {
    + if (tsec->osid != tsec->victim_sid) {
    /* Enable secure mode for SIDs transitions unless
    the noatsecure permission is granted between
    the two SIDs, i.e. ahp returns 0. */
    - atsecure = avc_has_perm(tsec->osid, tsec->sid,
    + atsecure = avc_has_perm(tsec->osid, tsec->victim_sid,
    SECCLASS_PROCESS,
    PROCESS__NOATSECURE, NULL);
    }
    @@ -1825,6 +1843,7 @@ static inline void flush_unauthorized_files(struct files_struct * files)
    static void selinux_bprm_apply_creds(struct linux_binprm *bprm, int unsafe)
    {
    struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct bprm_security_struct *bsec;
    u32 sid;
    int rc;
    @@ -1832,17 +1851,17 @@ static void selinux_bprm_apply_creds(struct linux_binprm *bprm, int unsafe)
    secondary_ops->bprm_apply_creds(bprm, unsafe);

    tsec = current->security;
    -
    + csec = bprm->cred->security;
    bsec = bprm->security;
    sid = bsec->sid;

    - tsec->osid = tsec->sid;
    + tsec->osid = tsec->victim_sid;
    bsec->unsafe = 0;
    - if (tsec->sid != sid) {
    + if (tsec->victim_sid != sid) {
    /* Check for shared state. If not ok, leave SID
    unchanged and kill. */
    if (unsafe & LSM_UNSAFE_SHARE) {
    - rc = avc_has_perm(tsec->sid, sid, SECCLASS_PROCESS,
    + rc = avc_has_perm(tsec->victim_sid, sid, SECCLASS_PROCESS,
    PROCESS__SHARE, NULL);
    if (rc) {
    bsec->unsafe = 1;
    @@ -1861,7 +1880,9 @@ static void selinux_bprm_apply_creds(struct linux_binprm *bprm, int unsafe)
    return;
    }
    }
    - tsec->sid = sid;
    + if (csec->action_sid == tsec->victim_sid)
    + csec->action_sid = sid;
    + tsec->victim_sid = sid;
    }
    }

    @@ -1883,7 +1904,7 @@ static void selinux_bprm_post_apply_creds(struct linux_binprm *bprm)
    force_sig_specific(SIGKILL, current);
    return;
    }
    - if (tsec->osid == tsec->sid)
    + if (tsec->osid == tsec->victim_sid)
    return;

    /* Close files for which the new task SID is not authorized. */
    @@ -1895,7 +1916,7 @@ static void selinux_bprm_post_apply_creds(struct linux_binprm *bprm)
    signals. This must occur _after_ the task SID has
    been updated so that any kill done after the flush
    will be checked against the new SID. */
    - rc = avc_has_perm(tsec->osid, tsec->sid, SECCLASS_PROCESS,
    + rc = avc_has_perm(tsec->osid, tsec->victim_sid, SECCLASS_PROCESS,
    PROCESS__SIGINH, NULL);
    if (rc) {
    memset(&itimer, 0, sizeof itimer);
    @@ -1922,7 +1943,7 @@ static void selinux_bprm_post_apply_creds(struct linux_binprm *bprm)
    than the default soft limit for cases where the default
    is lower than the hard limit, e.g. RLIMIT_CORE or
    RLIMIT_STACK.*/
    - rc = avc_has_perm(tsec->osid, tsec->sid, SECCLASS_PROCESS,
    + rc = avc_has_perm(tsec->osid, tsec->victim_sid, SECCLASS_PROCESS,
    PROCESS__RLIMITINH, NULL);
    if (rc) {
    for (i = 0; i < RLIM_NLIMITS; i++) {
    @@ -2124,21 +2145,21 @@ static int selinux_inode_init_security(struct inode *inode, struct inode *dir,
    char **name, void **value,
    size_t *len)
    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct inode_security_struct *dsec;
    struct superblock_security_struct *sbsec;
    u32 newsid, clen;
    int rc;
    char *namep = NULL, *context;

    - tsec = current->security;
    + csec = current->cred->security;
    dsec = dir->i_security;
    sbsec = dir->i_sb->s_security;

    - if (tsec->create_sid && sbsec->behavior != SECURITY_FS_USE_MNTPOINT) {
    - newsid = tsec->create_sid;
    + if (csec->create_sid && sbsec->behavior != SECURITY_FS_USE_MNTPOINT) {
    + newsid = csec->create_sid;
    } else {
    - rc = security_transition_sid(tsec->sid, dsec->sid,
    + rc = security_transition_sid(csec->action_sid, dsec->sid,
    inode_mode_to_security_class(inode->i_mode),
    &newsid);
    if (rc) {
    @@ -2297,7 +2318,7 @@ static int selinux_inode_getattr(struct vfsmount *mnt, struct dentry *dentry)

    static int selinux_inode_setxattr(struct dentry *dentry, char *name, void *value, size_t size, int flags)
    {
    - struct task_security_struct *tsec = current->security;
    + struct cred_security_struct *csec = current->cred->security;
    struct inode *inode = dentry->d_inode;
    struct inode_security_struct *isec = inode->i_security;
    struct superblock_security_struct *sbsec;
    @@ -2329,7 +2350,7 @@ static int selinux_inode_setxattr(struct dentry *dentry, char *name, void *value
    AVC_AUDIT_DATA_INIT(&ad,FS);
    ad.u.fs.dentry = dentry;

    - rc = avc_has_perm(tsec->sid, isec->sid, isec->sclass,
    + rc = avc_has_perm(csec->action_sid, isec->sid, isec->sclass,
    FILE__RELABELFROM, &ad);
    if (rc)
    return rc;
    @@ -2338,12 +2359,12 @@ static int selinux_inode_setxattr(struct dentry *dentry, char *name, void *value
    if (rc)
    return rc;

    - rc = avc_has_perm(tsec->sid, newsid, isec->sclass,
    + rc = avc_has_perm(csec->action_sid, newsid, isec->sclass,
    FILE__RELABELTO, &ad);
    if (rc)
    return rc;

    - rc = security_validate_transition(isec->sid, newsid, tsec->sid,
    + rc = security_validate_transition(isec->sid, newsid, csec->action_sid,
    isec->sclass);
    if (rc)
    return rc;
    @@ -2577,8 +2598,9 @@ static int selinux_file_mmap(struct file *file, unsigned long reqprot,
    unsigned long prot, unsigned long flags,
    unsigned long addr, unsigned long addr_only)
    {
    + struct cred_security_struct *csec = current->cred->security;
    int rc = 0;
    - u32 sid = ((struct task_security_struct*)(current->security))->sid;
    + u32 sid = csec->action_sid;

    if (addr < mmap_min_addr)
    rc = avc_has_perm(sid, sid, SECCLASS_MEMPROTECT,
    @@ -2692,7 +2714,7 @@ static int selinux_file_set_fowner(struct file *file)

    tsec = current->security;
    fsec = file->f_security;
    - fsec->fown_sid = tsec->sid;
    + fsec->fown_sid = tsec->victim_sid;

    return 0;
    }
    @@ -2716,7 +2738,7 @@ static int selinux_file_send_sigiotask(struct task_struct *tsk,
    else
    perm = signal_to_av(signum);

    - return avc_has_perm(fsec->fown_sid, tsec->sid,
    + return avc_has_perm(fsec->fown_sid, tsec->victim_sid,
    SECCLASS_PROCESS, perm, NULL);
    }

    @@ -2725,6 +2747,31 @@ static int selinux_file_receive(struct file *file)
    return file_has_perm(current, file, file_to_av(file));
    }

    +/* credential security operations */
    +
    +/*
    + * duplicate the security information attached to a credentials record that is
    + * itself undergoing duplication
    + */
    +static int selinux_cred_dup(struct cred *cred)
    +{
    + cred->security = kmemdup(cred->security,
    + sizeof(struct cred_security_struct),
    + GFP_KERNEL);
    + return cred->security ? 0 : -ENOMEM;
    +}
    +
    +/*
    + * destroy the security information attached to a credentials record
    + * - this is done under RCU, and may not be associated with the task that set it
    + * up
    + */
    +static void selinux_cred_destroy(struct cred *cred)
    +{
    + kfree(cred->security);
    +}
    +
    +
    /* task security operations */

    static int selinux_task_create(unsigned long clone_flags)
    @@ -2751,13 +2798,10 @@ static int selinux_task_alloc_security(struct task_struct *tsk)
    tsec2 = tsk->security;

    tsec2->osid = tsec1->osid;
    - tsec2->sid = tsec1->sid;
    + tsec2->victim_sid = tsec1->victim_sid;

    - /* Retain the exec, fs, key, and sock SIDs across fork */
    + /* Retain the exec SID across fork */
    tsec2->exec_sid = tsec1->exec_sid;
    - tsec2->create_sid = tsec1->create_sid;
    - tsec2->keycreate_sid = tsec1->keycreate_sid;
    - tsec2->sockcreate_sid = tsec1->sockcreate_sid;

    /* Retain ptracer SID across fork, if any.
    This will be reset by the ptrace hook upon any
    @@ -2895,7 +2939,8 @@ static int selinux_task_kill(struct task_struct *p, struct siginfo *info,
    perm = signal_to_av(sig);
    tsec = p->security;
    if (secid)
    - rc = avc_has_perm(secid, tsec->sid, SECCLASS_PROCESS, perm, NULL);
    + rc = avc_has_perm(secid, tsec->victim_sid,
    + SECCLASS_PROCESS, perm, NULL);
    else
    rc = task_has_perm(current, p, perm);
    return rc;
    @@ -2929,8 +2974,8 @@ static void selinux_task_reparent_to_init(struct task_struct *p)
    secondary_ops->task_reparent_to_init(p);

    tsec = p->security;
    - tsec->osid = tsec->sid;
    - tsec->sid = SECINITSID_KERNEL;
    + tsec->osid = tsec->victim_sid;
    + tsec->victim_sid = SECINITSID_KERNEL;
    return;
    }

    @@ -2940,7 +2985,7 @@ static void selinux_task_to_inode(struct task_struct *p,
    struct task_security_struct *tsec = p->security;
    struct inode_security_struct *isec = inode->i_security;

    - isec->sid = tsec->sid;
    + isec->sid = tsec->victim_sid;
    isec->initialized = 1;
    return;
    }
    @@ -3165,11 +3210,11 @@ static int socket_has_perm(struct task_struct *task, struct socket *sock,
    u32 perms)
    {
    struct inode_security_struct *isec;
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct avc_audit_data ad;
    int err = 0;

    - tsec = task->security;
    + csec = task->cred->security;
    isec = SOCK_INODE(sock)->i_security;

    if (isec->sid == SECINITSID_KERNEL)
    @@ -3177,7 +3222,8 @@ static int socket_has_perm(struct task_struct *task, struct socket *sock,

    AVC_AUDIT_DATA_INIT(&ad,NET);
    ad.u.net.sk = sock->sk;
    - err = avc_has_perm(tsec->sid, isec->sid, isec->sclass, perms, &ad);
    + err = avc_has_perm(csec->action_sid, isec->sid, isec->sclass, perms,
    + &ad);

    out:
    return err;
    @@ -3187,15 +3233,15 @@ static int selinux_socket_create(int family, int type,
    int protocol, int kern)
    {
    int err = 0;
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    u32 newsid;

    if (kern)
    goto out;

    - tsec = current->security;
    - newsid = tsec->sockcreate_sid ? : tsec->sid;
    - err = avc_has_perm(tsec->sid, newsid,
    + csec = current->cred->security;
    + newsid = csec->sockcreate_sid ? : csec->action_sid;
    + err = avc_has_perm(csec->action_sid, newsid,
    socket_type_to_security_class(family, type,
    protocol), SOCKET__CREATE, NULL);

    @@ -3208,14 +3254,14 @@ static int selinux_socket_post_create(struct socket *sock, int family,
    {
    int err = 0;
    struct inode_security_struct *isec;
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct sk_security_struct *sksec;
    u32 newsid;

    isec = SOCK_INODE(sock)->i_security;

    - tsec = current->security;
    - newsid = tsec->sockcreate_sid ? : tsec->sid;
    + csec = current->cred->security;
    + newsid = csec->sockcreate_sid ? : csec->action_sid;
    isec->sclass = socket_type_to_security_class(family, type, protocol);
    isec->sid = kern ? SECINITSID_KERNEL : newsid;
    isec->initialized = 1;
    @@ -4029,7 +4075,7 @@ static int ipc_alloc_security(struct task_struct *task,
    struct kern_ipc_perm *perm,
    u16 sclass)
    {
    - struct task_security_struct *tsec = task->security;
    + struct cred_security_struct *csec = task->cred->security;
    struct ipc_security_struct *isec;

    isec = kzalloc(sizeof(struct ipc_security_struct), GFP_KERNEL);
    @@ -4038,7 +4084,7 @@ static int ipc_alloc_security(struct task_struct *task,

    isec->sclass = sclass;
    isec->ipc_perm = perm;
    - isec->sid = tsec->sid;
    + isec->sid = csec->action_sid;
    perm->security = isec;

    return 0;
    @@ -4077,17 +4123,18 @@ static void msg_msg_free_security(struct msg_msg *msg)
    static int ipc_has_perm(struct kern_ipc_perm *ipc_perms,
    u32 perms)
    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct ipc_security_struct *isec;
    struct avc_audit_data ad;

    - tsec = current->security;
    + csec = current->cred->security;
    isec = ipc_perms->security;

    AVC_AUDIT_DATA_INIT(&ad, IPC);
    ad.u.ipc_id = ipc_perms->key;

    - return avc_has_perm(tsec->sid, isec->sid, isec->sclass, perms, &ad);
    + return avc_has_perm(csec->action_sid, isec->sid, isec->sclass, perms,
    + &ad);
    }

    static int selinux_msg_msg_alloc_security(struct msg_msg *msg)
    @@ -4103,7 +4150,7 @@ static void selinux_msg_msg_free_security(struct msg_msg *msg)
    /* message queue security operations */
    static int selinux_msg_queue_alloc_security(struct msg_queue *msq)
    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct ipc_security_struct *isec;
    struct avc_audit_data ad;
    int rc;
    @@ -4112,13 +4159,13 @@ static int selinux_msg_queue_alloc_security(struct msg_queue *msq)
    if (rc)
    return rc;

    - tsec = current->security;
    + csec = current->cred->security;
    isec = msq->q_perm.security;

    AVC_AUDIT_DATA_INIT(&ad, IPC);
    ad.u.ipc_id = msq->q_perm.key;

    - rc = avc_has_perm(tsec->sid, isec->sid, SECCLASS_MSGQ,
    + rc = avc_has_perm(csec->action_sid, isec->sid, SECCLASS_MSGQ,
    MSGQ__CREATE, &ad);
    if (rc) {
    ipc_free_security(&msq->q_perm);
    @@ -4134,17 +4181,17 @@ static void selinux_msg_queue_free_security(struct msg_queue *msq)

    static int selinux_msg_queue_associate(struct msg_queue *msq, int msqflg)
    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct ipc_security_struct *isec;
    struct avc_audit_data ad;

    - tsec = current->security;
    + csec = current->cred->security;
    isec = msq->q_perm.security;

    AVC_AUDIT_DATA_INIT(&ad, IPC);
    ad.u.ipc_id = msq->q_perm.key;

    - return avc_has_perm(tsec->sid, isec->sid, SECCLASS_MSGQ,
    + return avc_has_perm(csec->action_sid, isec->sid, SECCLASS_MSGQ,
    MSGQ__ASSOCIATE, &ad);
    }

    @@ -4178,13 +4225,13 @@ static int selinux_msg_queue_msgctl(struct msg_queue *msq, int cmd)

    static int selinux_msg_queue_msgsnd(struct msg_queue *msq, struct msg_msg *msg, int msqflg)
    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct ipc_security_struct *isec;
    struct msg_security_struct *msec;
    struct avc_audit_data ad;
    int rc;

    - tsec = current->security;
    + csec = current->cred->security;
    isec = msq->q_perm.security;
    msec = msg->security;

    @@ -4196,7 +4243,7 @@ static int selinux_msg_queue_msgsnd(struct msg_queue *msq, struct msg_msg *msg,
    * Compute new sid based on current process and
    * message queue this message will be stored in
    */
    - rc = security_transition_sid(tsec->sid,
    + rc = security_transition_sid(csec->action_sid,
    isec->sid,
    SECCLASS_MSG,
    &msec->sid);
    @@ -4208,11 +4255,11 @@ static int selinux_msg_queue_msgsnd(struct msg_queue *msq, struct msg_msg *msg,
    ad.u.ipc_id = msq->q_perm.key;

    /* Can this process write to the queue? */
    - rc = avc_has_perm(tsec->sid, isec->sid, SECCLASS_MSGQ,
    + rc = avc_has_perm(csec->action_sid, isec->sid, SECCLASS_MSGQ,
    MSGQ__WRITE, &ad);
    if (!rc)
    /* Can this process send the message */
    - rc = avc_has_perm(tsec->sid, msec->sid,
    + rc = avc_has_perm(csec->action_sid, msec->sid,
    SECCLASS_MSG, MSG__SEND, &ad);
    if (!rc)
    /* Can the message be put in the queue? */
    @@ -4239,10 +4286,10 @@ static int selinux_msg_queue_msgrcv(struct msg_queue *msq, struct msg_msg *msg,
    AVC_AUDIT_DATA_INIT(&ad, IPC);
    ad.u.ipc_id = msq->q_perm.key;

    - rc = avc_has_perm(tsec->sid, isec->sid,
    + rc = avc_has_perm(tsec->victim_sid, isec->sid,
    SECCLASS_MSGQ, MSGQ__READ, &ad);
    if (!rc)
    - rc = avc_has_perm(tsec->sid, msec->sid,
    + rc = avc_has_perm(tsec->victim_sid, msec->sid,
    SECCLASS_MSG, MSG__RECEIVE, &ad);
    return rc;
    }
    @@ -4250,7 +4297,7 @@ static int selinux_msg_queue_msgrcv(struct msg_queue *msq, struct msg_msg *msg,
    /* Shared Memory security operations */
    static int selinux_shm_alloc_security(struct shmid_kernel *shp)
    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct ipc_security_struct *isec;
    struct avc_audit_data ad;
    int rc;
    @@ -4259,13 +4306,13 @@ static int selinux_shm_alloc_security(struct shmid_kernel *shp)
    if (rc)
    return rc;

    - tsec = current->security;
    + csec = current->cred->security;
    isec = shp->shm_perm.security;

    AVC_AUDIT_DATA_INIT(&ad, IPC);
    ad.u.ipc_id = shp->shm_perm.key;

    - rc = avc_has_perm(tsec->sid, isec->sid, SECCLASS_SHM,
    + rc = avc_has_perm(csec->action_sid, isec->sid, SECCLASS_SHM,
    SHM__CREATE, &ad);
    if (rc) {
    ipc_free_security(&shp->shm_perm);
    @@ -4281,17 +4328,17 @@ static void selinux_shm_free_security(struct shmid_kernel *shp)

    static int selinux_shm_associate(struct shmid_kernel *shp, int shmflg)
    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct ipc_security_struct *isec;
    struct avc_audit_data ad;

    - tsec = current->security;
    + csec = current->cred->security;
    isec = shp->shm_perm.security;

    AVC_AUDIT_DATA_INIT(&ad, IPC);
    ad.u.ipc_id = shp->shm_perm.key;

    - return avc_has_perm(tsec->sid, isec->sid, SECCLASS_SHM,
    + return avc_has_perm(csec->action_sid, isec->sid, SECCLASS_SHM,
    SHM__ASSOCIATE, &ad);
    }

    @@ -4349,7 +4396,7 @@ static int selinux_shm_shmat(struct shmid_kernel *shp,
    /* Semaphore security operations */
    static int selinux_sem_alloc_security(struct sem_array *sma)
    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct ipc_security_struct *isec;
    struct avc_audit_data ad;
    int rc;
    @@ -4358,13 +4405,13 @@ static int selinux_sem_alloc_security(struct sem_array *sma)
    if (rc)
    return rc;

    - tsec = current->security;
    + csec = current->cred->security;
    isec = sma->sem_perm.security;

    AVC_AUDIT_DATA_INIT(&ad, IPC);
    ad.u.ipc_id = sma->sem_perm.key;

    - rc = avc_has_perm(tsec->sid, isec->sid, SECCLASS_SEM,
    + rc = avc_has_perm(csec->action_sid, isec->sid, SECCLASS_SEM,
    SEM__CREATE, &ad);
    if (rc) {
    ipc_free_security(&sma->sem_perm);
    @@ -4380,17 +4427,17 @@ static void selinux_sem_free_security(struct sem_array *sma)

    static int selinux_sem_associate(struct sem_array *sma, int semflg)
    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct ipc_security_struct *isec;
    struct avc_audit_data ad;

    - tsec = current->security;
    + csec = current->cred->security;
    isec = sma->sem_perm.security;

    AVC_AUDIT_DATA_INIT(&ad, IPC);
    ad.u.ipc_id = sma->sem_perm.key;

    - return avc_has_perm(tsec->sid, isec->sid, SECCLASS_SEM,
    + return avc_has_perm(csec->action_sid, isec->sid, SECCLASS_SEM,
    SEM__ASSOCIATE, &ad);
    }

    @@ -4506,6 +4553,7 @@ static int selinux_getprocattr(struct task_struct *p,
    char *name, char **value)
    {
    struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    u32 sid;
    int error;
    unsigned len;
    @@ -4517,22 +4565,25 @@ static int selinux_getprocattr(struct task_struct *p,
    }

    tsec = p->security;
    + rcu_read_lock();
    + csec = task_cred(p)->security;

    if (!strcmp(name, "current"))
    - sid = tsec->sid;
    + sid = tsec->victim_sid;
    else if (!strcmp(name, "prev"))
    sid = tsec->osid;
    else if (!strcmp(name, "exec"))
    sid = tsec->exec_sid;
    else if (!strcmp(name, "fscreate"))
    - sid = tsec->create_sid;
    + sid = csec->create_sid;
    else if (!strcmp(name, "keycreate"))
    - sid = tsec->keycreate_sid;
    + sid = csec->keycreate_sid;
    else if (!strcmp(name, "sockcreate"))
    - sid = tsec->sockcreate_sid;
    + sid = csec->sockcreate_sid;
    else
    - return -EINVAL;
    + goto invalid;

    + rcu_read_unlock();
    if (!sid)
    return 0;

    @@ -4540,13 +4591,20 @@ static int selinux_getprocattr(struct task_struct *p,
    if (error)
    return error;
    return len;
    +
    +invalid:
    + rcu_read_unlock();
    + return -EINVAL;
    }

    static int selinux_setprocattr(struct task_struct *p,
    char *name, void *value, size_t size)
    {
    struct task_security_struct *tsec;
    - u32 sid = 0;
    + struct cred_security_struct *csec;
    + struct av_decision avd;
    + struct cred *cred;
    + u32 sid = 0, perm;
    int error;
    char *str = value;

    @@ -4562,17 +4620,19 @@ static int selinux_setprocattr(struct task_struct *p,
    * above restriction is ever removed.
    */
    if (!strcmp(name, "exec"))
    - error = task_has_perm(current, p, PROCESS__SETEXEC);
    + perm = PROCESS__SETEXEC;
    else if (!strcmp(name, "fscreate"))
    - error = task_has_perm(current, p, PROCESS__SETFSCREATE);
    + perm = PROCESS__SETFSCREATE;
    else if (!strcmp(name, "keycreate"))
    - error = task_has_perm(current, p, PROCESS__SETKEYCREATE);
    + perm = PROCESS__SETKEYCREATE;
    else if (!strcmp(name, "sockcreate"))
    - error = task_has_perm(current, p, PROCESS__SETSOCKCREATE);
    + perm = PROCESS__SETSOCKCREATE;
    else if (!strcmp(name, "current"))
    - error = task_has_perm(current, p, PROCESS__SETCURRENT);
    + perm = PROCESS__SETCURRENT;
    else
    - error = -EINVAL;
    + return -EINVAL;
    +
    + error = task_has_perm(current, p, perm);
    if (error)
    return error;

    @@ -4594,20 +4654,37 @@ static int selinux_setprocattr(struct task_struct *p,
    checks and may_create for the file creation checks. The
    operation will then fail if the context is not permitted. */
    tsec = p->security;
    - if (!strcmp(name, "exec"))
    + csec = p->cred->security;
    + switch (perm) {
    + case PROCESS__SETEXEC:
    tsec->exec_sid = sid;
    - else if (!strcmp(name, "fscreate"))
    - tsec->create_sid = sid;
    - else if (!strcmp(name, "keycreate")) {
    + break;
    +
    + case PROCESS__SETKEYCREATE:
    error = may_create_key(sid, p);
    if (error)
    return error;
    - tsec->keycreate_sid = sid;
    - } else if (!strcmp(name, "sockcreate"))
    - tsec->sockcreate_sid = sid;
    - else if (!strcmp(name, "current")) {
    - struct av_decision avd;
    + case PROCESS__SETFSCREATE:
    + case PROCESS__SETSOCKCREATE:
    + cred = dup_cred(current->cred);
    + if (!cred)
    + return -ENOMEM;
    + csec = cred->security;
    + switch (perm) {
    + case PROCESS__SETKEYCREATE:
    + csec->keycreate_sid = sid;
    + break;
    + case PROCESS__SETFSCREATE:
    + csec->create_sid = sid;
    + break;
    + case PROCESS__SETSOCKCREATE:
    + csec->sockcreate_sid = sid;
    + break;
    + }
    + set_current_cred(cred);
    + break;

    + case PROCESS__SETCURRENT:
    if (sid == 0)
    return -EINVAL;

    @@ -4626,11 +4703,16 @@ static int selinux_setprocattr(struct task_struct *p,
    }

    /* Check permissions for the transition. */
    - error = avc_has_perm(tsec->sid, sid, SECCLASS_PROCESS,
    + error = avc_has_perm(csec->action_sid, sid, SECCLASS_PROCESS,
    PROCESS__DYNTRANSITION, NULL);
    if (error)
    return error;

    + cred = dup_cred(current->cred);
    + if (!cred)
    + return -ENOMEM;
    + csec = cred->security;
    +
    /* Check for ptracing, and update the task SID if ok.
    Otherwise, leave SID unchanged and fail. */
    task_lock(p);
    @@ -4638,20 +4720,25 @@ static int selinux_setprocattr(struct task_struct *p,
    error = avc_has_perm_noaudit(tsec->ptrace_sid, sid,
    SECCLASS_PROCESS,
    PROCESS__PTRACE, 0, &avd);
    - if (!error)
    - tsec->sid = sid;
    + if (!error) {
    + csec->action_sid = tsec->victim_sid = sid;
    + }
    task_unlock(p);
    avc_audit(tsec->ptrace_sid, sid, SECCLASS_PROCESS,
    PROCESS__PTRACE, &avd, error, NULL);
    - if (error)
    + if (error) {
    + put_cred(cred);
    return error;
    + }
    } else {
    - tsec->sid = sid;
    + csec->action_sid = tsec->victim_sid = sid;
    task_unlock(p);
    }
    - }
    - else
    + set_current_cred(cred);
    + break;
    + default:
    return -EINVAL;
    + }

    return size;
    }
    @@ -4671,18 +4758,21 @@ static void selinux_release_secctx(char *secdata, u32 seclen)
    static int selinux_key_alloc(struct key *k, struct task_struct *tsk,
    unsigned long flags)
    {
    - struct task_security_struct *tsec = tsk->security;
    + struct cred_security_struct *csec;
    struct key_security_struct *ksec;

    ksec = kzalloc(sizeof(struct key_security_struct), GFP_KERNEL);
    if (!ksec)
    return -ENOMEM;

    + rcu_read_lock();
    + csec = task_cred(tsk)->security;
    ksec->obj = k;
    - if (tsec->keycreate_sid)
    - ksec->sid = tsec->keycreate_sid;
    + if (csec->keycreate_sid)
    + ksec->sid = csec->keycreate_sid;
    else
    - ksec->sid = tsec->sid;
    + ksec->sid = csec->action_sid;
    + rcu_read_unlock();
    k->security = ksec;

    return 0;
    @@ -4697,17 +4787,13 @@ static void selinux_key_free(struct key *k)
    }

    static int selinux_key_permission(key_ref_t key_ref,
    - struct task_struct *ctx,
    - key_perm_t perm)
    + struct task_struct *ctx,
    + key_perm_t perm)
    {
    struct key *key;
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    struct key_security_struct *ksec;
    -
    - key = key_ref_to_ptr(key_ref);
    -
    - tsec = ctx->security;
    - ksec = key->security;
    + u32 action_sid;

    /* if no specific permissions are requested, we skip the
    permission check. No serious, additional covert channels
    @@ -4715,7 +4801,16 @@ static int selinux_key_permission(key_ref_t key_ref,
    if (perm == 0)
    return 0;

    - return avc_has_perm(tsec->sid, ksec->sid,
    + key = key_ref_to_ptr(key_ref);
    +
    + rcu_read_lock();
    + csec = task_cred(ctx)->security;
    + action_sid = csec->action_sid;
    + rcu_read_unlock();
    +
    + ksec = key->security;
    +
    + return avc_has_perm(action_sid, ksec->sid,
    SECCLASS_KEY, perm, NULL);
    }

    @@ -4790,6 +4885,9 @@ static struct security_operations selinux_ops = {
    .file_send_sigiotask = selinux_file_send_sigiotask,
    .file_receive = selinux_file_receive,

    + .cred_dup = selinux_cred_dup,
    + .cred_destroy = selinux_cred_destroy,
    +
    .task_create = selinux_task_create,
    .task_alloc_security = selinux_task_alloc_security,
    .task_free_security = selinux_task_free_security,
    @@ -4898,6 +4996,17 @@ static struct security_operations selinux_ops = {
    #endif
    };

    +/*
    + * initial security credentials
    + * - attached to init_cred which is never released
    + */
    +static struct cred_security_struct init_cred_sec = {
    + .action_sid = SECINITSID_KERNEL,
    + .create_sid = SECINITSID_UNLABELED,
    + .keycreate_sid = SECINITSID_UNLABELED,
    + .sockcreate_sid = SECINITSID_UNLABELED,
    +};
    +
    static __init int selinux_init(void)
    {
    struct task_security_struct *tsec;
    @@ -4909,11 +5018,15 @@ static __init int selinux_init(void)

    printk(KERN_INFO "SELinux: Initializing.\n");

    + /* Set the security state for the initial credentials */
    + init_cred.security = &init_cred_sec;
    + BUG_ON(current->cred != &init_cred);
    +
    /* Set the security state for the initial task. */
    if (task_alloc_security(current))
    panic("SELinux: Failed to initialize initial task.\n");
    tsec = current->security;
    - tsec->osid = tsec->sid = SECINITSID_KERNEL;
    + tsec->osid = tsec->victim_sid = SECINITSID_KERNEL;

    sel_inode_cache = kmem_cache_create("selinux_inode_security",
    sizeof(struct inode_security_struct),
    diff --git a/security/selinux/include/objsec.h b/security/selinux/include/objsec.h
    index 91b88f0..a1dbc1c 100644
    --- a/security/selinux/include/objsec.h
    +++ b/security/selinux/include/objsec.h
    @@ -27,14 +27,22 @@
    #include "flask.h"
    #include "avc.h"

    +/*
    + * the security parameters associated with the credentials record structure
    + * (struct cred::security)
    + */
    +struct cred_security_struct {
    + u32 action_sid; /* perform action as SID */
    + u32 create_sid; /* filesystem object creation as SID */
    + u32 keycreate_sid; /* key creation as SID */
    + u32 sockcreate_sid; /* socket creation as SID */
    +};
    +
    struct task_security_struct {
    struct task_struct *task; /* back pointer to task object */
    u32 osid; /* SID prior to last execve */
    - u32 sid; /* current SID */
    + u32 victim_sid; /* current SID affecting victimisation of this task */
    u32 exec_sid; /* exec SID */
    - u32 create_sid; /* fscreate SID */
    - u32 keycreate_sid; /* keycreate SID */
    - u32 sockcreate_sid; /* fscreate SID */
    u32 ptrace_sid; /* SID of ptrace parent */
    };

    diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
    index c9e92da..9c6737f 100644
    --- a/security/selinux/selinuxfs.c
    +++ b/security/selinux/selinuxfs.c
    @@ -77,13 +77,13 @@ extern void selnl_notify_setenforce(int val);
    static int task_has_security(struct task_struct *tsk,
    u32 perms)
    {
    - struct task_security_struct *tsec;
    + struct cred_security_struct *csec;

    - tsec = tsk->security;
    - if (!tsec)
    + csec = tsk->cred->security;
    + if (!csec)
    return -EACCES;

    - return avc_has_perm(tsec->sid, SECINITSID_SECURITY,
    + return avc_has_perm(csec->action_sid, SECINITSID_SECURITY,
    SECCLASS_SECURITY, perms, NULL);
    }

    diff --git a/security/selinux/xfrm.c b/security/selinux/xfrm.c
    index ba715f4..902d302 100644
    --- a/security/selinux/xfrm.c
    +++ b/security/selinux/xfrm.c
    @@ -240,7 +240,7 @@ static int selinux_xfrm_sec_ctx_alloc(struct xfrm_sec_ctx **ctxp,
    /*
    * Does the subject have permission to set security context?
    */
    - rc = avc_has_perm(tsec->sid, ctx->ctx_sid,
    + rc = avc_has_perm(tsec->action_sid, ctx->ctx_sid,
    SECCLASS_ASSOCIATION,
    ASSOCIATION__SETCONTEXT, NULL);
    if (rc)
    @@ -341,7 +341,7 @@ int selinux_xfrm_policy_delete(struct xfrm_policy *xp)
    int rc = 0;

    if (ctx)
    - rc = avc_has_perm(tsec->sid, ctx->ctx_sid,
    + rc = avc_has_perm(tsec->action_sid, ctx->ctx_sid,
    SECCLASS_ASSOCIATION,
    ASSOCIATION__SETCONTEXT, NULL);

    @@ -383,7 +383,7 @@ int selinux_xfrm_state_delete(struct xfrm_state *x)
    int rc = 0;

    if (ctx)
    - rc = avc_has_perm(tsec->sid, ctx->ctx_sid,
    + rc = avc_has_perm(tsec->action_sid, ctx->ctx_sid,
    SECCLASS_ASSOCIATION,
    ASSOCIATION__SETCONTEXT, NULL);


    -
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  3. [PATCH 19/24] AFS: Add TestSetPageError()

    Add a TestSetPageError() macro to the suite of page flag manipulators. This
    can be used by AFS to prevent over-excision of rejected writes from the page
    cache.

    Signed-off-by: David Howells
    ---

    include/linux/page-flags.h | 1 +
    1 files changed, 1 insertions(+), 0 deletions(-)

    diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
    index eaf9854..b59506b 100644
    --- a/include/linux/page-flags.h
    +++ b/include/linux/page-flags.h
    @@ -130,6 +130,7 @@
    #define PageError(page) test_bit(PG_error, &(page)->flags)
    #define SetPageError(page) set_bit(PG_error, &(page)->flags)
    #define ClearPageError(page) clear_bit(PG_error, &(page)->flags)
    +#define TestSetPageError(page) test_and_set_bit(PG_error, &(page)->flags)

    #define PageReferenced(page) test_bit(PG_referenced, &(page)->flags)
    #define SetPageReferenced(page) set_bit(PG_referenced, &(page)->flags)


  4. [PATCH 18/24] NFS: Display local caching state

    Display the local caching state in /proc/fs/nfsfs/volumes.
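
    With the extra FSC column, a /proc/fs/nfsfs/volumes listing would render
    roughly as below. This is a hedged user-space sketch reusing the seq_printf()
    format string from the patch; all sample values (version, address, port,
    device, fsid) are invented for illustration.

    ```c
    #include <stdio.h>

    int main(void)
    {
    	/* Header and one volume line, using the formats from the patch.
    	 * Every value below is made up for illustration only. */
    	printf("NV SERVER   PORT DEV     FSID              FSC\n");
    	printf("v%d %02x%02x%02x%02x  %4hx %-7s %-17s %s\n",
    	       3,                    /* NFS version */
    	       192, 168, 0, 1,      /* NIPQUAD of the server address */
    	       (unsigned short)2049,/* port */
    	       "0:15",              /* device */
    	       "1:2",               /* fsid major:minor */
    	       "yes");              /* nfs_server_fscache_state(server) */
    	return 0;
    }
    ```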

    Signed-off-by: David Howells
    ---

    fs/nfs/client.c | 7 ++++---
    fs/nfs/fscache.h | 12 ++++++++++++
    2 files changed, 16 insertions(+), 3 deletions(-)

    diff --git a/fs/nfs/client.c b/fs/nfs/client.c
    index 0de4db4..d350668 100644
    --- a/fs/nfs/client.c
    +++ b/fs/nfs/client.c
    @@ -1319,7 +1319,7 @@ static int nfs_volume_list_show(struct seq_file *m, void *v)

    /* display header on line 1 */
    if (v == &nfs_volume_list) {
    - seq_puts(m, "NV SERVER PORT DEV FSID\n");
    + seq_puts(m, "NV SERVER PORT DEV FSID FSC\n");
    return 0;
    }
    /* display one transport per line on subsequent lines */
    @@ -1333,12 +1333,13 @@ static int nfs_volume_list_show(struct seq_file *m, void *v)
    (unsigned long long) server->fsid.major,
    (unsigned long long) server->fsid.minor);

    - seq_printf(m, "v%d %02x%02x%02x%02x %4hx %-7s %-17s\n",
    + seq_printf(m, "v%d %02x%02x%02x%02x %4hx %-7s %-17s %s\n",
    clp->cl_nfsversion,
    NIPQUAD(clp->cl_addr.sin_addr),
    ntohs(clp->cl_addr.sin_port),
    dev,
    - fsid);
    + fsid,
    + nfs_server_fscache_state(server));

    return 0;
    }
    diff --git a/fs/nfs/fscache.h b/fs/nfs/fscache.h
    index 44bb0d1..77f3450 100644
    --- a/fs/nfs/fscache.h
    +++ b/fs/nfs/fscache.h
    @@ -56,6 +56,17 @@ extern void __nfs_fscache_invalidate_page(struct page *, struct inode *);
    extern int nfs_fscache_release_page(struct page *, gfp_t);

    /*
    + * indicate the client caching state as readable text
    + */
    +static inline const char *nfs_server_fscache_state(struct nfs_server *server)
    +{
    + if (server->nfs_client->fscache &&
    + (server->options & NFS_OPTION_FSCACHE))
    + return "yes";
    + return "no ";
    +}
    +
    +/*
    * release the caching state associated with a page if undergoing complete page
    * invalidation
    */
    @@ -110,6 +121,7 @@ static inline void nfs_fscache_unregister(void) {}
    static inline void nfs_fscache_get_client_cookie(struct nfs_client *clp) {}
    static inline void nfs4_fscache_get_client_cookie(struct nfs_client *clp) {}
    static inline void nfs_fscache_release_client_cookie(struct nfs_client *clp) {}
    +static inline const char *nfs_server_fscache_state(struct nfs_server *server) { return "no "; }

    static inline void nfs_fscache_init_fh_cookie(struct inode *inode) {}
    static inline void nfs_fscache_enable_fh_cookie(struct inode *inode) {}


  5. [PATCH 04/24] CRED: Move the effective capabilities into the cred struct

    Move the effective capabilities mask from the task struct into the credentials
    record.

    Note that the effective capabilities mask in the cred struct shadows the one
    in the task_struct because a thread can have its capability masks changed by
    another thread. The shadowing is performed by update_current_cred(), which
    is invoked on entry to any system call that might need it.
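    The copy-on-write shadowing described above can be sketched in user space.
    This is a minimal model, not the kernel code: the struct layout, refcounting
    and function names are simplified stand-ins, and malloc/-1 substitute for
    kmemdup/-ENOMEM. It only shows the pattern: compare the shadow copy against
    the externally-writable mask, and duplicate the cred record when it is stale.

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Simplified stand-ins for the kernel structures. */
    struct cred {
        int usage;                    /* reference count */
        unsigned long cap_effective;  /* shadow of task->_cap_effective */
    };

    struct task {
        struct cred *cred;
        unsigned long _cap_effective; /* may be changed by another thread */
    };

    /* Refresh the task's cred record if the shadowed mask went stale. */
    static int update_current_cred(struct task *t)
    {
        struct cred *cred;

        if (t->cred->cap_effective == t->_cap_effective)
            return 0;             /* shadow already up to date */

        cred = malloc(sizeof(*cred));
        if (!cred)
            return -1;            /* -ENOMEM in the kernel */
        memcpy(cred, t->cred, sizeof(*cred));
        cred->usage = 1;
        cred->cap_effective = t->_cap_effective;

        if (--t->cred->usage == 0)  /* drop our ref on the old record */
            free(t->cred);
        t->cred = cred;
        return 0;
    }

    int main(void)
    {
        struct cred *init = malloc(sizeof(*init));
        init->usage = 1;
        init->cap_effective = 0xff;

        struct task t = { .cred = init, ._cap_effective = 0xff };

        assert(update_current_cred(&t) == 0);
        assert(t.cred == init);       /* up to date: no copy made */

        t._cap_effective = 0x0f;      /* "another thread" drops caps */
        assert(update_current_cred(&t) == 0);
        assert(t.cred->cap_effective == 0x0f); /* fresh copy installed */

        printf("ok\n");
        free(t.cred);
        return 0;
    }
    ```

    The point of the comparison-first structure is that the common case (nothing
    changed) takes no allocation and no locking; only a genuinely stale record
    pays for a duplicate.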

    Signed-off-by: David Howells
    ---

    fs/buffer.c | 3 +++
    fs/ioprio.c | 3 +++
    fs/open.c | 27 +++++++++------------------
    fs/proc/array.c | 2 +-
    fs/readdir.c | 3 +++
    include/linux/cred.h | 2 ++
    include/linux/init_task.h | 2 +-
    include/linux/sched.h | 2 +-
    ipc/msg.c | 3 +++
    ipc/sem.c | 3 +++
    ipc/shm.c | 3 +++
    kernel/acct.c | 3 +++
    kernel/capability.c | 3 +++
    kernel/compat.c | 3 +++
    kernel/cred.c | 36 +++++++++++++++++++++++++++++-------
    kernel/exit.c | 2 ++
    kernel/fork.c | 6 +++++-
    kernel/futex.c | 3 +++
    kernel/futex_compat.c | 3 +++
    kernel/kexec.c | 3 +++
    kernel/module.c | 6 ++++++
    kernel/ptrace.c | 3 +++
    kernel/sched.c | 9 +++++++++
    kernel/signal.c | 6 ++++++
    kernel/sys.c | 39 +++++++++++++++++++++++++++++++++++++++
    kernel/sysctl.c | 3 +++
    kernel/time.c | 9 +++++++++
    kernel/uid16.c | 3 +++
    mm/mempolicy.c | 6 ++++++
    mm/migrate.c | 3 +++
    mm/mlock.c | 4 ++++
    mm/mmap.c | 3 +++
    mm/mremap.c | 3 +++
    mm/oom_kill.c | 9 +++++++--
    mm/swapfile.c | 6 ++++++
    net/compat.c | 6 ++++++
    net/socket.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
    security/commoncap.c | 32 +++++++++++++++++---------------
    security/dummy.c | 22 ++++++++++++++++++----
    39 files changed, 282 insertions(+), 50 deletions(-)

    diff --git a/fs/buffer.c b/fs/buffer.c
    index 0e5ec37..9aabf79 100644
    --- a/fs/buffer.c
    +++ b/fs/buffer.c
    @@ -2909,6 +2909,9 @@ asmlinkage long sys_bdflush(int func, long data)
    {
    static int msg_count;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (!capable(CAP_SYS_ADMIN))
    return -EPERM;

    diff --git a/fs/ioprio.c b/fs/ioprio.c
    index 10d2c21..d32b7b7 100644
    --- a/fs/ioprio.c
    +++ b/fs/ioprio.c
    @@ -63,6 +63,9 @@ asmlinkage long sys_ioprio_set(int which, int who, int ioprio)
    struct pid *pgrp;
    int ret;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    switch (class) {
    case IOPRIO_CLASS_RT:
    if (!capable(CAP_SYS_ADMIN))
    diff --git a/fs/open.c b/fs/open.c
    index 0c05863..f765ec5 100644
    --- a/fs/open.c
    +++ b/fs/open.c
    @@ -450,7 +450,7 @@ out:
    asmlinkage long sys_faccessat(int dfd, const char __user *filename, int mode)
    {
    struct nameidata nd;
    - kernel_cap_t old_cap;
    + kernel_cap_t old_cap, want_cap = CAP_EMPTY_SET;
    struct cred *cred;
    int res;

    @@ -461,33 +461,26 @@ asmlinkage long sys_faccessat(int dfd, const char __user *filename, int mode)
    if (res < 0)
    return res;

    - old_cap = current->cap_effective;
    + /* Clear the capabilities if we switch to a non-root user */
    + if (!current->uid)
    + want_cap = current->cap_permitted;
    +
    + old_cap = current->cred->cap_effective;

    if (current->cred->uid != current->uid ||
    - current->cred->gid != current->gid) {
    + current->cred->gid != current->gid ||
    + current->cred->cap_effective != want_cap) {
    cred = dup_cred(current->cred);
    if (!cred)
    return -ENOMEM;

    change_fsuid(cred, current->uid);
    change_fsgid(cred, current->gid);
    + change_cap(cred, want_cap);
    } else {
    cred = get_current_cred();
    }

    - /*
    - * Clear the capabilities if we switch to a non-root user
    - *
    - * FIXME: There is a race here against sys_capset. The
    - * capabilities can change yet we will restore the old
    - * value below. We should hold task_capabilities_lock,
    - * but we cannot because user_path_walk can sleep.
    - */
    - if (current->uid)
    - cap_clear(current->cap_effective);
    - else
    - current->cap_effective = current->cap_permitted;
    -
    cred = __set_current_cred(cred);
    res = __user_walk_fd(dfd, filename, LOOKUP_FOLLOW|LOOKUP_ACCESS, &nd);
    if (res)
    @@ -506,8 +499,6 @@ out_path_release:
    path_release(&nd);
    out:
    set_current_cred(cred);
    - current->cap_effective = old_cap;
    -
    return res;
    }

    diff --git a/fs/proc/array.c b/fs/proc/array.c
    index dc2f83a..1a406c7 100644
    --- a/fs/proc/array.c
    +++ b/fs/proc/array.c
    @@ -286,7 +286,7 @@ static inline char *task_cap(struct task_struct *p, char *buffer)
    "CapEff:\t%016x\n",
    cap_t(p->cap_inheritable),
    cap_t(p->cap_permitted),
    - cap_t(p->cap_effective));
    + cap_t(p->_cap_effective));
    }

    static inline char *task_context_switch_counts(struct task_struct *p,
    diff --git a/fs/readdir.c b/fs/readdir.c
    index 57e6aa9..33c69ac 100644
    --- a/fs/readdir.c
    +++ b/fs/readdir.c
    @@ -103,6 +103,9 @@ asmlinkage long old_readdir(unsigned int fd, struct old_linux_dirent __user * di
    struct file * file;
    struct readdir_callback buf;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    error = -EBADF;
    file = fget(fd);
    if (!file)
    diff --git a/include/linux/cred.h b/include/linux/cred.h
    index 7e35b2f..78924d5 100644
    --- a/include/linux/cred.h
    +++ b/include/linux/cred.h
    @@ -24,6 +24,7 @@ struct cred {
    atomic_t usage;
    uid_t uid; /* fsuid as was */
    gid_t gid; /* fsgid as was */
    + kernel_cap_t cap_effective;
    struct rcu_head exterminate; /* cred destroyer */
    struct group_info *group_info;
    void *security;
    @@ -48,6 +49,7 @@ extern void put_cred(struct cred *);
    extern void change_fsuid(struct cred *, uid_t);
    extern void change_fsgid(struct cred *, gid_t);
    extern void change_groups(struct cred *, struct group_info *);
    +extern void change_cap(struct cred *, kernel_cap_t);
    extern struct cred *dup_cred(const struct cred *);

    /**
    diff --git a/include/linux/init_task.h b/include/linux/init_task.h
    index 5cb7931..56d4be3 100644
    --- a/include/linux/init_task.h
    +++ b/include/linux/init_task.h
    @@ -141,7 +141,7 @@ extern struct nsproxy init_nsproxy;
    .sibling = LIST_HEAD_INIT(tsk.sibling), \
    .group_leader = &tsk, \
    .cred = &init_cred, \
    - .cap_effective = CAP_INIT_EFF_SET, \
    + ._cap_effective = CAP_INIT_EFF_SET, \
    .cap_inheritable = CAP_INIT_INH_SET, \
    .cap_permitted = CAP_FULL_SET, \
    .keep_capabilities = 0, \
    diff --git a/include/linux/sched.h b/include/linux/sched.h
    index ca0d553..52f2b64 100644
    --- a/include/linux/sched.h
    +++ b/include/linux/sched.h
    @@ -1037,7 +1037,7 @@ struct task_struct {
    struct cred *cred;
    uid_t uid,euid,suid;
    gid_t gid,egid,sgid;
    - kernel_cap_t cap_effective, cap_inheritable, cap_permitted;
    + kernel_cap_t _cap_effective, cap_inheritable, cap_permitted;
    unsigned keep_capabilities:1;
    struct user_struct *user;
    #ifdef CONFIG_KEYS
    diff --git a/ipc/msg.c b/ipc/msg.c
    index a03fcb5..a351c89 100644
    --- a/ipc/msg.c
    +++ b/ipc/msg.c
    @@ -393,6 +393,9 @@ asmlinkage long sys_msgctl(int msqid, int cmd, struct msqid_ds __user *buf)
    if (msqid < 0 || cmd < 0)
    return -EINVAL;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    version = ipc_parse_version(&cmd);
    ns = current->nsproxy->ipc_ns;

    diff --git a/ipc/sem.c b/ipc/sem.c
    index b676fef..9691b40 100644
    --- a/ipc/sem.c
    +++ b/ipc/sem.c
    @@ -927,6 +927,9 @@ asmlinkage long sys_semctl (int semid, int semnum, int cmd, union semun arg)
    if (semid < 0)
    return -EINVAL;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    version = ipc_parse_version(&cmd);
    ns = current->nsproxy->ipc_ns;

    diff --git a/ipc/shm.c b/ipc/shm.c
    index a86a3a5..709a4fe 100644
    --- a/ipc/shm.c
    +++ b/ipc/shm.c
    @@ -589,6 +589,9 @@ asmlinkage long sys_shmctl (int shmid, int cmd, struct shmid_ds __user *buf)
    goto out;
    }

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    version = ipc_parse_version(&cmd);
    ns = current->nsproxy->ipc_ns;

    diff --git a/kernel/acct.c b/kernel/acct.c
    index 24f0f8b..01961a5 100644
    --- a/kernel/acct.c
    +++ b/kernel/acct.c
    @@ -253,6 +253,9 @@ asmlinkage long sys_acct(const char __user *name)
    {
    int error;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (!capable(CAP_SYS_PACCT))
    return -EPERM;

    diff --git a/kernel/capability.c b/kernel/capability.c
    index c8d3c77..3ae73f9 100644
    --- a/kernel/capability.c
    +++ b/kernel/capability.c
    @@ -178,6 +178,9 @@ asmlinkage long sys_capset(cap_user_header_t header, const cap_user_data_t data)
    int ret;
    pid_t pid;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (get_user(version, &header->version))
    return -EFAULT;

    diff --git a/kernel/compat.c b/kernel/compat.c
    index 3bae374..04be932 100644
    --- a/kernel/compat.c
    +++ b/kernel/compat.c
    @@ -909,6 +909,9 @@ asmlinkage long compat_sys_adjtimex(struct compat_timex __user *utp)
    struct timex txc;
    int ret;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    memset(&txc, 0, sizeof(struct timex));

    if (!access_ok(VERIFY_READ, utp, sizeof(struct compat_timex)) ||
    diff --git a/kernel/cred.c b/kernel/cred.c
    index 9868eef..f545634 100644
    --- a/kernel/cred.c
    +++ b/kernel/cred.c
    @@ -21,16 +21,19 @@
    */
    struct cred init_cred = {
    .usage = ATOMIC_INIT(2),
    + .cap_effective = CAP_INIT_EFF_SET,
    .group_info = &init_groups,
    };

    /**
    * update_current_cred - Bring the current task's creds up to date
    *
    - * Bring the current task's credentials up to date with respect to the keyrings
    - * they shadow. The process and session level keyrings may get changed by
    - * sibling threads with the same process, but the change can't be applied back
    - * to this thread's cred struct except by this thread itself.
    + * Bring the current task's credential record up to date with respect to the
    + * effective capability mask and keyrings it shadows. The capabilities mask
    + * may get changed by other processes, and process and session level keyrings
    + * may get changed by sibling threads within the same process, but the change
    + * can't be applied back to this thread's cred struct except by this thread
    + * itself.
    */
    int update_current_cred(void)
    {
    @@ -46,16 +49,21 @@ int update_current_cred(void)
    key_ref_to_ptr(cred->process_keyring) == sig->process_keyring &&
    key_ref_to_ptr(cred->thread_keyring) == current->thread_keyring &&
    #endif
    - true)
    + cred->cap_effective == current->_cap_effective)
    return 0;

    - cred = kmalloc(sizeof(struct cred), GFP_KERNEL);
    + cred = kmemdup(current->cred, sizeof(struct cred), GFP_KERNEL);
    if (!cred)
    return -ENOMEM;

    - *cred = *current->cred;
    + if (security_cred_dup(cred) < 0) {
    + kfree(cred);
    + return -ENOMEM;
    + }
    +
    atomic_set(&cred->usage, 1);
    get_group_info(cred->group_info);
    + cred->cap_effective = current->_cap_effective;

    #ifdef CONFIG_KEYS
    rcu_read_lock();
    @@ -188,3 +196,17 @@ void change_groups(struct cred *cred, struct group_info *group_info)
    }

    EXPORT_SYMBOL(change_groups);
    +
    +/**
    + * change_cap - Change the effective capabilities in a new credential record
    + * @cred: The credential record to alter
    + * @cap: The capabilities to set
    + *
    + * Change the effective capabilities in a new credential record.
    + */
    +void change_cap(struct cred *cred, kernel_cap_t cap)
    +{
    + cred->cap_effective = cap;
    +}
    +
    +EXPORT_SYMBOL(change_cap);
    diff --git a/kernel/exit.c b/kernel/exit.c
    index c366ae7..a9916e5 100644
    --- a/kernel/exit.c
    +++ b/kernel/exit.c
    @@ -888,6 +888,8 @@ fastcall NORET_TYPE void do_exit(long code)
    struct task_struct *tsk = current;
    int group_dead;

    + update_current_cred();
    +
    profile_task_exit(tsk);

    WARN_ON(atomic_read(&tsk->fs_excl));
    diff --git a/kernel/fork.c b/kernel/fork.c
    index 677c353..e2948ed 100644
    --- a/kernel/fork.c
    +++ b/kernel/fork.c
    @@ -1422,9 +1422,13 @@ long do_fork(unsigned long clone_flags,
    {
    struct task_struct *p;
    int trace = 0;
    - struct pid *pid = alloc_pid();
    + struct pid *pid;
    long nr;

    + if (update_current_cred())
    + return -ENOMEM;
    +
    + pid = alloc_pid();
    if (!pid)
    return -EAGAIN;
    nr = pid->nr;
    diff --git a/kernel/futex.c b/kernel/futex.c
    index e8935b1..40070fe 100644
    --- a/kernel/futex.c
    +++ b/kernel/futex.c
    @@ -1846,6 +1846,9 @@ sys_get_robust_list(int pid, struct robust_list_head __user * __user *head_ptr,
    struct robust_list_head __user *head;
    unsigned long ret;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (!pid)
    head = current->robust_list;
    else {
    diff --git a/kernel/futex_compat.c b/kernel/futex_compat.c
    index 7e52eb0..a872029 100644
    --- a/kernel/futex_compat.c
    +++ b/kernel/futex_compat.c
    @@ -109,6 +109,9 @@ compat_sys_get_robust_list(int pid, compat_uptr_t __user *head_ptr,
    struct compat_robust_list_head __user *head;
    unsigned long ret;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (!pid)
    head = current->compat_robust_list;
    else {
    diff --git a/kernel/kexec.c b/kernel/kexec.c
    index 25db14b..e1feb2f 100644
    --- a/kernel/kexec.c
    +++ b/kernel/kexec.c
    @@ -921,6 +921,9 @@ asmlinkage long sys_kexec_load(unsigned long entry, unsigned long nr_segments,
    int locked;
    int result;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    /* We only trust the superuser with rebooting the system. */
    if (!capable(CAP_SYS_BOOT))
    return -EPERM;
    diff --git a/kernel/module.c b/kernel/module.c
    index db0ead0..32893a5 100644
    --- a/kernel/module.c
    +++ b/kernel/module.c
    @@ -660,6 +660,9 @@ sys_delete_module(const char __user *name_user, unsigned int flags)
    char name[MODULE_NAME_LEN];
    int ret, forced = 0;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (!capable(CAP_SYS_MODULE))
    return -EPERM;

    @@ -1978,6 +1981,9 @@ sys_init_module(void __user *umod,
    struct module *mod;
    int ret = 0;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    /* Must have permission */
    if (!capable(CAP_SYS_MODULE))
    return -EPERM;
    diff --git a/kernel/ptrace.c b/kernel/ptrace.c
    index 3eca7a5..15fb1ff 100644
    --- a/kernel/ptrace.c
    +++ b/kernel/ptrace.c
    @@ -456,6 +456,9 @@ asmlinkage long sys_ptrace(long request, long pid, long addr, long data)
    struct task_struct *child;
    long ret;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    /*
    * This lock_kernel fixes a subtle race with suid exec
    */
    diff --git a/kernel/sched.c b/kernel/sched.c
    index 6107a0c..602f526 100644
    --- a/kernel/sched.c
    +++ b/kernel/sched.c
    @@ -4063,6 +4063,9 @@ asmlinkage long sys_nice(int increment)
    {
    long nice, retval;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    /*
    * Setpriority might change our priority at the same moment.
    * We don't have to worry. Conceptually one call occurs first
    @@ -4295,6 +4298,9 @@ do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
    struct task_struct *p;
    int retval;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (!param || pid < 0)
    return -EINVAL;
    if (copy_from_user(&lparam, param, sizeof(struct sched_param)))
    @@ -4468,6 +4474,9 @@ asmlinkage long sys_sched_setaffinity(pid_t pid, unsigned int len,
    cpumask_t new_mask;
    int retval;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    retval = get_user_cpu_mask(user_mask_ptr, len, &new_mask);
    if (retval)
    return retval;
    diff --git a/kernel/signal.c b/kernel/signal.c
    index 9fb91a3..0a3358f 100644
    --- a/kernel/signal.c
    +++ b/kernel/signal.c
    @@ -2197,6 +2197,9 @@ sys_kill(int pid, int sig)
    {
    struct siginfo info;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    info.si_signo = sig;
    info.si_errno = 0;
    info.si_code = SI_USER;
    @@ -2212,6 +2215,9 @@ static int do_tkill(int tgid, int pid, int sig)
    struct siginfo info;
    struct task_struct *p;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    error = -ESRCH;
    info.si_signo = sig;
    info.si_errno = 0;
    diff --git a/kernel/sys.c b/kernel/sys.c
    index 9bb591f..ff34679 100644
    --- a/kernel/sys.c
    +++ b/kernel/sys.c
    @@ -670,6 +670,9 @@ asmlinkage long sys_setpriority(int which, int who, int niceval)
    int error = -EINVAL;
    struct pid *pgrp;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (which > PRIO_USER || which < PRIO_PROCESS)
    goto out;

    @@ -896,6 +899,9 @@ asmlinkage long sys_reboot(int magic1, int magic2, unsigned int cmd, void __user
    {
    char buffer[256];

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    /* We only trust the superuser with rebooting the system. */
    if (!capable(CAP_SYS_BOOT))
    return -EPERM;
    @@ -1019,6 +1025,9 @@ asmlinkage long sys_setregid(gid_t rgid, gid_t egid)
    int new_egid = old_egid;
    int retval;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    retval = security_task_setgid(rgid, egid, (gid_t)-1, LSM_SETID_RE);
    if (retval)
    return retval;
    @@ -1072,6 +1081,9 @@ asmlinkage long sys_setgid(gid_t gid)
    int old_egid = current->egid;
    int retval;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    retval = security_task_setgid(gid, (gid_t)-1, (gid_t)-1, LSM_SETID_ID);
    if (retval)
    return retval;
    @@ -1150,6 +1162,9 @@ asmlinkage long sys_setreuid(uid_t ruid, uid_t euid)
    int old_ruid, old_euid, old_suid, new_ruid, new_euid;
    int retval;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    retval = security_task_setuid(ruid, euid, (uid_t)-1, LSM_SETID_RE);
    if (retval)
    return retval;
    @@ -1221,6 +1236,9 @@ asmlinkage long sys_setuid(uid_t uid)
    int old_ruid, old_suid, new_suid;
    int retval;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    retval = security_task_setuid(uid, (uid_t)-1, (uid_t)-1, LSM_SETID_ID);
    if (retval)
    return retval;
    @@ -1271,6 +1289,9 @@ asmlinkage long sys_setresuid(uid_t ruid, uid_t euid, uid_t suid)
    int old_suid = current->suid;
    int retval;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    retval = security_task_setuid(ruid, euid, suid, LSM_SETID_RES);
    if (retval)
    return retval;
    @@ -1333,6 +1354,9 @@ asmlinkage long sys_setresgid(gid_t rgid, gid_t egid, gid_t sgid)
    struct cred *cred;
    int retval;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    retval = security_task_setgid(rgid, egid, sgid, LSM_SETID_RES);
    if (retval)
    return retval;
    @@ -1876,6 +1900,9 @@ asmlinkage long sys_setgroups(int gidsetsize, gid_t __user *grouplist)
    struct group_info *group_info;
    int retval;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (!capable(CAP_SETGID))
    return -EPERM;
    if ((unsigned)gidsetsize > NGROUPS_MAX)
    @@ -1941,6 +1968,9 @@ asmlinkage long sys_sethostname(char __user *name, int len)
    int errno;
    char tmp[__NEW_UTS_LEN];

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (!capable(CAP_SYS_ADMIN))
    return -EPERM;
    if (len < 0 || len > __NEW_UTS_LEN)
    @@ -1986,6 +2016,9 @@ asmlinkage long sys_setdomainname(char __user *name, int len)
    int errno;
    char tmp[__NEW_UTS_LEN];

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (!capable(CAP_SYS_ADMIN))
    return -EPERM;
    if (len < 0 || len > __NEW_UTS_LEN)
    @@ -2045,6 +2078,9 @@ asmlinkage long sys_setrlimit(unsigned int resource, struct rlimit __user *rlim)
    unsigned long it_prof_secs;
    int retval;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (resource >= RLIM_NLIMITS)
    return -EINVAL;
    if (copy_from_user(&new_rlim, rlim, sizeof(*rlim)))
    @@ -2226,6 +2262,9 @@ asmlinkage long sys_prctl(int option, unsigned long arg2, unsigned long arg3,
    {
    long error;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    error = security_task_prctl(option, arg2, arg3, arg4, arg5);
    if (error)
    return error;
    diff --git a/kernel/sysctl.c b/kernel/sysctl.c
    index 53a456e..9447293 100644
    --- a/kernel/sysctl.c
    +++ b/kernel/sysctl.c
    @@ -1347,6 +1347,9 @@ asmlinkage long sys_sysctl(struct __sysctl_args __user *args)
    struct __sysctl_args tmp;
    int error;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (copy_from_user(&tmp, args, sizeof(tmp)))
    return -EFAULT;

    diff --git a/kernel/time.c b/kernel/time.c
    index 2289a8d..975f47d 100644
    --- a/kernel/time.c
    +++ b/kernel/time.c
    @@ -82,6 +82,9 @@ asmlinkage long sys_stime(time_t __user *tptr)
    struct timespec tv;
    int err;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (get_user(tv.tv_sec, tptr))
    return -EFAULT;

    @@ -186,6 +189,9 @@ asmlinkage long sys_settimeofday(struct timeval __user *tv,
    struct timespec new_ts;
    struct timezone new_tz;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (tv) {
    if (copy_from_user(&user_tv, tv, sizeof(*tv)))
    return -EFAULT;
    @@ -205,6 +211,9 @@ asmlinkage long sys_adjtimex(struct timex __user *txc_p)
    struct timex txc; /* Local copy of parameter */
    int ret;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    /* Copy the user data space into the kernel copy
    * structure. But bear in mind that the structures
    * may change
    diff --git a/kernel/uid16.c b/kernel/uid16.c
    index 5a8b95e..5238a96 100644
    --- a/kernel/uid16.c
    +++ b/kernel/uid16.c
    @@ -187,6 +187,9 @@ asmlinkage long sys_setgroups16(int gidsetsize, old_gid_t __user *grouplist)
    struct group_info *group_info;
    int retval;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (!capable(CAP_SETGID))
    return -EPERM;
    if ((unsigned)gidsetsize > NGROUPS_MAX)
    diff --git a/mm/mempolicy.c b/mm/mempolicy.c
    index 3d6ac95..64cfcf2 100644
    --- a/mm/mempolicy.c
    +++ b/mm/mempolicy.c
    @@ -878,6 +878,9 @@ asmlinkage long sys_mbind(unsigned long start, unsigned long len,
    nodemask_t nodes;
    int err;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    err = get_nodes(&nodes, nmask, maxnode);
    if (err)
    return err;
    @@ -914,6 +917,9 @@ asmlinkage long sys_migrate_pages(pid_t pid, unsigned long maxnode,
    nodemask_t task_nodes;
    int err;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    err = get_nodes(&old, old_nodes, maxnode);
    if (err)
    return err;
    diff --git a/mm/migrate.c b/mm/migrate.c
    index e2fdbce..79a1909 100644
    --- a/mm/migrate.c
    +++ b/mm/migrate.c
    @@ -915,6 +915,9 @@ asmlinkage long sys_move_pages(pid_t pid, unsigned long nr_pages,
    struct mm_struct *mm;
    struct page_to_node *pm = NULL;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    /* Check flags */
    if (flags & ~(MPOL_MF_MOVE|MPOL_MF_MOVE_ALL))
    return -EINVAL;
    diff --git a/mm/mlock.c b/mm/mlock.c
    index 7b26560..67985f4 100644
    --- a/mm/mlock.c
    +++ b/mm/mlock.c
    @@ -138,6 +138,8 @@ asmlinkage long sys_mlock(unsigned long start, size_t len)
    unsigned long lock_limit;
    int error = -ENOMEM;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    if (!can_do_mlock())
    return -EPERM;

    @@ -203,6 +205,8 @@ asmlinkage long sys_mlockall(int flags)
    if (!flags || (flags & ~(MCL_CURRENT | MCL_FUTURE)))
    goto out;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    ret = -EPERM;
    if (!can_do_mlock())
    goto out;
    diff --git a/mm/mmap.c b/mm/mmap.c
    index 0d40e66..1b7b0ff 100644
    --- a/mm/mmap.c
    +++ b/mm/mmap.c
    @@ -240,6 +240,9 @@ asmlinkage unsigned long sys_brk(unsigned long brk)
    unsigned long newbrk, oldbrk;
    struct mm_struct *mm = current->mm;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    down_write(&mm->mmap_sem);

    if (brk < mm->end_code)
    diff --git a/mm/mremap.c b/mm/mremap.c
    index 8ea5c24..0d49048 100644
    --- a/mm/mremap.c
    +++ b/mm/mremap.c
    @@ -418,6 +418,9 @@ asmlinkage unsigned long sys_mremap(unsigned long addr,
    {
    unsigned long ret;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    down_write(&current->mm->mmap_sem);
    ret = do_mremap(addr, old_len, new_len, flags, new_addr);
    up_write(&current->mm->mmap_sem);
    diff --git a/mm/oom_kill.c b/mm/oom_kill.c
    index f9b82ad..df5edda 100644
    --- a/mm/oom_kill.c
    +++ b/mm/oom_kill.c
    @@ -53,6 +53,7 @@ unsigned long badness(struct task_struct *p, unsigned long uptime)
    unsigned long points, cpu_time, run_time, s;
    struct mm_struct *mm;
    struct task_struct *child;
    + kernel_cap_t cap_effective;

    task_lock(p);
    mm = p->mm;
    @@ -123,7 +124,11 @@ unsigned long badness(struct task_struct *p, unsigned long uptime)
    * Superuser processes are usually more important, so we make it
    * less likely that we kill those.
    */
    - if (cap_t(p->cap_effective) & CAP_TO_MASK(CAP_SYS_ADMIN) ||
    + rcu_read_lock();
    + cap_effective = task_cred(p)->cap_effective;
    + rcu_read_unlock();
    +
    + if (cap_t(cap_effective) & CAP_TO_MASK(CAP_SYS_ADMIN) ||
    p->uid == 0 || p->euid == 0)
    points /= 4;

    @@ -133,7 +138,7 @@ unsigned long badness(struct task_struct *p, unsigned long uptime)
    * tend to only have this flag set on applications they think
    * of as important.
    */
    - if (cap_t(p->cap_effective) & CAP_TO_MASK(CAP_SYS_RAWIO))
    + if (cap_t(cap_effective) & CAP_TO_MASK(CAP_SYS_RAWIO))
    points /= 4;

    /*
    diff --git a/mm/swapfile.c b/mm/swapfile.c
    index f071648..9539da4 100644
    --- a/mm/swapfile.c
    +++ b/mm/swapfile.c
    @@ -1183,6 +1183,9 @@ asmlinkage long sys_swapoff(const char __user * specialfile)
    int i, type, prev;
    int err;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (!capable(CAP_SYS_ADMIN))
    return -EPERM;

    @@ -1433,6 +1436,9 @@ asmlinkage long sys_swapon(const char __user * specialfile, int swap_flags)
    struct inode *inode = NULL;
    int did_down = 0;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (!capable(CAP_SYS_ADMIN))
    return -EPERM;
    spin_lock(&swap_lock);
    diff --git a/net/compat.c b/net/compat.c
    index d74d821..c20f404 100644
    --- a/net/compat.c
    +++ b/net/compat.c
    @@ -483,6 +483,9 @@ asmlinkage long compat_sys_setsockopt(int fd, int level, int optname,
    int err;
    struct socket *sock;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (level == SOL_IPV6 && optname == IPT_SO_SET_REPLACE)
    return do_netfilter_replace(fd, level, optname,
    optval, optlen);
    @@ -603,6 +606,9 @@ asmlinkage long compat_sys_getsockopt(int fd, int level, int optname,
    int err;
    struct socket *sock;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if ((sock = sockfd_lookup(fd, &err))!=NULL)
    {
    err = security_socket_getsockopt(sock, level,
    diff --git a/net/socket.c b/net/socket.c
    index 50bfeef..034d221 100644
    --- a/net/socket.c
    +++ b/net/socket.c
    @@ -1200,6 +1200,9 @@ asmlinkage long sys_socket(int family, int type, int protocol)
    int retval;
    struct socket *sock;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    retval = sock_create(family, type, protocol, &sock);
    if (retval < 0)
    goto out;
    @@ -1228,6 +1231,9 @@ asmlinkage long sys_socketpair(int family, int type, int protocol,
    int fd1, fd2, err;
    struct file *newfile1, *newfile2;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    /*
    * Obtain the first socket and check if the underlying protocol
    * supports the socketpair call.
    @@ -1323,6 +1329,9 @@ asmlinkage long sys_bind(int fd, struct sockaddr __user *umyaddr, int addrlen)
    char address[MAX_SOCK_ADDR];
    int err, fput_needed;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    sock = sockfd_lookup_light(fd, &err, &fput_needed);
    if (sock) {
    err = move_addr_to_kernel(umyaddr, addrlen, address);
    @@ -1353,6 +1362,9 @@ asmlinkage long sys_listen(int fd, int backlog)
    struct socket *sock;
    int err, fput_needed;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    sock = sockfd_lookup_light(fd, &err, &fput_needed);
    if (sock) {
    if ((unsigned)backlog > sysctl_somaxconn)
    @@ -1387,6 +1399,9 @@ asmlinkage long sys_accept(int fd, struct sockaddr __user *upeer_sockaddr,
    int err, len, newfd, fput_needed;
    char address[MAX_SOCK_ADDR];

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    sock = sockfd_lookup_light(fd, &err, &fput_needed);
    if (!sock)
    goto out;
    @@ -1476,6 +1491,9 @@ asmlinkage long sys_connect(int fd, struct sockaddr __user *uservaddr,
    char address[MAX_SOCK_ADDR];
    int err, fput_needed;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    sock = sockfd_lookup_light(fd, &err, &fput_needed);
    if (!sock)
    goto out;
    @@ -1508,6 +1526,9 @@ asmlinkage long sys_getsockname(int fd, struct sockaddr __user *usockaddr,
    char address[MAX_SOCK_ADDR];
    int len, err, fput_needed;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    sock = sockfd_lookup_light(fd, &err, &fput_needed);
    if (!sock)
    goto out;
    @@ -1539,6 +1560,9 @@ asmlinkage long sys_getpeername(int fd, struct sockaddr __user *usockaddr,
    char address[MAX_SOCK_ADDR];
    int len, err, fput_needed;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    sock = sockfd_lookup_light(fd, &err, &fput_needed);
    if (sock != NULL) {
    err = security_socket_getpeername(sock);
    @@ -1576,6 +1600,9 @@ asmlinkage long sys_sendto(int fd, void __user *buff, size_t len,
    int fput_needed;
    struct file *sock_file;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    sock_file = fget_light(fd, &fput_needed);
    err = -EBADF;
    if (!sock_file)
    @@ -1637,6 +1664,9 @@ asmlinkage long sys_recvfrom(int fd, void __user *ubuf, size_t size,
    struct file *sock_file;
    int fput_needed;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    sock_file = fget_light(fd, &fput_needed);
    err = -EBADF;
    if (!sock_file)
    @@ -1693,6 +1723,9 @@ asmlinkage long sys_setsockopt(int fd, int level, int optname,
    if (optlen < 0)
    return -EINVAL;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    sock = sockfd_lookup_light(fd, &err, &fput_needed);
    if (sock != NULL) {
    err = security_socket_setsockopt(sock, level, optname);
    @@ -1724,6 +1757,9 @@ asmlinkage long sys_getsockopt(int fd, int level, int optname,
    int err, fput_needed;
    struct socket *sock;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    sock = sockfd_lookup_light(fd, &err, &fput_needed);
    if (sock != NULL) {
    err = security_socket_getsockopt(sock, level, optname);
    @@ -1753,6 +1789,9 @@ asmlinkage long sys_shutdown(int fd, int how)
    int err, fput_needed;
    struct socket *sock;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    sock = sockfd_lookup_light(fd, &err, &fput_needed);
    if (sock != NULL) {
    err = security_socket_shutdown(sock, how);
    @@ -1789,6 +1828,9 @@ asmlinkage long sys_sendmsg(int fd, struct msghdr __user *msg, unsigned flags)
    int err, ctl_len, iov_size, total_len;
    int fput_needed;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    err = -EFAULT;
    if (MSG_CMSG_COMPAT & flags) {
    if (get_compat_msghdr(&msg_sys, msg_compat))
    @@ -1896,6 +1938,9 @@ asmlinkage long sys_recvmsg(int fd, struct msghdr __user *msg,
    struct sockaddr __user *uaddr;
    int __user *uaddr_len;

    + if (update_current_cred() < 0)
    + return -ENOMEM;
    +
    if (MSG_CMSG_COMPAT & flags) {
    if (get_compat_msghdr(&msg_sys, msg_compat))
    return -EFAULT;
    diff --git a/security/commoncap.c b/security/commoncap.c
    index 7520361..6a56164 100644
    --- a/security/commoncap.c
    +++ b/security/commoncap.c
    @@ -25,7 +25,7 @@

    int cap_netlink_send(struct sock *sk, struct sk_buff *skb)
    {
    - NETLINK_CB(skb).eff_cap = current->cap_effective;
    + NETLINK_CB(skb).eff_cap = current->cred->cap_effective;
    return 0;
    }

    @@ -43,7 +43,7 @@ EXPORT_SYMBOL(cap_netlink_recv);
    int cap_capable (struct task_struct *tsk, int cap)
    {
    /* Derived from include/linux/sched.h:capable. */
    - if (cap_raised(tsk->cap_effective, cap))
    + if (cap_raised(tsk->cred->cap_effective, cap))
    return 0;
    return -EPERM;
    }
    @@ -68,7 +68,9 @@ int cap_capget (struct task_struct *target, kernel_cap_t *effective,
    kernel_cap_t *inheritable, kernel_cap_t *permitted)
    {
    /* Derived from kernel/capability.c:sys_capget. */
    - *effective = cap_t (target->cap_effective);
    + rcu_read_lock();
    + *effective = cap_t (task_cred(target)->cap_effective);
    + rcu_read_unlock();
    *inheritable = cap_t (target->cap_inheritable);
    *permitted = cap_t (target->cap_permitted);
    return 0;
    @@ -103,7 +105,7 @@ int cap_capset_check (struct task_struct *target, kernel_cap_t *effective,
    void cap_capset_set (struct task_struct *target, kernel_cap_t *effective,
    kernel_cap_t *inheritable, kernel_cap_t *permitted)
    {
    - target->cap_effective = *effective;
    + target->_cap_effective = *effective;
    target->cap_inheritable = *inheritable;
    target->cap_permitted = *permitted;
    }
    @@ -162,15 +164,15 @@ void cap_bprm_apply_creds (struct linux_binprm *bprm, int unsafe)
    }
    }

    - current->suid = current->euid = current->fsuid = bprm->e_uid;
    - current->sgid = current->egid = current->fsgid = bprm->e_gid;
    + current->suid = current->euid = current->cred->uid = bprm->e_uid;
    + current->sgid = current->egid = current->cred->gid = bprm->e_gid;

    /* For init, we want to retain the capabilities set
    * in the init_task struct. Thus we skip the usual
    * capability rules */
    if (!is_init(current)) {
    current->cap_permitted = new_permitted;
    - current->cap_effective =
    + current->_cap_effective =
    cap_intersect (new_permitted, bprm->cap_effective);
    }

    @@ -246,13 +248,13 @@ static inline void cap_emulate_setxuid (int old_ruid, int old_euid,
    (current->uid != 0 && current->euid != 0 && current->suid != 0) &&
    !current->keep_capabilities) {
    cap_clear (current->cap_permitted);
    - cap_clear (current->cap_effective);
    + cap_clear (current->_cap_effective);
    }
    if (old_euid == 0 && current->euid != 0) {
    - cap_clear (current->cap_effective);
    + cap_clear (current->_cap_effective);
    }
    if (old_euid != 0 && current->euid == 0) {
    - current->cap_effective = current->cap_permitted;
    + current->_cap_effective = current->cap_permitted;
    }
    }

    @@ -280,12 +282,12 @@ int cap_task_post_setuid (uid_t old_ruid, uid_t old_euid, uid_t old_suid,
    */

    if (!issecure (SECURE_NO_SETUID_FIXUP)) {
    - if (old_fsuid == 0 && current->fsuid != 0) {
    - cap_t (current->cap_effective) &=
    + if (old_fsuid == 0 && current->cred->uid != 0) {
    + cap_t (current->_cap_effective) &=
    ~CAP_FS_MASK;
    }
    - if (old_fsuid != 0 && current->fsuid == 0) {
    - cap_t (current->cap_effective) |=
    + if (old_fsuid != 0 && current->cred->uid == 0) {
    + cap_t (current->_cap_effective) |=
    (cap_t (current->cap_permitted) &
    CAP_FS_MASK);
    }
    @@ -301,7 +303,7 @@ int cap_task_post_setuid (uid_t old_ruid, uid_t old_euid, uid_t old_suid,

    void cap_task_reparent_to_init (struct task_struct *p)
    {
    - p->cap_effective = CAP_INIT_EFF_SET;
    + p->_cap_effective = CAP_INIT_EFF_SET;
    p->cap_inheritable = CAP_INIT_INH_SET;
    p->cap_permitted = CAP_FULL_SET;
    p->keep_capabilities = 0;
    diff --git a/security/dummy.c b/security/dummy.c
    index 187fc4b..7e52156 100644
    --- a/security/dummy.c
    +++ b/security/dummy.c
    @@ -76,7 +76,13 @@ static int dummy_acct (struct file *file)

    static int dummy_capable (struct task_struct *tsk, int cap)
    {
    - if (cap_raised (tsk->cap_effective, cap))
    + kernel_cap_t cap_effective;
    +
    + rcu_read_lock();
    + cap_effective = task_cred(tsk)->cap_effective;
    + rcu_read_unlock();
    +
    + if (cap_raised (cap_effective, cap))
    return 0;
    return -EPERM;
    }
    @@ -146,7 +152,12 @@ static void dummy_bprm_apply_creds (struct linux_binprm *bprm, int unsafe)
    change_fsuid(bprm->cred, bprm->e_uid);
    change_fsgid(bprm->cred, bprm->e_gid);

    - dummy_capget(current, &current->cap_effective, &current->cap_inheritable, &current->cap_permitted);
    + dummy_capget(current,
    + &current->_cap_effective,
    + &current->cap_inheritable,
    + &current->cap_permitted);
    +
    + change_cap(bprm->cred, current->_cap_effective);
    }

    static void dummy_bprm_post_apply_creds (struct linux_binprm *bprm)
    @@ -499,7 +510,10 @@ static int dummy_task_setuid (uid_t id0, uid_t id1, uid_t id2, int flags)

    static int dummy_task_post_setuid (uid_t id0, uid_t id1, uid_t id2, int flags)
    {
    - dummy_capget(current, &current->cap_effective, &current->cap_inheritable, &current->cap_permitted);
    + dummy_capget(current,
    + &current->_cap_effective,
    + &current->cap_inheritable,
    + &current->cap_permitted);
    return 0;
    }

    @@ -697,7 +711,7 @@ static int dummy_sem_semop (struct sem_array *sma,

    static int dummy_netlink_send (struct sock *sk, struct sk_buff *skb)
    {
    - NETLINK_CB(skb).eff_cap = current->cap_effective;
    + NETLINK_CB(skb).eff_cap = current->cred->cap_effective;
    return 0;
    }
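
Every hunk in this patch follows the same shape: call update_current_cred() on entry to any syscall that may act with the task's credentials, and bail out with -ENOMEM if a private copy cannot be made. As a rough userspace analogue of that copy-on-write pattern (hypothetical struct layout and helper; the in-kernel cred struct and API differ), it looks like:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical userspace analogue of the COW cred pattern. */
struct cred {
    int usage;          /* reference count */
    unsigned uid, gid;  /* fsuid/fsgid live in the cred in this series */
};

struct task {
    struct cred *cred;
};

/* Give the task a private copy of its creds if they are shared. */
static int update_current_cred(struct task *tsk)
{
    struct cred *old = tsk->cred, *new;

    if (old->usage == 1)
        return 0;               /* already unshared, nothing to do */

    new = malloc(sizeof(*new));
    if (!new)
        return -1;              /* the kernel code returns -ENOMEM */
    memcpy(new, old, sizeof(*new));
    new->usage = 1;
    old->usage--;               /* drop our reference on the shared copy */
    tsk->cred = new;
    return 0;
}
```

Each syscall body then reduces to the three-line prologue seen in the hunks above before anything touches the task's cred.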


    -
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  6. [PATCH 15/24] CacheFiles: A cache that backs onto a mounted filesystem

    Add an FS-Cache cache-backend that permits a mounted filesystem to be used as a
    backing store for the cache.


    CacheFiles uses a userspace daemon to do some of the cache management - such as
    reaping stale nodes and culling. This is called cachefilesd and lives in
    /sbin. The source for the daemon can be downloaded from:

    http://people.redhat.com/~dhowells/c.../cachefilesd.c

    And an example configuration from:

    http://people.redhat.com/~dhowells/c...chefilesd.conf

    The filesystem and data integrity of the cache are only as good as those of the
    filesystem providing the backing services. Note that CacheFiles does not
    attempt to journal anything since the journalling interfaces of the various
    filesystems are very specific in nature.

    CacheFiles creates a proc-file - "/proc/fs/cachefiles" - that is used for
    communication with the daemon. Only one thing may have this open at once, and
    whilst it is open, a cache is at least partially in existence. The daemon
    opens this and sends commands down it to control the cache.

    CacheFiles is currently limited to a single cache.

    CacheFiles attempts to maintain at least a certain percentage of free space on
    the filesystem, shrinking the cache by culling the objects it contains to make
    space if necessary - see the "Cache Culling" section. This means it can be
    placed on the same medium as a live set of data, and will expand to make use of
    spare space and automatically contract when the set of data requires more
    space.


    ============
    REQUIREMENTS
    ============

    The use of CacheFiles and its daemon requires the following features to be
    available in the system and in the cache filesystem:

    - dnotify.

    - extended attributes (xattrs).

    - openat() and friends.

    - bmap() support on files in the filesystem (FIBMAP ioctl).

    - The use of bmap() to detect a partial page at the end of the file.

    It is strongly recommended that the "dir_index" option is enabled on Ext3
    filesystems being used as a cache.


    =============
    CONFIGURATION
    =============

The cache is configured by a script in /etc/cachefilesd.conf. These commands
set up the cache ready for use. The following script commands are available:

(*) brun <N>%
(*) bcull <N>%
(*) bstop <N>%

Configure the culling limits. Optional. See the section on culling.
The defaults are 7%, 5% and 1% respectively.

(*) dir <path>

    Specify the directory containing the root of the cache. Mandatory.

(*) tag <name>

    Specify a tag to FS-Cache to use in distinguishing multiple caches.
    Optional. The default is "CacheFiles".

(*) debug <mask>

    Specify a numeric bitmask to control debugging in the kernel module.
    Optional. The default is zero (all off).


    ==================
    STARTING THE CACHE
    ==================

    The cache is started by running the daemon. The daemon opens the cache proc
    file, configures the cache and tells it to begin caching. At that point the
    cache binds to fscache and the cache becomes live.

    The daemon is run as follows:

/sbin/cachefilesd [-d]* [-s] [-n] [-f <configfile>]

    The flags are:

    (*) -d

    Increase the debugging level. This can be specified multiple times and
    is cumulative with itself.

    (*) -s

    Send messages to stderr instead of syslog.

    (*) -n

Don't daemonise and go into the background.

(*) -f <configfile>

    Use an alternative configuration file rather than the default one.


    ===============
    THINGS TO AVOID
    ===============

    Do not mount other things within the cache as this will cause problems. The
    kernel module contains its own very cut-down path walking facility that ignores
    mountpoints, but the daemon can't avoid them.

    Do not create, rename or unlink files and directories in the cache whilst the
    cache is active, as this may cause the state to become uncertain.

    Renaming files in the cache might make objects appear to be other objects (the
    filename is part of the lookup key).

    Do not change or remove the extended attributes attached to cache files by the
    cache as this will cause the cache state management to get confused.

    Do not create files or directories in the cache, lest the cache get confused or
    serve incorrect data.

    Do not chmod files in the cache. The module creates things with minimal
    permissions to prevent random users being able to access them directly.


    =============
    CACHE CULLING
    =============

    The cache may need culling occasionally to make space. This involves
    discarding objects from the cache that have been used less recently than
    anything else. Culling is based on the access time of data objects. Empty
    directories are culled if not in use.

    Cache culling is done on the basis of the percentage of blocks available in the
    underlying filesystem. There are three "limits":

    (*) brun

    If the amount of available space in the cache rises above this limit, then
    culling is turned off.

    (*) bcull

    If the amount of available space in the cache falls below this limit, then
    culling is started.

    (*) bstop

    If the amount of available space in the cache falls below this limit, then
    no further allocation of disk space is permitted until culling has raised
    the amount above this limit again.

    These must be configured thusly:

    0 <= bstop < bcull < brun < 100

    Note that these are percentages of available space, and do _not_ appear as 100
    minus the percentage displayed by the "df" program.

    The userspace daemon scans the cache to build up a table of cullable objects.
    These are then culled in least recently used order. A new scan of the cache is
    started as soon as space is made in the table. Objects will be skipped if
    their atimes have changed or if the kernel module says it is still using them.
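
The ordering constraint on the three limits can be checked mechanically. A minimal sketch (hypothetical helper, not part of cachefilesd itself) that validates a set of culling limits:

```c
#include <stdbool.h>

/* Validate culling limits per the rule: 0 <= bstop < bcull < brun < 100.
 * Returns true if the configuration is acceptable. */
static bool limits_ok(int bstop, int bcull, int brun)
{
    return 0 <= bstop && bstop < bcull && bcull < brun && brun < 100;
}
```

The defaults (brun 7%, bcull 5%, bstop 1%) satisfy the rule; equal or inverted values do not.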


    ===============
    CACHE STRUCTURE
    ===============

    The CacheFiles module will create two directories in the directory it was
    given:

    (*) cache/

    (*) graveyard/

    The active cache objects all reside in the first directory. The CacheFiles
    kernel module moves any retired or culled objects that it can't simply unlink
    to the graveyard from which the daemon will actually delete them.

    The daemon uses dnotify to monitor the graveyard directory, and will delete
    anything that appears therein.
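
The daemon's graveyard watch can be reproduced with a few lines of dnotify. A minimal sketch (error handling trimmed; the real cachefilesd is more careful):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <signal.h>

static volatile sig_atomic_t graveyard_changed;

static void on_dnotify(int sig)
{
    (void)sig;
    graveyard_changed = 1;
}

/* Open a directory and arm a persistent dnotify watch on it.
 * Returns the directory fd, or -1 on error. */
static int watch_dir(const char *path)
{
    int fd = open(path, O_RDONLY | O_DIRECTORY);
    if (fd < 0)
        return -1;

    signal(SIGRTMIN, on_dnotify);

    /* Ask for a signal whenever something is created in, renamed in or
     * deleted from the directory; DN_MULTISHOT keeps the watch armed
     * after the first event fires. */
    if (fcntl(fd, F_SETSIG, SIGRTMIN) < 0 ||
        fcntl(fd, F_NOTIFY,
              DN_CREATE | DN_RENAME | DN_DELETE | DN_MULTISHOT) < 0)
        return -1;
    return fd;
}
```

The daemon then sits in a pause() loop, rescanning and unlinking the graveyard contents whenever graveyard_changed is set.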


    The module represents index objects as directories with the filename "I..." or
    "J...". Note that the "cache/" directory is itself a special index.

    Data objects are represented as files if they have no children, or directories
    if they do. Their filenames all begin "D..." or "E...". If represented as a
    directory, data objects will have a file in the directory called "data" that
    actually holds the data.

    Special objects are similar to data objects, except their filenames begin
    "S..." or "T...".


    If an object has children, then it will be represented as a directory.
    Immediately in the representative directory are a collection of directories
    named for hash values of the child object keys with an '@' prepended. Into
    this directory, if possible, will be placed the representations of the child
    objects:

    INDEX INDEX INDEX DATA FILES
    ========= ========== ================================= ================
    cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400
    cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...DB1ry
    cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...N22ry
    cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...FP1ry


    If the key is so long that it exceeds NAME_MAX with the decorations added on to
    it, then it will be cut into pieces, the first few of which will be used to
    make a nest of directories, and the last one of which will be the objects
    inside the last directory. The names of the intermediate directories will have
    '+' prepended:

    J1223/@23/+xy...z/+kl...m/Epqr


    Note that keys are raw data, and not only may they exceed NAME_MAX in size,
    they may also contain things like '/' and NUL characters, and so they may not
    be suitable for turning directly into a filename.

    To handle this, CacheFiles will use a suitably printable filename directly and
    "base-64" encode ones that aren't directly suitable. The two versions of
    object filenames indicate the encoding:

    OBJECT TYPE PRINTABLE ENCODED
    =============== =============== ===============
    Index "I..." "J..."
    Data "D..." "E..."
    Special "S..." "T..."

    Intermediate directories are always "@" or "+" as appropriate.
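
The naming table above can be captured in a tiny helper. A sketch (hypothetical functions, not the module's actual code) that decides whether a raw key can be used directly and picks the leading type letter accordingly:

```c
#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>

/* Would this raw key survive as a filename as-is?  It must be printable
 * and must not contain '/' (nor NUL, which a C string cannot hold anyway). */
static bool key_is_printable(const unsigned char *key, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (!isprint(key[i]) || key[i] == '/')
            return false;
    return true;
}

/* Leading letter per the table: printable index "I", encoded index "J",
 * printable data "D", encoded data "E", special "S"/"T". */
static char object_prefix(char type, bool printable)
{
    switch (type) {
    case 'I': return printable ? 'I' : 'J';   /* index */
    case 'D': return printable ? 'D' : 'E';   /* data */
    case 'S': return printable ? 'S' : 'T';   /* special */
    default:  return '?';
    }
}
```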


    Each object in the cache has an extended attribute label that holds the object
    type ID (required to distinguish special objects) and the auxiliary data from
    the netfs. The latter is used to detect stale objects in the cache and update
    or retire them.


    Note that CacheFiles will erase from the cache any file it doesn't recognise or
    any file of an incorrect type (such as a FIFO file or a device file).


    This documentation is added by the patch to:

    Documentation/filesystems/caching/cachefiles.txt

    Signed-Off-By: David Howells
    ---

    Documentation/filesystems/caching/cachefiles.txt | 395 ++++++++++
    fs/Kconfig | 1
    fs/Makefile | 1
    fs/cachefiles/Kconfig | 33 +
    fs/cachefiles/Makefile | 18
    fs/cachefiles/cf-bind.c | 289 +++++++
    fs/cachefiles/cf-daemon.c | 720 +++++++++++++++++++
    fs/cachefiles/cf-interface.c | 444 ++++++++++++
    fs/cachefiles/cf-internal.h | 369 ++++++++++
    fs/cachefiles/cf-key.c | 159 ++++
    fs/cachefiles/cf-main.c | 109 +++
    fs/cachefiles/cf-namei.c | 743 +++++++++++++++++++
    fs/cachefiles/cf-proc.c | 166 ++++
    fs/cachefiles/cf-rdwr.c | 849 ++++++++++++++++++++++
    fs/cachefiles/cf-security.c | 94 ++
    fs/cachefiles/cf-xattr.c | 292 ++++++++
    16 files changed, 4682 insertions(+), 0 deletions(-)

    diff --git a/Documentation/filesystems/caching/cachefiles.txt b/Documentation/filesystems/caching/cachefiles.txt
    new file mode 100644
    index 0000000..b502cff
    --- /dev/null
    +++ b/Documentation/filesystems/caching/cachefiles.txt
    @@ -0,0 +1,395 @@
    + ===============================================
    + CacheFiles: CACHE ON ALREADY MOUNTED FILESYSTEM
    + ===============================================
    +
    +Contents:
    +
    + (*) Overview.
    +
    + (*) Requirements.
    +
    + (*) Configuration.
    +
    + (*) Starting the cache.
    +
    + (*) Things to avoid.
    +
    + (*) Cache culling.
    +
    + (*) Cache structure.
    +
    + (*) Security model and SELinux.
    +
    +========
    +OVERVIEW
    +========
    +
    +CacheFiles is a caching backend that's meant to use as a cache a directory on
    +an already mounted filesystem of a local type (such as Ext3).
    +
    +CacheFiles uses a userspace daemon to do some of the cache management - such as
    +reaping stale nodes and culling. This is called cachefilesd and lives in
    +/sbin.
    +
    +The filesystem and data integrity of the cache are only as good as those of the
    +filesystem providing the backing services. Note that CacheFiles does not
    +attempt to journal anything since the journalling interfaces of the various
    +filesystems are very specific in nature.
    +
    +CacheFiles creates a misc character device - "/dev/cachefiles" - that is used
+to communicate with the daemon. Only one thing may have this open at once,
    +and whilst it is open, a cache is at least partially in existence. The daemon
    +opens this and sends commands down it to control the cache.
    +
    +CacheFiles is currently limited to a single cache.
    +
    +CacheFiles attempts to maintain at least a certain percentage of free space on
    +the filesystem, shrinking the cache by culling the objects it contains to make
    +space if necessary - see the "Cache Culling" section. This means it can be
    +placed on the same medium as a live set of data, and will expand to make use of
    +spare space and automatically contract when the set of data requires more
    +space.
    +
    +
    +============
    +REQUIREMENTS
    +============
    +
    +The use of CacheFiles and its daemon requires the following features to be
    +available in the system and in the cache filesystem:
    +
    + - dnotify.
    +
    + - extended attributes (xattrs).
    +
    + - openat() and friends.
    +
    + - bmap() support on files in the filesystem (FIBMAP ioctl).
    +
    + - The use of bmap() to detect a partial page at the end of the file.
    +
    +It is strongly recommended that the "dir_index" option is enabled on Ext3
    +filesystems being used as a cache.
    +
    +
    +=============
    +CONFIGURATION
    +=============
    +
+The cache is configured by a script in /etc/cachefilesd.conf. These commands
+set up the cache ready for use. The following script commands are available:
+
+ (*) brun <N>%
+ (*) bcull <N>%
+ (*) bstop <N>%
+ (*) frun <N>%
+ (*) fcull <N>%
+ (*) fstop <N>%
+
+ Configure the culling limits. Optional. See the section on culling.
+ The defaults are 7% (run), 5% (cull) and 1% (stop) respectively.
    +
    + The commands beginning with a 'b' are file space (block) limits, those
    + beginning with an 'f' are file count limits.
    +
+ (*) dir <path>
    +
    + Specify the directory containing the root of the cache. Mandatory.
    +
+ (*) tag <name>
    +
    + Specify a tag to FS-Cache to use in distinguishing multiple caches.
    + Optional. The default is "CacheFiles".
    +
+ (*) debug <mask>
    +
    + Specify a numeric bitmask to control debugging in the kernel module.
    + Optional. The default is zero (all off). The following values can be
    + OR'd into the mask to collect various information:
    +
    + 1 Turn on trace of function entry (_enter() macros)
    + 2 Turn on trace of function exit (_leave() macros)
    + 4 Turn on trace of internal debug points (_debug())
    +
+ This mask can also be set through sysfs, e.g.:
+
+ echo 5 >/sys/module/cachefiles/parameters/debug
    +
    +
    +==================
    +STARTING THE CACHE
    +==================
    +
    +The cache is started by running the daemon. The daemon opens the cache device,
    +configures the cache and tells it to begin caching. At that point the cache
    +binds to fscache and the cache becomes live.
    +
    +The daemon is run as follows:
    +
+ /sbin/cachefilesd [-d]* [-s] [-n] [-f <configfile>]
    +
    +The flags are:
    +
    + (*) -d
    +
    + Increase the debugging level. This can be specified multiple times and
    + is cumulative with itself.
    +
    + (*) -s
    +
    + Send messages to stderr instead of syslog.
    +
    + (*) -n
    +
+ Don't daemonise and go into the background.
    +
+ (*) -f <configfile>
    +
    + Use an alternative configuration file rather than the default one.
    +
    +
    +===============
    +THINGS TO AVOID
    +===============
    +
    +Do not mount other things within the cache as this will cause problems. The
    +kernel module contains its own very cut-down path walking facility that ignores
    +mountpoints, but the daemon can't avoid them.
    +
    +Do not create, rename or unlink files and directories in the cache whilst the
    +cache is active, as this may cause the state to become uncertain.
    +
    +Renaming files in the cache might make objects appear to be other objects (the
    +filename is part of the lookup key).
    +
    +Do not change or remove the extended attributes attached to cache files by the
    +cache as this will cause the cache state management to get confused.
    +
    +Do not create files or directories in the cache, lest the cache get confused or
    +serve incorrect data.
    +
    +Do not chmod files in the cache. The module creates things with minimal
    +permissions to prevent random users being able to access them directly.
    +
    +
    +=============
    +CACHE CULLING
    +=============
    +
    +The cache may need culling occasionally to make space. This involves
    +discarding objects from the cache that have been used less recently than
    +anything else. Culling is based on the access time of data objects. Empty
    +directories are culled if not in use.
    +
    +Cache culling is done on the basis of the percentage of blocks and the
    +percentage of files available in the underlying filesystem. There are six
    +"limits":
    +
    + (*) brun
    + (*) frun
    +
    + If the amount of free space and the number of available files in the cache
    + rises above both these limits, then culling is turned off.
    +
    + (*) bcull
    + (*) fcull
    +
    + If the amount of available space or the number of available files in the
    + cache falls below either of these limits, then culling is started.
    +
    + (*) bstop
    + (*) fstop
    +
    + If the amount of available space or the number of available files in the
    + cache falls below either of these limits, then no further allocation of
    + disk space or files is permitted until culling has raised things above
    + these limits again.
    +
    +These must be configured thusly:
    +
    + 0 <= bstop < bcull < brun < 100
    + 0 <= fstop < fcull < frun < 100
    +
    +Note that these are percentages of available space and available files, and do
    +_not_ appear as 100 minus the percentage displayed by the "df" program.
    +
    +The userspace daemon scans the cache to build up a table of cullable objects.
    +These are then culled in least recently used order. A new scan of the cache is
    +started as soon as space is made in the table. Objects will be skipped if
    +their atimes have changed or if the kernel module says it is still using them.
    +
    +
    +===============
    +CACHE STRUCTURE
    +===============
    +
    +The CacheFiles module will create two directories in the directory it was
    +given:
    +
    + (*) cache/
    +
    + (*) graveyard/
    +
    +The active cache objects all reside in the first directory. The CacheFiles
    +kernel module moves any retired or culled objects that it can't simply unlink
    +to the graveyard from which the daemon will actually delete them.
    +
    +The daemon uses dnotify to monitor the graveyard directory, and will delete
    +anything that appears therein.
    +
    +
    +The module represents index objects as directories with the filename "I..." or
    +"J...". Note that the "cache/" directory is itself a special index.
    +
    +Data objects are represented as files if they have no children, or directories
    +if they do. Their filenames all begin "D..." or "E...". If represented as a
    +directory, data objects will have a file in the directory called "data" that
    +actually holds the data.
    +
    +Special objects are similar to data objects, except their filenames begin
    +"S..." or "T...".
    +
    +
    +If an object has children, then it will be represented as a directory.
    +Immediately in the representative directory are a collection of directories
    +named for hash values of the child object keys with an '@' prepended. Into
    +this directory, if possible, will be placed the representations of the child
    +objects:
    +
    + INDEX INDEX INDEX DATA FILES
    + ========= ========== ================================= ================
    + cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400
    + cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...DB1ry
    + cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...N22ry
    + cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...FP1ry
    +
    +
    +If the key is so long that it exceeds NAME_MAX with the decorations added on to
    +it, then it will be cut into pieces, the first few of which will be used to
    +make a nest of directories, and the last one of which will be the objects
    +inside the last directory. The names of the intermediate directories will have
    +'+' prepended:
    +
    + J1223/@23/+xy...z/+kl...m/Epqr
    +
    +
    +Note that keys are raw data, and not only may they exceed NAME_MAX in size,
    +they may also contain things like '/' and NUL characters, and so they may not
    +be suitable for turning directly into a filename.
    +
    +To handle this, CacheFiles will use a suitably printable filename directly and
    +"base-64" encode ones that aren't directly suitable. The two versions of
    +object filenames indicate the encoding:
    +
    + OBJECT TYPE PRINTABLE ENCODED
    + =============== =============== ===============
    + Index "I..." "J..."
    + Data "D..." "E..."
    + Special "S..." "T..."
    +
    +Intermediate directories are always "@" or "+" as appropriate.
    +
    +
    +Each object in the cache has an extended attribute label that holds the object
    +type ID (required to distinguish special objects) and the auxiliary data from
    +the netfs. The latter is used to detect stale objects in the cache and update
    +or retire them.
    +
    +
    +Note that CacheFiles will erase from the cache any file it doesn't recognise or
    +any file of an incorrect type (such as a FIFO file or a device file).
    +
    +
    +==========================
    +SECURITY MODEL AND SELINUX
    +==========================
    +
    +CacheFiles is implemented to deal properly with the LSM security features of
    +the Linux kernel and the SELinux facility.
    +
    +One of the problems that CacheFiles faces is that it is generally acting on
    +behalf of a process, and running in that process's context, and that includes a
    +security context that is not appropriate for accessing the cache - either
    +because the files in the cache are inaccessible to that process, or because if
    +the process creates a file in the cache, that file may be inaccessible to other
    +processes.
    +
    +The way CacheFiles works is to temporarily change the security context (fsuid,
    +fsgid and actor security label) that the process acts as - without changing the
+security context of the process when it is the target of an operation performed
    +some other process (so signalling and suchlike still work correctly).
    +
    +
    +When the CacheFiles module is asked to bind to its cache, it:
    +
    + (1) Finds the security label attached to the root cache directory and uses
    + that as the security label with which it will create files. By default,
    + this is:
    +
    + cachefiles_var_t
    +
    + (2) Finds the security label of the process which issued the bind request
    + (presumed to be the cachefilesd daemon), which by default will be:
    +
    + cachefilesd_t
    +
    + and asks LSM to supply a security ID as which it should act given the
    + daemon's label. By default, this will be:
    +
    + cachefiles_kernel_t
    +
    + SELinux transitions the daemon's security ID to the module's security ID
    + based on a rule of this form in the policy.
    +
+ type_transition <daemon's-security-ID> kernel_t : process <module's-security-ID>;
    +
    + For instance:
    +
    + type_transition cachefilesd_t kernel_t : process cachefiles_kernel_t;
    +
    +
    +The module's security ID gives it permission to create, move and remove files
    +and directories in the cache, to find and access directories and files in the
    +cache, to set and access extended attributes on cache objects, and to read and
    +write files in the cache.
    +
    +The daemon's security ID gives it only a very restricted set of permissions: it
    +may scan directories, stat files and erase files and directories. It may
    +not read or write files in the cache, and so it is precluded from accessing the
    +data cached therein; nor is it permitted to create new files in the cache.
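    +A minimal pair of rule sets capturing this split might look like the
    +following sketch (illustrative only, using the default labels above; the
    +authoritative rules are those shipped in cachefilesd.te):
    +
    + allow cachefiles_kernel_t cachefiles_var_t : dir { create read write search add_name remove_name rmdir };
    + allow cachefiles_kernel_t cachefiles_var_t : file { create read write getattr setattr unlink };
    +
    + allow cachefilesd_t cachefiles_var_t : dir { read search write remove_name rmdir };
    + allow cachefilesd_t cachefiles_var_t : file { getattr unlink };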
    +
    +
    +There are policy source files available in:
    +
    + http://people.redhat.com/~dhowells/f...sd-0.8.tar.bz2
    +
    +and later versions. In that tarball, see the files:
    +
    + cachefilesd.te
    + cachefilesd.fc
    + cachefilesd.if
    +
    +They are built and installed directly by the RPM.
    +
    +If a non-RPM based system is being used, then copy the above files to their own
    +directory and run:
    +
    + make -f /usr/share/selinux/devel/Makefile
    + semodule -i cachefilesd.pp
    +
    +You will need checkpolicy and selinux-policy-devel installed prior to the
    +build.
    +
    +
    +By default, the cache is located in /var/fscache, but if it is desirable that
    +it should be elsewhere, then either the above policy files must be altered, or
    +an auxiliary policy must be installed to label the alternate location of the
    +cache.
    +
    +For instructions on how to add an auxiliary policy to enable the cache to be
    +located elsewhere when SELinux is in enforcing mode, please see:
    +
    + /usr/share/doc/cachefilesd-*/move-cache.txt
    +
    +when the cachefilesd RPM is installed; alternatively, the document can be found
    +in the sources.
    diff --git a/fs/Kconfig b/fs/Kconfig
    index e367141..8ae7eda 100644
    --- a/fs/Kconfig
    +++ b/fs/Kconfig
    @@ -631,6 +631,7 @@ config GENERIC_ACL
    menu "Caches"

    source "fs/fscache/Kconfig"
    +source "fs/cachefiles/Kconfig"

    endmenu

    diff --git a/fs/Makefile b/fs/Makefile
    index 4eecc9a..becdb4f 100644
    --- a/fs/Makefile
    +++ b/fs/Makefile
    @@ -116,6 +116,7 @@ obj-$(CONFIG_AFS_FS) += afs/
    obj-$(CONFIG_BEFS_FS) += befs/
    obj-$(CONFIG_HOSTFS) += hostfs/
    obj-$(CONFIG_HPPFS) += hppfs/
    +obj-$(CONFIG_CACHEFILES) += cachefiles/
    obj-$(CONFIG_DEBUG_FS) += debugfs/
    obj-$(CONFIG_OCFS2_FS) += ocfs2/
    obj-$(CONFIG_GFS2_FS) += gfs2/
    diff --git a/fs/cachefiles/Kconfig b/fs/cachefiles/Kconfig
    new file mode 100644
    index 0000000..ddbdd85
    --- /dev/null
    +++ b/fs/cachefiles/Kconfig
    @@ -0,0 +1,33 @@
    +
    +config CACHEFILES
    + tristate "Filesystem caching on files"
    + depends on FSCACHE
    + help
    + This permits use of a mounted filesystem as a cache for other
    + filesystems - primarily networking filesystems - thus allowing fast
    + local disk to enhance the speed of slower devices.
    +
    + See Documentation/filesystems/caching/cachefiles.txt for more
    + information.
    +
    +config CACHEFILES_DEBUG
    + bool "Debug CacheFiles"
    + depends on CACHEFILES
    + help
    + This permits debugging to be dynamically enabled in the filesystem
    + caching on files module. If this is set, the debugging output may be
    + enabled by setting bits in /sys/module/cachefiles/parameters/debug or
    + by including a debugging specifier in /etc/cachefilesd.conf.
    +
    +config CACHEFILES_HISTOGRAM
    + bool "Gather latency information on CacheFiles"
    + depends on CACHEFILES && FSCACHE_PROC
    + help
    + This option causes latency information to be gathered on CacheFiles
    + operation and exported through the file:
    +
    + /proc/fs/fscache/cachefiles/histogram
    +
    + See Documentation/filesystems/caching/cachefiles.txt for more
    + information.
    diff --git a/fs/cachefiles/Makefile b/fs/cachefiles/Makefile
    new file mode 100644
    index 0000000..8a9c1bd
    --- /dev/null
    +++ b/fs/cachefiles/Makefile
    @@ -0,0 +1,18 @@
    +#
    +# Makefile for caching in a mounted filesystem
    +#
    +
    +cachefiles-y := \
    + cf-bind.o \
    + cf-daemon.o \
    + cf-interface.o \
    + cf-key.o \
    + cf-main.o \
    + cf-namei.o \
    + cf-rdwr.o \
    + cf-security.o \
    + cf-xattr.o
    +
    +cachefiles-$(CONFIG_CACHEFILES_HISTOGRAM) += cf-proc.o
    +
    +obj-$(CONFIG_CACHEFILES) := cachefiles.o
    diff --git a/fs/cachefiles/cf-bind.c b/fs/cachefiles/cf-bind.c
    new file mode 100644
    index 0000000..b524e08
    --- /dev/null
    +++ b/fs/cachefiles/cf-bind.c
    @@ -0,0 +1,289 @@
    +/* Bind and unbind a cache from the filesystem backing it
    + *
    + * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
    + * Written by David Howells (dhowells@redhat.com)
    + *
    + * This program is free software; you can redistribute it and/or
    + * modify it under the terms of the GNU General Public Licence
    + * as published by the Free Software Foundation; either version
    + * 2 of the Licence, or (at your option) any later version.
    + */
    +
    +#include <linux/module.h>
    +#include <linux/init.h>
    +#include <linux/sched.h>
    +#include <linux/completion.h>
    +#include <linux/slab.h>
    +#include <linux/fs.h>
    +#include <linux/file.h>
    +#include <linux/namei.h>
    +#include <linux/mount.h>
    +#include <linux/statfs.h>
    +#include <linux/ctype.h>
    +#include "cf-internal.h"
    +
    +static int cachefiles_daemon_add_cache(struct cachefiles_cache *cache);
    +
    +/*
    + * bind a directory as a cache
    + */
    +int cachefiles_daemon_bind(struct cachefiles_cache *cache, char *args)
    +{
    + _enter("{%u,%u,%u,%u,%u,%u},%s",
    + cache->frun_percent,
    + cache->fcull_percent,
    + cache->fstop_percent,
    + cache->brun_percent,
    + cache->bcull_percent,
    + cache->bstop_percent,
    + args);
    +
    + /* start by checking things over */
    + ASSERT(cache->fstop_percent >= 0 &&
    + cache->fstop_percent < cache->fcull_percent &&
    + cache->fcull_percent < cache->frun_percent &&
    + cache->frun_percent < 100);
    +
    + ASSERT(cache->bstop_percent >= 0 &&
    + cache->bstop_percent < cache->bcull_percent &&
    + cache->bcull_percent < cache->brun_percent &&
    + cache->brun_percent < 100);
    +
    + if (*args) {
    + kerror("'bind' command doesn't take an argument");
    + return -EINVAL;
    + }
    +
    + if (!cache->rootdirname) {
    + kerror("No cache directory specified");
    + return -EINVAL;
    + }
    +
    + /* don't permit already bound caches to be re-bound */
    + if (test_bit(CACHEFILES_READY, &cache->flags)) {
    + kerror("Cache already bound");
    + return -EBUSY;
    + }
    +
    + /* make sure we have copies of the tag and dirname strings */
    + if (!cache->tag) {
    + /* the tag string is released by the fops->release()
    + * function, so we don't release it on error here */
    + cache->tag = kstrdup("CacheFiles", GFP_KERNEL);
    + if (!cache->tag)
    + return -ENOMEM;
    + }
    +
    + /* add the cache */
    + return cachefiles_daemon_add_cache(cache);
    +}
    +
    +/*
    + * add a cache
    + */
    +static int cachefiles_daemon_add_cache(struct cachefiles_cache *cache)
    +{
    + struct cachefiles_object *fsdef;
    + struct nameidata nd;
    + struct kstatfs stats;
    + struct dentry *graveyard, *cachedir, *root;
    + struct cred *saved_cred;
    + int ret;
    +
    + _enter("");
    +
    + /* we want to work under the module's security ID */
    + ret = cachefiles_get_security_ID(cache);
    + if (ret < 0)
    + return ret;
    +
    + cachefiles_begin_secure(cache, &saved_cred);
    +
    + /* allocate the root index object */
    + ret = -ENOMEM;
    +
    + fsdef = kmem_cache_alloc(cachefiles_object_jar, GFP_KERNEL);
    + if (!fsdef)
    + goto error_root_object;
    +
    + ASSERTCMP(fsdef->backer, ==, NULL);
    +
    + atomic_set(&fsdef->usage, 1);
    + fsdef->type = FSCACHE_COOKIE_TYPE_INDEX;
    +
    + _debug("- fsdef %p", fsdef);
    +
    + /* look up the directory at the root of the cache */
    + memset(&nd, 0, sizeof(nd));
    +
    + ret = path_lookup(cache->rootdirname, LOOKUP_DIRECTORY, &nd);
    + if (ret < 0)
    + goto error_open_root;
    +
    + cache->mnt = mntget(nd.mnt);
    + root = dget(nd.dentry);
    + path_release(&nd);
    +
    + /* check parameters */
    + ret = -EOPNOTSUPP;
    + if (!root->d_inode ||
    + !root->d_inode->i_op ||
    + !root->d_inode->i_op->lookup ||
    + !root->d_inode->i_op->mkdir ||
    + !root->d_inode->i_op->setxattr ||
    + !root->d_inode->i_op->getxattr ||
    + !root->d_sb ||
    + !root->d_sb->s_op ||
    + !root->d_sb->s_op->statfs ||
    + !root->d_sb->s_op->sync_fs)
    + goto error_unsupported;
    +
    + ret = -EROFS;
    + if (root->d_sb->s_flags & MS_RDONLY)
    + goto error_unsupported;
    +
    + /* determine the security of the on-disk cache as this governs
    + * security ID of files we create */
    + ret = cachefiles_determine_cache_secid(cache, root);
    + if (ret < 0)
    + goto error_unsupported;
    +
    + cachefiles_end_secure(cache, saved_cred);
    + cachefiles_begin_secure(cache, &saved_cred);
    +
    + /* get the cache size and blocksize */
    + ret = vfs_statfs(root, &stats);
    + if (ret < 0)
    + goto error_unsupported;
    +
    + ret = -ERANGE;
    + if (stats.f_bsize <= 0)
    + goto error_unsupported;
    +
    + ret = -EOPNOTSUPP;
    + if (stats.f_bsize > PAGE_SIZE)
    + goto error_unsupported;
    +
    + cache->bsize = stats.f_bsize;
    + cache->bshift = 0;
    + if (stats.f_bsize < PAGE_SIZE)
    + cache->bshift = PAGE_SHIFT - ilog2(stats.f_bsize);
    +
    + _debug("blksize %u (shift %u)",
    + cache->bsize, cache->bshift);
    +
    + _debug("size %llu, avail %llu",
    + (unsigned long long) stats.f_blocks,
    + (unsigned long long) stats.f_bavail);
    +
    + /* set up caching limits */
    + do_div(stats.f_files, 100);
    + cache->fstop = stats.f_files * cache->fstop_percent;
    + cache->fcull = stats.f_files * cache->fcull_percent;
    + cache->frun = stats.f_files * cache->frun_percent;
    +
    + _debug("limits {%llu,%llu,%llu} files",
    + (unsigned long long) cache->frun,
    + (unsigned long long) cache->fcull,
    + (unsigned long long) cache->fstop);
    +
    + stats.f_blocks >>= cache->bshift;
    + do_div(stats.f_blocks, 100);
    + cache->bstop = stats.f_blocks * cache->bstop_percent;
    + cache->bcull = stats.f_blocks * cache->bcull_percent;
    + cache->brun = stats.f_blocks * cache->brun_percent;
    +
    + _debug("limits {%llu,%llu,%llu} blocks",
    + (unsigned long long) cache->brun,
    + (unsigned long long) cache->bcull,
    + (unsigned long long) cache->bstop);
    +
    + /* get the cache directory and check its type */
    + cachedir = cachefiles_get_directory(cache, root, "cache");
    + if (IS_ERR(cachedir)) {
    + ret = PTR_ERR(cachedir);
    + goto error_unsupported;
    + }
    +
    + fsdef->dentry = cachedir;
    +
    + ret = cachefiles_check_object_type(fsdef);
    + if (ret < 0)
    + goto error_unsupported;
    +
    + /* get the graveyard directory */
    + graveyard = cachefiles_get_directory(cache, root, "graveyard");
    + if (IS_ERR(graveyard)) {
    + ret = PTR_ERR(graveyard);
    + goto error_unsupported;
    + }
    +
    + cache->graveyard = graveyard;
    +
    + /* publish the cache */
    + fscache_init_cache(&cache->cache,
    + &cachefiles_cache_ops,
    + "%02x:%02x",
    + MAJOR(fsdef->dentry->d_sb->s_dev),
    + MINOR(fsdef->dentry->d_sb->s_dev));
    +
    + ret = fscache_add_cache(&cache->cache, &fsdef->fscache, cache->tag);
    + if (ret < 0)
    + goto error_add_cache;
    +
    + /* done */
    + set_bit(CACHEFILES_READY, &cache->flags);
    + dput(root);
    +
    + printk(KERN_INFO "CacheFiles:"
    + " File cache on %s registered\n",
    + cache->cache.identifier);
    +
    + /* check how much space the cache has */
    + cachefiles_has_space(cache, 0, 0);
    + cachefiles_end_secure(cache, saved_cred);
    + return 0;
    +
    +error_add_cache:
    + dput(cache->graveyard);
    + cache->graveyard = NULL;
    +error_unsupported:
    + mntput(cache->mnt);
    + cache->mnt = NULL;
    + dput(fsdef->dentry);
    + fsdef->dentry = NULL;
    + dput(root);
    +error_open_root:
    + kmem_cache_free(cachefiles_object_jar, fsdef);
    +error_root_object:
    + cachefiles_end_secure(cache, saved_cred);
    + kerror("Failed to register: %d", ret);
    + return ret;
    +}
    +
    +/*
    + * unbind a cache on fd release
    + */
    +void cachefiles_daemon_unbind(struct cachefiles_cache *cache)
    +{
    + _enter("");
    +
    + if (test_bit(CACHEFILES_READY, &cache->flags)) {
    + printk(KERN_INFO "CacheFiles:"
    + " File cache on %s unregistering\n",
    + cache->cache.identifier);
    +
    + fscache_withdraw_cache(&cache->cache);
    + }
    +
    + if (cache->cache.fsdef)
    + cache->cache.ops->put_object(cache->cache.fsdef);
    +
    + dput(cache->graveyard);
    + mntput(cache->mnt);
    +
    + kfree(cache->rootdirname);
    + kfree(cache->tag);
    +
    + _leave("");
    +}
    diff --git a/fs/cachefiles/cf-daemon.c b/fs/cachefiles/cf-daemon.c
    new file mode 100644
    index 0000000..fa91a20
    --- /dev/null
    +++ b/fs/cachefiles/cf-daemon.c
    @@ -0,0 +1,720 @@
    +/* Daemon interface
    + *
    + * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
    + * Written by David Howells (dhowells@redhat.com)
    + *
    + * This program is free software; you can redistribute it and/or
    + * modify it under the terms of the GNU General Public Licence
    + * as published by the Free Software Foundation; either version
    + * 2 of the Licence, or (at your option) any later version.
    + */
    +
    +#include <linux/module.h>
    +#include <linux/init.h>
    +#include <linux/sched.h>
    +#include <linux/completion.h>
    +#include <linux/slab.h>
    +#include <linux/fs.h>
    +#include <linux/file.h>
    +#include <linux/namei.h>
    +#include <linux/poll.h>
    +#include <linux/mount.h>
    +#include <linux/statfs.h>
    +#include <linux/ctype.h>
    +#include "cf-internal.h"
    +
    +static int cachefiles_daemon_open(struct inode *, struct file *);
    +static int cachefiles_daemon_release(struct inode *, struct file *);
    +static ssize_t cachefiles_daemon_read(struct file *, char __user *, size_t, loff_t *);
    +static ssize_t cachefiles_daemon_write(struct file *, const char __user *, size_t, loff_t *);
    +static unsigned int cachefiles_daemon_poll(struct file *, struct poll_table_struct *);
    +static int cachefiles_daemon_frun(struct cachefiles_cache *cache, char *args);
    +static int cachefiles_daemon_fcull(struct cachefiles_cache *cache, char *args);
    +static int cachefiles_daemon_fstop(struct cachefiles_cache *cache, char *args);
    +static int cachefiles_daemon_brun(struct cachefiles_cache *cache, char *args);
    +static int cachefiles_daemon_bcull(struct cachefiles_cache *cache, char *args);
    +static int cachefiles_daemon_bstop(struct cachefiles_cache *cache, char *args);
    +static int cachefiles_daemon_cull(struct cachefiles_cache *cache, char *args);
    +static int cachefiles_daemon_debug(struct cachefiles_cache *cache, char *args);
    +static int cachefiles_daemon_dir(struct cachefiles_cache *cache, char *args);
    +static int cachefiles_daemon_tag(struct cachefiles_cache *cache, char *args);
    +static int cachefiles_daemon_inuse(struct cachefiles_cache *cache, char *args);
    +
    +static unsigned long cachefiles_open;
    +
    +const struct file_operations cachefiles_daemon_fops = {
    + .owner = THIS_MODULE,
    + .open = cachefiles_daemon_open,
    + .release = cachefiles_daemon_release,
    + .read = cachefiles_daemon_read,
    + .write = cachefiles_daemon_write,
    + .poll = cachefiles_daemon_poll,
    +};
    +
    +struct cachefiles_daemon_cmd {
    + char name[8];
    + int (*handler)(struct cachefiles_cache *cache, char *args);
    +};
    +
    +static const struct cachefiles_daemon_cmd cachefiles_daemon_cmds[] = {
    + { "bind", cachefiles_daemon_bind },
    + { "brun", cachefiles_daemon_brun },
    + { "bcull", cachefiles_daemon_bcull },
    + { "bstop", cachefiles_daemon_bstop },
    + { "cull", cachefiles_daemon_cull },
    + { "debug", cachefiles_daemon_debug },
    + { "dir", cachefiles_daemon_dir },
    + { "frun", cachefiles_daemon_frun },
    + { "fcull", cachefiles_daemon_fcull },
    + { "fstop", cachefiles_daemon_fstop },
    + { "inuse", cachefiles_daemon_inuse },
    + { "tag", cachefiles_daemon_tag },
    + { "", NULL }
    +};
    +
    +
    +/*
    + * do various checks
    + */
    +static int cachefiles_daemon_open(struct inode *inode, struct file *file)
    +{
    + struct cachefiles_cache *cache;
    +
    + _enter("");
    +
    + /* only the superuser may do this */
    + if (!capable(CAP_SYS_ADMIN))
    + return -EPERM;
    +
    + /* the cachefiles device may only be open once at a time */
    + if (xchg(&cachefiles_open, 1) == 1)
    + return -EBUSY;
    +
    + /* allocate a cache record */
    + cache = kzalloc(sizeof(struct cachefiles_cache), GFP_KERNEL);
    + if (!cache) {
    + cachefiles_open = 0;
    + return -ENOMEM;
    + }
    +
    + mutex_init(&cache->daemon_mutex);
    + cache->active_nodes = RB_ROOT;
    + rwlock_init(&cache->active_lock);
    + init_waitqueue_head(&cache->daemon_pollwq);
    +
    + /* set default caching limits
    + * - limit at 1% free space and/or free files
    + * - cull below 5% free space and/or free files
    + * - cease culling above 7% free space and/or free files
    + */
    + cache->frun_percent = 7;
    + cache->fcull_percent = 5;
    + cache->fstop_percent = 1;
    + cache->brun_percent = 7;
    + cache->bcull_percent = 5;
    + cache->bstop_percent = 1;
    +
    + file->private_data = cache;
    + cache->cachefilesd = file;
    + return 0;
    +}
    +
    +/*
    + * release a cache
    + */
    +static int cachefiles_daemon_release(struct inode *inode, struct file *file)
    +{
    + struct cachefiles_cache *cache = file->private_data;
    +
    + _enter("");
    +
    + ASSERT(cache);
    +
    + set_bit(CACHEFILES_DEAD, &cache->flags);
    +
    + cachefiles_daemon_unbind(cache);
    +
    + ASSERT(!cache->active_nodes.rb_node);
    +
    + /* clean up the control file interface */
    + cache->cachefilesd = NULL;
    + file->private_data = NULL;
    + cachefiles_open = 0;
    +
    + kfree(cache);
    +
    + _leave("");
    + return 0;
    +}
    +
    +/*
    + * read the cache state
    + */
    +static ssize_t cachefiles_daemon_read(struct file *file, char __user *_buffer,
    + size_t buflen, loff_t *pos)
    +{
    + struct cachefiles_cache *cache = file->private_data;
    + char buffer[256];
    + int n;
    +
    + _enter(",,%zu,", buflen);
    +
    + if (!test_bit(CACHEFILES_READY, &cache->flags))
    + return 0;
    +
    + /* check how much space the cache has */
    + cachefiles_has_space(cache, 0, 0);
    +
    + /* summarise */
    + clear_bit(CACHEFILES_STATE_CHANGED, &cache->flags);
    +
    + n = snprintf(buffer, sizeof(buffer),
    + "cull=%c"
    + " frun=%llx"
    + " fcull=%llx"
    + " fstop=%llx"
    + " brun=%llx"
    + " bcull=%llx"
    + " bstop=%llx",
    + test_bit(CACHEFILES_CULLING, &cache->flags) ? '1' : '0',
    + (unsigned long long) cache->frun,
    + (unsigned long long) cache->fcull,
    + (unsigned long long) cache->fstop,
    + (unsigned long long) cache->brun,
    + (unsigned long long) cache->bcull,
    + (unsigned long long) cache->bstop
    + );
    +
    + if (n > buflen)
    + return -EMSGSIZE;
    +
    + if (copy_to_user(_buffer, buffer, n) != 0)
    + return -EFAULT;
    +
    + return n;
    +}
    +
    +/*
    + * command the cache
    + */
    +static ssize_t cachefiles_daemon_write(struct file *file,
    + const char __user *_data,
    + size_t datalen,
    + loff_t *pos)
    +{
    + const struct cachefiles_daemon_cmd *cmd;
    + struct cachefiles_cache *cache = file->private_data;
    + ssize_t ret;
    + char *data, *args, *cp;
    +
    + _enter(",,%zu,", datalen);
    +
    + ASSERT(cache);
    +
    + if (test_bit(CACHEFILES_DEAD, &cache->flags))
    + return -EIO;
    +
    + if (datalen > PAGE_SIZE - 1)
    + return -EOPNOTSUPP;
    +
    + /* drag the command string into the kernel so we can parse it */
    + data = kmalloc(datalen + 1, GFP_KERNEL);
    + if (!data)
    + return -ENOMEM;
    +
    + ret = -EFAULT;
    + if (copy_from_user(data, _data, datalen) != 0)
    + goto error;
    +
    + data[datalen] = '\0';
    +
    + ret = -EINVAL;
    + if (memchr(data, '\0', datalen))
    + goto error;
    +
    + /* strip any newline */
    + cp = memchr(data, '\n', datalen);
    + if (cp) {
    + if (cp == data)
    + goto error;
    +
    + *cp = '\0';
    + }
    +
    + /* parse the command */
    + ret = -EOPNOTSUPP;
    +
    + for (args = data; *args; args++)
    + if (isspace(*args))
    + break;
    + if (*args) {
    + if (args == data)
    + goto error;
    + *args = '\0';
    + for (args++; isspace(*args); args++)
    + continue;
    + }
    +
    + /* run the appropriate command handler */
    + for (cmd = cachefiles_daemon_cmds; cmd->name[0]; cmd++)
    + if (strcmp(cmd->name, data) == 0)
    + goto found_command;
    +
    +error:
    + kfree(data);
    + _leave(" = %zd", ret);
    + return ret;
    +
    +found_command:
    + mutex_lock(&cache->daemon_mutex);
    +
    + ret = -EIO;
    + if (!test_bit(CACHEFILES_DEAD, &cache->flags))
    + ret = cmd->handler(cache, args);
    +
    + mutex_unlock(&cache->daemon_mutex);
    +
    + if (ret == 0)
    + ret = datalen;
    + goto error;
    +}
    +
    +/*
    + * poll for culling state
    + * - use POLLOUT to indicate culling state
    + */
    +static unsigned int cachefiles_daemon_poll(struct file *file,
    + struct poll_table_struct *poll)
    +{
    + struct cachefiles_cache *cache = file->private_data;
    + unsigned int mask;
    +
    + poll_wait(file, &cache->daemon_pollwq, poll);
    + mask = 0;
    +
    + if (test_bit(CACHEFILES_STATE_CHANGED, &cache->flags))
    + mask |= POLLIN;
    +
    + if (test_bit(CACHEFILES_CULLING, &cache->flags))
    + mask |= POLLOUT;
    +
    + return mask;
    +}
    +
    +/*
    + * give a range error for cache space constraints
    + * - can be tail-called
    + */
    +static int cachefiles_daemon_range_error(struct cachefiles_cache *cache, char *args)
    +{
    + kerror("Free space limits must be in range"
    + " 0%%<=stop +
    + return -EINVAL;
    +}
    +
    +/*
    + * set the percentage of files at which to stop culling
    + * - command: "frun %"
    + */
    +static int cachefiles_daemon_frun(struct cachefiles_cache *cache, char *args)
    +{
    + unsigned long frun;
    +
    + _enter(",%s", args);
    +
    + if (!*args)
    + return -EINVAL;
    +
    + frun = simple_strtoul(args, &args, 10);
    + if (args[0] != '%' || args[1] != '\0')
    + return -EINVAL;
    +
    + if (frun <= cache->fcull_percent || frun >= 100)
    + return cachefiles_daemon_range_error(cache, args);
    +
    + cache->frun_percent = frun;
    + return 0;
    +}
    +
    +/*
    + * set the percentage of files at which to start culling
    + * - command: "fcull %"
    + */
    +static int cachefiles_daemon_fcull(struct cachefiles_cache *cache, char *args)
    +{
    + unsigned long fcull;
    +
    + _enter(",%s", args);
    +
    + if (!*args)
    + return -EINVAL;
    +
    + fcull = simple_strtoul(args, &args, 10);
    + if (args[0] != '%' || args[1] != '\0')
    + return -EINVAL;
    +
    + if (fcull <= cache->fstop_percent || fcull >= cache->frun_percent)
    + return cachefiles_daemon_range_error(cache, args);
    +
    + cache->fcull_percent = fcull;
    + return 0;
    +}
    +
    +/*
    + * set the percentage of files at which to stop allocating
    + * - command: "fstop %"
    + */
    +static int cachefiles_daemon_fstop(struct cachefiles_cache *cache, char *args)
    +{
    + unsigned long fstop;
    +
    + _enter(",%s", args);
    +
    + if (!*args)
    + return -EINVAL;
    +
    + fstop = simple_strtoul(args, &args, 10);
    + if (args[0] != '%' || args[1] != '\0')
    + return -EINVAL;
    +
    + if (fstop >= cache->fcull_percent)
    + return cachefiles_daemon_range_error(cache, args);
    +
    + cache->fstop_percent = fstop;
    + return 0;
    +}
    +
    +/*
    + * set the percentage of blocks at which to stop culling
    + * - command: "brun %"
    + */
    +static int cachefiles_daemon_brun(struct cachefiles_cache *cache, char *args)
    +{
    + unsigned long brun;
    +
    + _enter(",%s", args);
    +
    + if (!*args)
    + return -EINVAL;
    +
    + brun = simple_strtoul(args, &args, 10);
    + if (args[0] != '%' || args[1] != '\0')
    + return -EINVAL;
    +
    + if (brun <= cache->bcull_percent || brun >= 100)
    + return cachefiles_daemon_range_error(cache, args);
    +
    + cache->brun_percent = brun;
    + return 0;
    +}
    +
    +/*
    + * set the percentage of blocks at which to start culling
    + * - command: "bcull %"
    + */
    +static int cachefiles_daemon_bcull(struct cachefiles_cache *cache, char *args)
    +{
    + unsigned long bcull;
    +
    + _enter(",%s", args);
    +
    + if (!*args)
    + return -EINVAL;
    +
    + bcull = simple_strtoul(args, &args, 10);
    + if (args[0] != '%' || args[1] != '\0')
    + return -EINVAL;
    +
    + if (bcull <= cache->bstop_percent || bcull >= cache->brun_percent)
    + return cachefiles_daemon_range_error(cache, args);
    +
    + cache->bcull_percent = bcull;
    + return 0;
    +}
    +
    +/*
    + * set the percentage of blocks at which to stop allocating
    + * - command: "bstop %"
    + */
    +static int cachefiles_daemon_bstop(struct cachefiles_cache *cache, char *args)
    +{
    + unsigned long bstop;
    +
    + _enter(",%s", args);
    +
    + if (!*args)
    + return -EINVAL;
    +
    + bstop = simple_strtoul(args, &args, 10);
    + if (args[0] != '%' || args[1] != '\0')
    + return -EINVAL;
    +
    + if (bstop >= cache->bcull_percent)
    + return cachefiles_daemon_range_error(cache, args);
    +
    + cache->bstop_percent = bstop;
    + return 0;
    +}
    +
    +/*
    + * set the cache directory
    + * - command: "dir "
    + */
    +static int cachefiles_daemon_dir(struct cachefiles_cache *cache, char *args)
    +{
    + char *dir;
    +
    + _enter(",%s", args);
    +
    + if (!*args) {
    + kerror("Empty directory specified");
    + return -EINVAL;
    + }
    +
    + if (cache->rootdirname) {
    + kerror("Second cache directory specified");
    + return -EEXIST;
    + }
    +
    + dir = kstrdup(args, GFP_KERNEL);
    + if (!dir)
    + return -ENOMEM;
    +
    + cache->rootdirname = dir;
    + return 0;
    +}
    +
    +/*
    + * set the cache tag
    + * - command: "tag "
    + */
    +static int cachefiles_daemon_tag(struct cachefiles_cache *cache, char *args)
    +{
    + char *tag;
    +
    + _enter(",%s", args);
    +
    + if (!*args) {
    + kerror("Empty tag specified");
    + return -EINVAL;
    + }
    +
    + if (cache->tag)
    + return -EEXIST;
    +
    + tag = kstrdup(args, GFP_KERNEL);
    + if (!tag)
    + return -ENOMEM;
    +
    + cache->tag = tag;
    + return 0;
    +}
    +
    +/*
    + * request a node in the cache be culled from the current working directory
    + * - command: "cull "
    + */
    +static int cachefiles_daemon_cull(struct cachefiles_cache *cache, char *args)
    +{
    + struct fs_struct *fs;
    + struct dentry *dir;
    + struct cred *saved_cred;
    + int ret;
    +
    + _enter(",%s", args);
    +
    + if (strchr(args, '/'))
    + goto inval;
    +
    + if (!test_bit(CACHEFILES_READY, &cache->flags)) {
    + kerror("cull applied to unready cache");
    + return -EIO;
    + }
    +
    + if (test_bit(CACHEFILES_DEAD, &cache->flags)) {
    + kerror("cull applied to dead cache");
    + return -EIO;
    + }
    +
    + /* extract the directory dentry from the cwd */
    + fs = current->fs;
    + read_lock(&fs->lock);
    + dir = dget(fs->pwd);
    + read_unlock(&fs->lock);
    +
    + if (!S_ISDIR(dir->d_inode->i_mode))
    + goto notdir;
    +
    + cachefiles_begin_secure(cache, &saved_cred);
    + ret = cachefiles_cull(cache, dir, args);
    + cachefiles_end_secure(cache, saved_cred);
    +
    + dput(dir);
    + _leave(" = %d", ret);
    + return ret;
    +
    +notdir:
    + dput(dir);
    + kerror("cull command requires dirfd to be a directory");
    + return -ENOTDIR;
    +
    +inval:
    + kerror("cull command requires dirfd and filename");
    + return -EINVAL;
    +}
    +
    +/*
    + * set debugging mode
    + * - command: "debug "
    + */
    +static int cachefiles_daemon_debug(struct cachefiles_cache *cache, char *args)
    +{
    + unsigned long mask;
    +
    + _enter(",%s", args);
    +
    + mask = simple_strtoul(args, &args, 0);
    + if (args[0] != '\0')
    + goto inval;
    +
    + cachefiles_debug = mask;
    + _leave(" = 0");
    + return 0;
    +
    +inval:
    + kerror("debug command requires mask");
    + return -EINVAL;
    +}
    +
    +/*
    + * find out whether an object in the current working directory is in use or not
    + * - command: "inuse "
    + */
    +static int cachefiles_daemon_inuse(struct cachefiles_cache *cache, char *args)
    +{
    + struct fs_struct *fs;
    + struct dentry *dir;
    + struct cred *saved_cred;
    + int ret;
    +
    + _enter(",%s", args);
    +
    + if (strchr(args, '/'))
    + goto inval;
    +
    + if (!test_bit(CACHEFILES_READY, &cache->flags)) {
    + kerror("inuse applied to unready cache");
    + return -EIO;
    + }
    +
    + if (test_bit(CACHEFILES_DEAD, &cache->flags)) {
    + kerror("inuse applied to dead cache");
    + return -EIO;
    + }
    +
    + /* extract the directory dentry from the cwd */
    + fs = current->fs;
    + read_lock(&fs->lock);
    + dir = dget(fs->pwd);
    + read_unlock(&fs->lock);
    +
    + if (!S_ISDIR(dir->d_inode->i_mode))
    + goto notdir;
    +
    + cachefiles_begin_secure(cache, &saved_cred);
    + ret = cachefiles_check_in_use(cache, dir, args);
    + cachefiles_end_secure(cache, saved_cred);
    +
    + dput(dir);
    + _leave(" = %d", ret);
    + return ret;
    +
    +notdir:
    + dput(dir);
    + kerror("inuse command requires dirfd to be a directory");
    + return -ENOTDIR;
    +
    +inval:
    + kerror("inuse command requires dirfd and filename");
    + return -EINVAL;
    +}
    +
    +/*
    + * see if we have space for a number of pages and/or a number of files in the
    + * cache
    + */
    +int cachefiles_has_space(struct cachefiles_cache *cache,
    + unsigned fnr, unsigned bnr)
    +{
    + struct kstatfs stats;
    + int ret;
    +
    + _enter("{%llu,%llu,%llu,%llu,%llu,%llu},%u,%u",
    + (unsigned long long) cache->frun,
    + (unsigned long long) cache->fcull,
    + (unsigned long long) cache->fstop,
    + (unsigned long long) cache->brun,
    + (unsigned long long) cache->bcull,
    + (unsigned long long) cache->bstop,
    + fnr, bnr);
    +
    + /* find out how many pages of blockdev are available */
    + memset(&stats, 0, sizeof(stats));
    +
    + ret = vfs_statfs(cache->mnt->mnt_root, &stats);
    + if (ret < 0) {
    + if (ret == -EIO)
    + cachefiles_io_error(cache, "statfs failed");
    + _leave(" = %d", ret);
    + return ret;
    + }
    +
    + stats.f_bavail >>= cache->bshift;
    +
    + _debug("avail %llu,%llu",
    + (unsigned long long) stats.f_ffree,
    + (unsigned long long) stats.f_bavail);
    +
    + /* see if there is sufficient space */
    + if (stats.f_ffree > fnr)
    + stats.f_ffree -= fnr;
    + else
    + stats.f_ffree = 0;
    +
    + if (stats.f_bavail > bnr)
    + stats.f_bavail -= bnr;
    + else
    + stats.f_bavail = 0;
    +
    + ret = -ENOBUFS;
    + if (stats.f_ffree < cache->fstop ||
    + stats.f_bavail < cache->bstop)
    + goto begin_cull;
    +
    + ret = 0;
    + if (stats.f_ffree < cache->fcull ||
    + stats.f_bavail < cache->bcull)
    + goto begin_cull;
    +
    + if (test_bit(CACHEFILES_CULLING, &cache->flags) &&
    + stats.f_ffree >= cache->frun &&
    + stats.f_bavail >= cache->brun &&
    + test_and_clear_bit(CACHEFILES_CULLING, &cache->flags)
    + ) {
    + _debug("cease culling");
    + cachefiles_state_changed(cache);
    + }
    +
    + _leave(" = 0");
    + return 0;
    +
    +begin_cull:
    + if (!test_and_set_bit(CACHEFILES_CULLING, &cache->flags)) {
    + _debug("### CULL CACHE ###");
    + cachefiles_state_changed(cache);
    + }
    +
    + _leave(" = %d", ret);
    + return ret;
    +}
    diff --git a/fs/cachefiles/cf-interface.c b/fs/cachefiles/cf-interface.c
    new file mode 100644
    index 0000000..956ab6f
    --- /dev/null
    +++ b/fs/cachefiles/cf-interface.c
    @@ -0,0 +1,444 @@
    +/* FS-Cache interface to CacheFiles
    + *
    + * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
    + * Written by David Howells (dhowells@redhat.com)
    + *
    + * This program is free software; you can redistribute it and/or
    + * modify it under the terms of the GNU General Public Licence
    + * as published by the Free Software Foundation; either version
    + * 2 of the Licence, or (at your option) any later version.
    + */
    +
+#include <linux/mount.h>
+#include <linux/buffer_head.h>
    +#include "cf-internal.h"
    +
    +#define list_to_page(head) (list_entry((head)->prev, struct page, lru))
    +
    +struct cachefiles_lookup_data {
    + struct cachefiles_xattr *auxdata; /* auxiliary data */
    + char *key; /* key path */
    +};
    +
    +static int cachefiles_attr_changed(struct fscache_object *_object);
    +
    +/*
    + * allocate an object record for a cookie lookup and prepare the lookup data
    + */
    +static struct fscache_object *cachefiles_alloc_object(
    + struct fscache_cache *_cache,
    + struct fscache_cookie *cookie)
    +{
    + struct cachefiles_lookup_data *lookup_data;
    + struct cachefiles_object *object;
    + struct cachefiles_cache *cache;
    + struct cachefiles_xattr *auxdata;
    + unsigned keylen, auxlen;
    + void *buffer;
    + char *key;
    +
    + cache = container_of(_cache, struct cachefiles_cache, cache);
    +
    + _enter("{%s},%p,", cache->cache.identifier, cookie);
    +
+ lookup_data = kmalloc(sizeof(*lookup_data), GFP_KERNEL);
    + if (!lookup_data)
    + goto nomem_lookup_data;
    +
    + /* create a new object record and a temporary leaf image */
    + object = kmem_cache_alloc(cachefiles_object_jar, GFP_KERNEL);
    + if (!object)
    + goto nomem_object;
    +
    + ASSERTCMP(object->backer, ==, NULL);
    +
    + BUG_ON(test_bit(CACHEFILES_OBJECT_ACTIVE, &object->flags));
    + atomic_set(&object->usage, 1);
    +
    + fscache_object_init(&object->fscache);
    + object->fscache.cookie = cookie;
    + object->fscache.cache = &cache->cache;
    +
    + object->type = cookie->def->type;
    +
    + /* get hold of the raw key
    + * - stick the length on the front and leave space on the back for the
    + * encoder
    + */
    + buffer = kmalloc((2 + 512) + 3, GFP_KERNEL);
    + if (!buffer)
    + goto nomem_buffer;
    +
    + keylen = cookie->def->get_key(cookie->netfs_data, buffer + 2, 512);
    + ASSERTCMP(keylen, <, 512);
    +
    + *(uint16_t *)buffer = keylen;
    + ((char *)buffer)[keylen + 2] = 0;
    + ((char *)buffer)[keylen + 3] = 0;
    + ((char *)buffer)[keylen + 4] = 0;
    +
+ /* turn the raw key into something that can be used as a filename */
    + key = cachefiles_cook_key(buffer, keylen + 2, object->type);
    + if (!key)
    + goto nomem_key;
    +
    + /* get hold of the auxiliary data and prepend the object type */
    + auxdata = buffer;
    + auxlen = 0;
    + if (cookie->def->get_aux) {
    + auxlen = cookie->def->get_aux(cookie->netfs_data,
    + auxdata->data, 511);
    + ASSERTCMP(auxlen, <, 511);
    + }
    +
    + auxdata->len = auxlen + 1;
    + auxdata->type = cookie->def->type;
    +
    + lookup_data->auxdata = auxdata;
    + lookup_data->key = key;
    + object->lookup_data = lookup_data;
    +
    + _leave(" = %p [%p]", &object->fscache, lookup_data);
    + return &object->fscache;
    +
    +nomem_key:
    + kfree(buffer);
    +nomem_buffer:
    + BUG_ON(test_bit(CACHEFILES_OBJECT_ACTIVE, &object->flags));
    + kmem_cache_free(cachefiles_object_jar, object);
    +nomem_object:
    + kfree(lookup_data);
    +nomem_lookup_data:
    + _leave(" = -ENOMEM");
    + return ERR_PTR(-ENOMEM);
    +}
    +
    +/*
    + * attempt to look up the nominated node in this cache
    + */
    +static void cachefiles_lookup_object(struct fscache_object *_object)
    +{
    + struct cachefiles_lookup_data *lookup_data;
    + struct cachefiles_object *parent, *object;
    + struct cachefiles_cache *cache;
    + struct cred *saved_cred;
    + int ret;
    +
    + _enter("{OBJ%x}", _object->debug_id);
    +
    + cache = container_of(_object->cache, struct cachefiles_cache, cache);
    + parent = container_of(_object->parent,
    + struct cachefiles_object, fscache);
    + object = container_of(_object, struct cachefiles_object, fscache);
    + lookup_data = object->lookup_data;
    +
    + ASSERTCMP(lookup_data, !=, NULL);
    +
    + /* look up the key, creating any missing bits */
    + cachefiles_begin_secure(cache, &saved_cred);
    + ret = cachefiles_walk_to_object(parent, object,
    + lookup_data->key,
    + lookup_data->auxdata);
    + cachefiles_end_secure(cache, saved_cred);
    +
    + /* polish off by setting the attributes of non-index files */
    + if (ret == 0 &&
    + object->fscache.cookie->def->type != FSCACHE_COOKIE_TYPE_INDEX)
    + cachefiles_attr_changed(&object->fscache);
    +
    + if (ret < 0)
    + fscache_object_lookup_error(&object->fscache);
    +
    + _leave(" [%d]", ret);
    +}
    +
    +/*
    + * indication of lookup completion
    + */
    +static void cachefiles_lookup_complete(struct fscache_object *_object)
    +{
    + struct cachefiles_object *object;
    +
    + object = container_of(_object, struct cachefiles_object, fscache);
    +
    + _enter("{OBJ%x,%p}", object->fscache.debug_id, object->lookup_data);
    +
    + if (object->lookup_data) {
    + kfree(object->lookup_data->key);
    + kfree(object->lookup_data->auxdata);
    + kfree(object->lookup_data);
    + object->lookup_data = NULL;
    + }
    +}
    +
    +/*
    + * increment the usage count on an inode object (may fail if unmounting)
    + */
    +static struct fscache_object *cachefiles_grab_object(struct fscache_object *_object)
    +{
    + struct cachefiles_object *object;
    +
    + _enter("{OBJ%x}", _object->debug_id);
    +
    + object = container_of(_object, struct cachefiles_object, fscache);
    +
    +#ifdef CACHEFILES_DEBUG_SLAB
    + ASSERT((atomic_read(&object->usage) & 0xffff0000) != 0x6b6b0000);
    +#endif
    +
    + atomic_inc(&object->usage);
    + return &object->fscache;
    +}
    +
+/*
+ * update the auxiliary data for an object on disk
+ */
    +static void cachefiles_update_object(struct fscache_object *_object)
    +{
    + struct cachefiles_object *object;
    + struct cachefiles_xattr *auxdata;
    + struct cachefiles_cache *cache;
    + struct fscache_cookie *cookie;
    + struct cred *saved_cred;
    + unsigned auxlen;
    +
    + _enter("{OBJ%x}", _object->debug_id);
    +
    + object = container_of(_object, struct cachefiles_object, fscache);
    + cache = container_of(object->fscache.cache, struct cachefiles_cache,
    + cache);
    + cookie = object->fscache.cookie;
    +
    + if (!cookie->def->get_aux) {
    + _leave(" [no aux]");
    + return;
    + }
    +
    + auxdata = kmalloc(2 + 512 + 3, GFP_KERNEL);
    + if (!auxdata) {
    + _leave(" [nomem]");
    + return;
    + }
    +
    + auxlen = cookie->def->get_aux(cookie->netfs_data, auxdata->data, 511);
    + ASSERTCMP(auxlen, <, 511);
    +
    + auxdata->len = auxlen + 1;
    + auxdata->type = cookie->def->type;
    +
    + cachefiles_begin_secure(cache, &saved_cred);
    + cachefiles_update_object_xattr(object, auxdata);
    + cachefiles_end_secure(cache, saved_cred);
    + kfree(auxdata);
    + _leave("");
    +}
    +
    +/*
    + * discard the resources pinned by an object and effect retirement if
    + * requested
    + */
    +static void cachefiles_drop_object(struct fscache_object *_object)
    +{
    + struct cachefiles_object *object;
    + struct cachefiles_cache *cache;
    + struct cred *saved_cred;
    +
    + ASSERT(_object);
    +
    + object = container_of(_object, struct cachefiles_object, fscache);
    +
    + _enter("{OBJ%x,%d}",
    + object->fscache.debug_id, atomic_read(&object->usage));
    +
    + cache = container_of(object->fscache.cache,
    + struct cachefiles_cache, cache);
    +
    +#ifdef CACHEFILES_DEBUG_SLAB
    + ASSERT((atomic_read(&object->usage) & 0xffff0000) != 0x6b6b0000);
    +#endif
    +
    + /* delete retired objects */
    + if (object->fscache.state == FSCACHE_OBJECT_RECYCLING &&
    + _object != cache->cache.fsdef
    + ) {
    + _debug("- retire object OBJ%x", object->fscache.debug_id);
    + cachefiles_begin_secure(cache, &saved_cred);
    + cachefiles_delete_object(cache, object);
    + cachefiles_end_secure(cache, saved_cred);
    + }
    +
    + /* close the filesystem stuff attached to the object */
    + if (object->backer != object->dentry)
    + dput(object->backer);
    + object->backer = NULL;
    +
    + /* note that the object is now inactive */
    + if (test_bit(CACHEFILES_OBJECT_ACTIVE, &object->flags)) {
    + write_lock(&cache->active_lock);
    + if (!test_and_clear_bit(CACHEFILES_OBJECT_ACTIVE,
    + &object->flags))
    + BUG();
    + rb_erase(&object->active_node, &cache->active_nodes);
    + wake_up_bit(&object->flags, CACHEFILES_OBJECT_ACTIVE);
    + write_unlock(&cache->active_lock);
    + }
    +
    + dput(object->dentry);
    + object->dentry = NULL;
    +
    + _leave("");
    +}
    +
    +/*
    + * dispose of a reference to an object
    + */
    +static void cachefiles_put_object(struct fscache_object *_object)
    +{
    + struct cachefiles_object *object;
    +
    + ASSERT(_object);
    +
    + object = container_of(_object, struct cachefiles_object, fscache);
    +
    + _enter("{OBJ%x,%d}",
    + object->fscache.debug_id, atomic_read(&object->usage));
    +
    +#ifdef CACHEFILES_DEBUG_SLAB
    + ASSERT((atomic_read(&object->usage) & 0xffff0000) != 0x6b6b0000);
    +#endif
    +
    + ASSERTIFCMP(object->fscache.parent,
    + object->fscache.parent->n_children, >, 0);
    +
    + if (atomic_dec_and_test(&object->usage)) {
    + _debug("- kill object OBJ%x", object->fscache.debug_id);
    +
    + ASSERT(!test_bit(CACHEFILES_OBJECT_ACTIVE, &object->flags));
    + ASSERTCMP(object->fscache.parent, ==, NULL);
    + ASSERTCMP(object->backer, ==, NULL);
    + ASSERTCMP(object->dentry, ==, NULL);
    + ASSERTCMP(object->fscache.n_ops, ==, 0);
    + ASSERTCMP(object->fscache.n_children, ==, 0);
    +
    + if (object->lookup_data) {
    + kfree(object->lookup_data->key);
    + kfree(object->lookup_data->auxdata);
    + kfree(object->lookup_data);
    + object->lookup_data = NULL;
    + }
    +
    + kmem_cache_free(cachefiles_object_jar, object);
    + }
    +
    + _leave("");
    +}
    +
    +/*
    + * sync a cache
    + */
    +static void cachefiles_sync_cache(struct fscache_cache *_cache)
    +{
    + struct cachefiles_cache *cache;
    + struct cred *saved_cred;
    + int ret;
    +
    + _enter("%p", _cache);
    +
    + cache = container_of(_cache, struct cachefiles_cache, cache);
    +
    + /* make sure all pages pinned by operations on behalf of the netfs are
    + * written to disc */
    + cachefiles_begin_secure(cache, &saved_cred);
    + ret = fsync_super(cache->mnt->mnt_sb);
    + cachefiles_end_secure(cache, saved_cred);
    +
    + if (ret == -EIO)
    + cachefiles_io_error(cache,
    + "Attempt to sync backing fs superblock"
    + " returned error %d",
    + ret);
    +}
    +
+/*
+ * notification that the attributes on an object have changed
+ * - called with reads/writes excluded by FS-Cache
+ */
    +static int cachefiles_attr_changed(struct fscache_object *_object)
    +{
    + struct cachefiles_object *object;
    + struct cachefiles_cache *cache;
    + struct iattr newattrs;
    + struct cred *saved_cred;
    + uint64_t ni_size;
    + loff_t oi_size;
    + int ret;
    +
    + _object->cookie->def->get_attr(_object->cookie->netfs_data, &ni_size);
    +
    + _enter("{OBJ%x},[%llu]",
    + _object->debug_id, (unsigned long long) ni_size);
    +
    + object = container_of(_object, struct cachefiles_object, fscache);
    + cache = container_of(object->fscache.cache,
    + struct cachefiles_cache, cache);
    +
    + if (ni_size == object->i_size)
    + return 0;
    +
    + if (!object->backer)
    + return -ENOBUFS;
    +
    + ASSERT(S_ISREG(object->backer->d_inode->i_mode));
    +
    + fscache_set_store_limit(&object->fscache, ni_size);
    +
    + oi_size = i_size_read(object->backer->d_inode);
    + if (oi_size == ni_size)
    + return 0;
    +
    + newattrs.ia_size = ni_size;
    + newattrs.ia_valid = ATTR_SIZE;
    +
    + cachefiles_begin_secure(cache, &saved_cred);
    + mutex_lock(&object->backer->d_inode->i_mutex);
    + ret = notify_change(object->backer, &newattrs);
    + mutex_unlock(&object->backer->d_inode->i_mutex);
    + cachefiles_end_secure(cache, saved_cred);
    +
    + if (ret == -EIO) {
    + fscache_set_store_limit(&object->fscache, 0);
    + cachefiles_io_error_obj(object, "Size set failed");
    + ret = -ENOBUFS;
    + }
    +
    + _leave(" = %d", ret);
    + return ret;
    +}
    +
    +/*
    + * dissociate a cache from all the pages it was backing
    + */
    +static void cachefiles_dissociate_pages(struct fscache_cache *cache)
    +{
    + _enter("");
    +}
    +
    +const struct fscache_cache_ops cachefiles_cache_ops = {
    + .name = "cachefiles",
    + .alloc_object = cachefiles_alloc_object,
    + .lookup_object = cachefiles_lookup_object,
    + .lookup_complete = cachefiles_lookup_complete,
    + .grab_object = cachefiles_grab_object,
    + .update_object = cachefiles_update_object,
    + .drop_object = cachefiles_drop_object,
    + .put_object = cachefiles_put_object,
    + .sync_cache = cachefiles_sync_cache,
    + .attr_changed = cachefiles_attr_changed,
    + .read_or_alloc_page = cachefiles_read_or_alloc_page,
    + .read_or_alloc_pages = cachefiles_read_or_alloc_pages,
    + .allocate_page = cachefiles_allocate_page,
    + .allocate_pages = cachefiles_allocate_pages,
    + .write_page = cachefiles_write_page,
    + .uncache_page = cachefiles_uncache_page,
    + .dissociate_pages = cachefiles_dissociate_pages,
    +};
    diff --git a/fs/cachefiles/cf-internal.h b/fs/cachefiles/cf-internal.h
    new file mode 100644
    index 0000000..e62af5d
    --- /dev/null
    +++ b/fs/cachefiles/cf-internal.h
    @@ -0,0 +1,369 @@
    +/* General netfs cache on cache files internal defs
    + *
    + * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
    + * Written by David Howells (dhowells@redhat.com)
    + *
    + * This program is free software; you can redistribute it and/or
    + * modify it under the terms of the GNU General Public Licence
    + * as published by the Free Software Foundation; either version
    + * 2 of the Licence, or (at your option) any later version.
    + */
    +
+#include <linux/fscache-cache.h>
+#include <linux/timer.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+#include <linux/security.h>
    +
    +struct cachefiles_cache;
    +struct cachefiles_object;
    +
    +extern unsigned cachefiles_debug;
    +#define CACHEFILES_DEBUG_KENTER 1
    +#define CACHEFILES_DEBUG_KLEAVE 2
    +#define CACHEFILES_DEBUG_KDEBUG 4
    +
    +/*
    + * node records
    + */
    +struct cachefiles_object {
    + struct fscache_object fscache; /* fscache handle */
    + struct cachefiles_lookup_data *lookup_data; /* cached lookup data */
    + struct dentry *dentry; /* the file/dir representing this object */
    + struct dentry *backer; /* backing file */
    + loff_t i_size; /* object size */
    + unsigned long flags;
    +#define CACHEFILES_OBJECT_ACTIVE 0 /* T if marked active */
    + atomic_t usage; /* object usage count */
    + uint8_t type; /* object type */
    + uint8_t new; /* T if object new */
    + spinlock_t work_lock;
    + struct rb_node active_node; /* link in active tree (dentry is key) */
    +};
    +
    +extern struct kmem_cache *cachefiles_object_jar;
    +
    +/*
    + * Cache files cache definition
    + */
    +struct cachefiles_cache {
    + struct fscache_cache cache; /* FS-Cache record */
    + struct vfsmount *mnt; /* mountpoint holding the cache */
    + struct dentry *graveyard; /* directory into which dead objects go */
    + struct file *cachefilesd; /* manager daemon handle */
    + struct cred *cache_cred; /* credentials for accessing cache */
    + struct mutex daemon_mutex; /* command serialisation mutex */
    + wait_queue_head_t daemon_pollwq; /* poll waitqueue for daemon */
    + struct rb_root active_nodes; /* active nodes (can't be culled) */
    + rwlock_t active_lock; /* lock for active_nodes */
    + atomic_t gravecounter; /* graveyard uniquifier */
    + unsigned frun_percent; /* when to stop culling (% files) */
    + unsigned fcull_percent; /* when to start culling (% files) */
    + unsigned fstop_percent; /* when to stop allocating (% files) */
    + unsigned brun_percent; /* when to stop culling (% blocks) */
    + unsigned bcull_percent; /* when to start culling (% blocks) */
    + unsigned bstop_percent; /* when to stop allocating (% blocks) */
    + unsigned bsize; /* cache's block size */
    + unsigned bshift; /* min(ilog2(PAGE_SIZE / bsize), 0) */
    + uint64_t frun; /* when to stop culling */
    + uint64_t fcull; /* when to start culling */
    + uint64_t fstop; /* when to stop allocating */
    + sector_t brun; /* when to stop culling */
    + sector_t bcull; /* when to start culling */
    + sector_t bstop; /* when to stop allocating */
    + unsigned long flags;
    +#define CACHEFILES_READY 0 /* T if cache prepared */
    +#define CACHEFILES_DEAD 1 /* T if cache dead */
    +#define CACHEFILES_CULLING 2 /* T if cull engaged */
    +#define CACHEFILES_STATE_CHANGED 3 /* T if state changed (poll trigger) */
    + char *rootdirname; /* name of cache root directory */
    + char *tag; /* cache binding tag */
    +};
    +
    +/*
    + * backing file read tracking
    + */
    +struct cachefiles_one_read {
    + wait_queue_t monitor; /* link into monitored waitqueue */
    + struct page *back_page; /* backing file page we're waiting for */
    + struct page *netfs_page; /* netfs page we're going to fill */
    + struct fscache_retrieval *op; /* retrieval op covering this */
    + struct list_head op_link; /* link in op's todo list */
    +};
    +
    +/*
    + * backing file write tracking
    + */
    +struct cachefiles_one_write {
    + struct page *netfs_page; /* netfs page to copy */
    + struct cachefiles_object *object;
    + struct list_head obj_link; /* link in object's lists */
    + fscache_rw_complete_t end_io_func;
    + void *context;
    +};
    +
    +/*
    + * auxiliary data xattr buffer
    + */
    +struct cachefiles_xattr {
    + uint16_t len;
    + uint8_t type;
    + uint8_t data[];
    +};
    +
    +/*
    + * note change of state for daemon
    + */
    +static inline void cachefiles_state_changed(struct cachefiles_cache *cache)
    +{
    + set_bit(CACHEFILES_STATE_CHANGED, &cache->flags);
    + wake_up_all(&cache->daemon_pollwq);
    +}
    +
    +/*
    + * cf-bind.c
    + */
    +extern int cachefiles_daemon_bind(struct cachefiles_cache *cache, char *args);
    +extern void cachefiles_daemon_unbind(struct cachefiles_cache *cache);
    +
    +/*
    + * cf-daemon.c
    + */
    +extern const struct file_operations cachefiles_daemon_fops;
    +
    +extern int cachefiles_has_space(struct cachefiles_cache *cache,
    + unsigned fnr, unsigned bnr);
    +
    +/*
    + * cf-interface.c
    + */
    +extern const struct fscache_cache_ops cachefiles_cache_ops;
    +
    +/*
    + * cf-key.c
    + */
    +extern char *cachefiles_cook_key(const u8 *raw, int keylen, uint8_t type);
    +
    +/*
    + * cf-namei.c
    + */
    +extern int cachefiles_delete_object(struct cachefiles_cache *cache,
    + struct cachefiles_object *object);
    +extern int cachefiles_walk_to_object(struct cachefiles_object *parent,
    + struct cachefiles_object *object,
    + const char *key,
    + struct cachefiles_xattr *auxdata);
    +extern struct dentry *cachefiles_get_directory(struct cachefiles_cache *cache,
    + struct dentry *dir,
    + const char *name);
    +
    +extern int cachefiles_cull(struct cachefiles_cache *cache, struct dentry *dir,
    + char *filename);
    +
    +extern int cachefiles_check_in_use(struct cachefiles_cache *cache,
    + struct dentry *dir, char *filename);
    +
    +/*
    + * cf-proc.c
    + */
    +#ifdef CONFIG_CACHEFILES_HISTOGRAM
    +extern atomic_t cachefiles_lookup_histogram[HZ];
    +extern atomic_t cachefiles_mkdir_histogram[HZ];
    +extern atomic_t cachefiles_create_histogram[HZ];
    +
    +extern int __init cachefiles_proc_init(void);
    +extern void cachefiles_proc_cleanup(void);
    +static inline
    +void cachefiles_hist(atomic_t histogram[], unsigned long start_jif)
    +{
    + unsigned long jif = jiffies - start_jif;
    + if (jif >= HZ)
    + jif = HZ - 1;
    + atomic_inc(&histogram[jif]);
    +}
    +
    +#else
    +#define cachefiles_proc_init() (0)
    +#define cachefiles_proc_cleanup() do {} while(0)
    +#define cachefiles_hist(hist, start_jif) do {} while(0)
    +#endif
    +
    +/*
    + * cf-rdwr.c
    + */
    +extern int cachefiles_read_or_alloc_page(struct fscache_retrieval *,
    + struct page *, gfp_t);
    +extern int cachefiles_read_or_alloc_pages(struct fscache_retrieval *,
    + struct list_head *, unsigned *,
    + gfp_t);
    +extern int cachefiles_allocate_page(struct fscache_retrieval *, struct page *,
    + gfp_t);
    +extern int cachefiles_allocate_pages(struct fscache_retrieval *,
    + struct list_head *, unsigned *, gfp_t);
    +extern int cachefiles_write_page(struct fscache_storage *, struct page *);
    +extern void cachefiles_uncache_page(struct fscache_object *, struct page *);
    +
    +/*
    + * cf-security.c
    + */
    +extern int cachefiles_get_security_ID(struct cachefiles_cache *cache);
    +extern int cachefiles_determine_cache_secid(struct cachefiles_cache *cache,
    + struct dentry *root);
    +
    +static inline void cachefiles_begin_secure(struct cachefiles_cache *cache,
    + struct cred **_saved_cred)
    +{
    + *_saved_cred = __set_current_cred(get_cred(cache->cache_cred));
    +}
    +
    +static inline void cachefiles_end_secure(struct cachefiles_cache *cache,
    + struct cred *saved_cred)
    +{
    + set_current_cred(saved_cred);
    +}
    +
    +/*
    + * cf-xattr.c
    + */
    +extern int cachefiles_check_object_type(struct cachefiles_object *object);
    +extern int cachefiles_set_object_xattr(struct cachefiles_object *object,
    + struct cachefiles_xattr *auxdata);
    +extern int cachefiles_update_object_xattr(struct cachefiles_object *object,
    + struct cachefiles_xattr *auxdata);
    +extern int cachefiles_check_object_xattr(struct cachefiles_object *object,
    + struct cachefiles_xattr *auxdata);
    +extern int cachefiles_remove_object_xattr(struct cachefiles_cache *cache,
    + struct dentry *dentry);
    +
    +
    +/*
    + * error handling
    + */
    +#define kerror(FMT,...) printk(KERN_ERR "CacheFiles: "FMT"\n" ,##__VA_ARGS__);
    +
    +#define cachefiles_io_error(___cache, FMT, ...) \
    +do { \
    + kerror("I/O Error: " FMT ,##__VA_ARGS__); \
    + fscache_io_error(&(___cache)->cache); \
    + set_bit(CACHEFILES_DEAD, &(___cache)->flags); \
    +} while(0)
    +
    +#define cachefiles_io_error_obj(object, FMT, ...) \
    +do { \
    + struct cachefiles_cache *___cache; \
    + \
    + ___cache = container_of((object)->fscache.cache, \
    + struct cachefiles_cache, cache); \
    + cachefiles_io_error(___cache, FMT ,##__VA_ARGS__); \
    +} while(0)
    +
    +
    +/*
    + * debug tracing
    + */
    +#define dbgprintk(FMT,...) \
    + printk("[%-6.6s] "FMT"\n",current->comm ,##__VA_ARGS__)
    +
    +/* make sure we maintain the format strings, even when debugging is disabled */
    +static inline void _dbprintk(const char *fmt, ...)
    + __attribute__((format(printf,1,2)));
    +static inline void _dbprintk(const char *fmt, ...)
    +{
    +}
    +
    +#define kenter(FMT,...) dbgprintk("==> %s("FMT")",__FUNCTION__ ,##__VA_ARGS__)
    +#define kleave(FMT,...) dbgprintk("<== %s()"FMT"",__FUNCTION__ ,##__VA_ARGS__)
    +#define kdebug(FMT,...) dbgprintk(FMT ,##__VA_ARGS__)
    +
    +
    +#if defined(__KDEBUG)
    +#define _enter(FMT,...) kenter(FMT,##__VA_ARGS__)
    +#define _leave(FMT,...) kleave(FMT,##__VA_ARGS__)
    +#define _debug(FMT,...) kdebug(FMT,##__VA_ARGS__)
    +
    +#elif defined(CONFIG_CACHEFILES_DEBUG)
    +#define _enter(FMT,...) \
    +do { \
    + if (cachefiles_debug & CACHEFILES_DEBUG_KENTER) \
    + kenter(FMT,##__VA_ARGS__); \
    +} while (0)
    +
    +#define _leave(FMT,...) \
    +do { \
    + if (cachefiles_debug & CACHEFILES_DEBUG_KLEAVE) \
    + kleave(FMT,##__VA_ARGS__); \
    +} while (0)
    +
    +#define _debug(FMT,...) \
    +do { \
    + if (cachefiles_debug & CACHEFILES_DEBUG_KDEBUG) \
    + kdebug(FMT,##__VA_ARGS__); \
    +} while (0)
    +
    +#else
    +#define _enter(FMT,...) _dbprintk("==> %s("FMT")",__FUNCTION__ ,##__VA_ARGS__)
    +#define _leave(FMT,...) _dbprintk("<== %s()"FMT"",__FUNCTION__ ,##__VA_ARGS__)
    +#define _debug(FMT,...) _dbprintk(FMT ,##__VA_ARGS__)
    +#endif
    +
    +#if 1 // defined(__KDEBUGALL)
    +
    +#define ASSERT(X) \
    +do { \
    + if (unlikely(!(X))) { \
    + printk(KERN_ERR "\n"); \
    + printk(KERN_ERR "CacheFiles: Assertion failed\n"); \
    + BUG(); \
    + } \
    +} while(0)
    +
    +#define ASSERTCMP(X, OP, Y) \
    +do { \
    + if (unlikely(!((X) OP (Y)))) { \
    + printk(KERN_ERR "\n"); \
    + printk(KERN_ERR "CacheFiles: Assertion failed\n"); \
    + printk(KERN_ERR "%lx " #OP " %lx is false\n", \
    + (unsigned long)(X), (unsigned long)(Y)); \
    + BUG(); \
    + } \
    +} while(0)
    +
    +#define ASSERTIF(C, X) \
    +do { \
    + if (unlikely((C) && !(X))) { \
    + printk(KERN_ERR "\n"); \
    + printk(KERN_ERR "CacheFiles: Assertion failed\n"); \
    + BUG(); \
    + } \
    +} while(0)
    +
    +#define ASSERTIFCMP(C, X, OP, Y) \
    +do { \
    + if (unlikely((C) && !((X) OP (Y)))) { \
    + printk(KERN_ERR "\n"); \
    + printk(KERN_ERR "CacheFiles: Assertion failed\n"); \
    + printk(KERN_ERR "%lx " #OP " %lx is false\n", \
    + (unsigned long)(X), (unsigned long)(Y)); \
    + BUG(); \
    + } \
    +} while(0)
    +
    +#else
    +
    +#define ASSERT(X) \
    +do { \
    +} while(0)
    +
    +#define ASSERTCMP(X, OP, Y) \
    +do { \
    +} while(0)
    +
    +#define ASSERTIF(C, X) \
    +do { \
    +} while(0)
    +
    +#define ASSERTIFCMP(C, X, OP, Y) \
    +do { \
    +} while(0)
    +
    +#endif
    diff --git a/fs/cachefiles/cf-key.c b/fs/cachefiles/cf-key.c
    new file mode 100644
    index 0000000..6956eec
    --- /dev/null
    +++ b/fs/cachefiles/cf-key.c
    @@ -0,0 +1,159 @@
    +/* Key to pathname encoder
    + *
    + * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
    + * Written by David Howells (dhowells@redhat.com)
    + *
    + * This program is free software; you can redistribute it and/or
    + * modify it under the terms of the GNU General Public Licence
    + * as published by the Free Software Foundation; either version
    + * 2 of the Licence, or (at your option) any later version.
    + */
    +
+#include <linux/slab.h>
    +#include "cf-internal.h"
    +
    +static const char cachefiles_charmap[64] =
    + "0123456789" /* 0 - 9 */
    + "abcdefghijklmnopqrstuvwxyz" /* 10 - 35 */
    + "ABCDEFGHIJKLMNOPQRSTUVWXYZ" /* 36 - 61 */
    + "_-" /* 62 - 63 */
    + ;
    +
    +static const char cachefiles_filecharmap[256] = {
    + /* we skip space and tab and control chars */
    + [ 33 ... 46 ] = 1, /* '!' -> '.' */
    + /* we skip '/' as it's significant to pathwalk */
    + [ 48 ... 127 ] = 1, /* '0' -> '~' */
    +};
    +
    +/*
    + * turn the raw key into something cooked
    + * - the raw key should include the length in the two bytes at the front
    + * - the key may be up to 514 bytes in length (including the length word)
    + * - "base64" encode the strange keys, mapping 3 bytes of raw to four of
    + * cooked
    + * - need to cut the cooked key into 252 char lengths (189 raw bytes)
    + */
    +char *cachefiles_cook_key(const u8 *raw, int keylen, uint8_t type)
    +{
    + unsigned char csum, ch;
    + unsigned int acc;
    + char *key;
    + int loop, len, max, seg, mark, print;
    +
    + _enter(",%d", keylen);
    +
    + BUG_ON(keylen < 2 || keylen > 514);
    +
    + csum = raw[0] + raw[1];
    + print = 1;
    + for (loop = 2; loop < keylen; loop++) {
    + ch = raw[loop];
    + csum += ch;
    + print &= cachefiles_filecharmap[ch];
    + }
    +
    + if (print) {
    + /* if the path is usable ASCII, then we render it directly */
    + max = keylen - 2;
    + max += 2; /* two base64'd length chars on the front */
    + max += 5; /* @checksum/M */
    + max += 3 * 2; /* maximum number of segment dividers (".../M")
    + * is ((514 + 251) / 252) = 3
    + */
    + max += 1; /* NUL on end */
    + } else {
    + /* calculate the maximum length of the cooked key */
    + keylen = (keylen + 2) / 3;
    +
    + max = keylen * 4;
    + max += 5; /* @checksum/M */
    + max += 3 * 2; /* maximum number of segment dividers (".../M")
    + * is ((514 + 188) / 189) = 3
    + */
    + max += 1; /* NUL on end */
    + }
    +
    + max += 1; /* 2nd NUL on end */
    +
    + _debug("max: %d", max);
    +
    + key = kmalloc(max, GFP_KERNEL);
    + if (!key)
    + return NULL;
    +
    + len = 0;
    +
    + /* build the cooked key */
    + sprintf(key, "@%02x%c+", (unsigned) csum, 0);
    + len = 5;
    + mark = len - 1;
    +
    + if (print) {
    + acc = *(uint16_t *) raw;
    + raw += 2;
    +
    + key[len + 1] = cachefiles_charmap[acc & 63];
    + acc >>= 6;
    + key[len] = cachefiles_charmap[acc & 63];
    + len += 2;
    +
    + seg = 250;
    + for (loop = keylen; loop > 0; loop--) {
    + if (seg <= 0) {
    + key[len++] = '\0';
    + mark = len;
    + key[len++] = '+';
    + seg = 252;
    + }
    +
    + key[len++] = *raw++;
    + ASSERT(len < max);
    + }
    +
    + switch (type) {
    + case FSCACHE_COOKIE_TYPE_INDEX: type = 'I'; break;
    + case FSCACHE_COOKIE_TYPE_DATAFILE: type = 'D'; break;
    + default: type = 'S'; break;
    + }
    + } else {
    + seg = 252;
    + for (loop = keylen; loop > 0; loop--) {
    + if (seg <= 0) {
    + key[len++] = '\0';
    + mark = len;
    + key[len++] = '+';
    + seg = 252;
    + }
    +
    + acc = *raw++;
    + acc |= *raw++ << 8;
    + acc |= *raw++ << 16;
    +
    + _debug("acc: %06x", acc);
    +
    + key[len++] = cachefiles_charmap[acc & 63];
    + acc >>= 6;
    + key[len++] = cachefiles_charmap[acc & 63];
    + acc >>= 6;
    + key[len++] = cachefiles_charmap[acc & 63];
    + acc >>= 6;
    + key[len++] = cachefiles_charmap[acc & 63];
    +
    + ASSERT(len < max);
    + }
    +
    + switch (type) {
    + case FSCACHE_COOKIE_TYPE_INDEX: type = 'J'; break;
    + case FSCACHE_COOKIE_TYPE_DATAFILE: type = 'E'; break;
    + default: type = 'T'; break;
    + }
    + }
    +
    + key[mark] = type;
    + key[len++] = 0;
    + key[len] = 0;
    +
    + _leave(" = %p %d", key, len);
    + return key;
    +}
    diff --git a/fs/cachefiles/cf-main.c b/fs/cachefiles/cf-main.c
    new file mode 100644
    index 0000000..67bfdb3
    --- /dev/null
    +++ b/fs/cachefiles/cf-main.c
    @@ -0,0 +1,109 @@
    +/* Network filesystem caching backend to use cache files on a premounted
    + * filesystem
    + *
    + * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
    + * Written by David Howells (dhowells@redhat.com)
    + *
    + * This program is free software; you can redistribute it and/or
    + * modify it under the terms of the GNU General Public Licence
    + * as published by the Free Software Foundation; either version
    + * 2 of the Licence, or (at your option) any later version.
    + */
    +
    +#include <linux/module.h>
    +#include <linux/moduleparam.h>
    +#include <linux/init.h>
    +#include <linux/sched.h>
    +#include <linux/completion.h>
    +#include <linux/slab.h>
    +#include <linux/fs.h>
    +#include <linux/file.h>
    +#include <linux/namei.h>
    +#include <linux/mount.h>
    +#include <linux/statfs.h>
    +#include <linux/miscdevice.h>
    +#include "cf-internal.h"
    +
    +unsigned cachefiles_debug;
    +module_param_named(debug, cachefiles_debug, uint, S_IWUSR | S_IRUGO);
    +MODULE_PARM_DESC(cachefiles_debug, "CacheFiles debugging mask");
    +
    +MODULE_DESCRIPTION("Mounted-filesystem based cache");
    +MODULE_AUTHOR("Red Hat, Inc.");
    +MODULE_LICENSE("GPL");
    +
    +struct kmem_cache *cachefiles_object_jar;
    +
    +static struct miscdevice cachefiles_dev = {
    + .minor = MISC_DYNAMIC_MINOR,
    + .name = "cachefiles",
    + .fops = &cachefiles_daemon_fops,
    +};
    +
    +static void cachefiles_object_init_once(void *_object,
    + struct kmem_cache *cachep,
    + unsigned long flags)
    +{
    + struct cachefiles_object *object = _object;
    +
    + memset(object, 0, sizeof(*object));
    + fscache_object_init(&object->fscache);
    + spin_lock_init(&object->work_lock);
    +}
    +
    +/*
    + * initialise the fs caching module
    + */
    +static int __init cachefiles_init(void)
    +{
    + int ret;
    +
    + ret = misc_register(&cachefiles_dev);
    + if (ret < 0)
    + goto error_dev;
    +
    + /* create an object jar */
    + ret = -ENOMEM;
    + cachefiles_object_jar =
    + kmem_cache_create("cachefiles_object_jar",
    + sizeof(struct cachefiles_object),
    + 0,
    + SLAB_HWCACHE_ALIGN,
    + cachefiles_object_init_once);
    + if (!cachefiles_object_jar) {
    + printk(KERN_NOTICE
    + "CacheFiles: Failed to allocate an object jar\n");
    + goto error_object_jar;
    + }
    +
    + ret = cachefiles_proc_init();
    + if (ret < 0)
    + goto error_proc;
    +
    + printk(KERN_INFO "CacheFiles: Loaded\n");
    + return 0;
    +
    +error_proc:
    + kmem_cache_destroy(cachefiles_object_jar);
    +error_object_jar:
    + misc_deregister(&cachefiles_dev);
    +error_dev:
    + kerror("failed to register: %d", ret);
    + return ret;
    +}
    +
    +fs_initcall(cachefiles_init);
    +
    +/*
    + * clean up on module removal
    + */
    +static void __exit cachefiles_exit(void)
    +{
    + printk(KERN_INFO "CacheFiles: Unloading\n");
    +
    + cachefiles_proc_cleanup();
    + kmem_cache_destroy(cachefiles_object_jar);
    + misc_deregister(&cachefiles_dev);
    +}
    +
    +module_exit(cachefiles_exit);
    diff --git a/fs/cachefiles/cf-namei.c b/fs/cachefiles/cf-namei.c
    new file mode 100644
    index 0000000..809ddcb
    --- /dev/null
    +++ b/fs/cachefiles/cf-namei.c
    @@ -0,0 +1,743 @@
    +/* CacheFiles path walking and related routines
    + *
    + * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
    + * Written by David Howells (dhowells@redhat.com)
    + *
    + * This program is free software; you can redistribute it and/or
    + * modify it under the terms of the GNU General Public Licence
    + * as published by the Free Software Foundation; either version
    + * 2 of the Licence, or (at your option) any later version.
    + */
    +
    +#include <linux/module.h>
    +#include <linux/sched.h>
    +#include <linux/file.h>
    +#include <linux/fs.h>
    +#include <linux/fsnotify.h>
    +#include <linux/quotaops.h>
    +#include <linux/xattr.h>
    +#include <linux/mount.h>
    +#include <linux/namei.h>
    +#include "cf-internal.h"
    +
    +static int cachefiles_wait_bit(void *flags)
    +{
    + schedule();
    + return 0;
    +}
    +
    +/*
    + * record the fact that an object is now active
    + */
    +static void cachefiles_mark_object_active(struct cachefiles_cache *cache,
    + struct cachefiles_object *object)
    +{
    + struct cachefiles_object *xobject;
    + struct rb_node **_p, *_parent = NULL;
    + struct dentry *dentry;
    +
    + _enter(",%p", object);
    +
    +try_again:
    + write_lock(&cache->active_lock);
    +
    + if (test_and_set_bit(CACHEFILES_OBJECT_ACTIVE, &object->flags))
    + BUG();
    +
    + dentry = object->dentry;
    + _p = &cache->active_nodes.rb_node;
    + while (*_p) {
    + _parent = *_p;
    + xobject = rb_entry(_parent,
    + struct cachefiles_object, active_node);
    +
    + if (xobject->dentry > dentry)
    + _p = &(*_p)->rb_left;
    + else if (xobject->dentry < dentry)
    + _p = &(*_p)->rb_right;
    + else
    + goto wait_for_old_object;
    + }
    +
    + rb_link_node(&object->active_node, _parent, _p);
    + rb_insert_color(&object->active_node, &cache->active_nodes);
    +
    + write_unlock(&cache->active_lock);
    + _leave("");
    + return;
    +
    + /* an old object from a previous incarnation is hogging the slot - we
    + * need to wait for it to be destroyed */
    +wait_for_old_object:
    + _debug("old OBJ%x", xobject->fscache.debug_id);
    + ASSERTCMP(xobject->fscache.state, >=, FSCACHE_OBJECT_DYING);
    + atomic_inc(&xobject->usage);
    + //clear_bit(CACHEFILES_OBJECT_ACTIVE, &object->flags);
    + write_unlock(&cache->active_lock);
    +
    + _debug(">>> wait");
    + wait_on_bit(&xobject->flags, CACHEFILES_OBJECT_ACTIVE,
    + cachefiles_wait_bit, TASK_UNINTERRUPTIBLE);
    + _debug("<<< waited");
    +
    + cache->cache.ops->put_object(&xobject->fscache);
    + goto try_again;
    +}
    +
    +/*
    + * delete an object representation from the cache
    + * - file backed objects are unlinked
    + * - directory backed objects are stuffed into the graveyard for userspace to
    + * delete
    + * - unlocks the directory mutex
    + */
    +static int cachefiles_bury_object(struct cachefiles_cache *cache,
    + struct dentry *dir,
    + struct dentry *rep)
    +{
    + struct dentry *grave, *trap;
    + char nbuffer[8 + 8 + 1];
    + int ret;
    +
    + _enter(",'%*.*s','%*.*s'",
    + dir->d_name.len, dir->d_name.len, dir->d_name.name,
    + rep->d_name.len, rep->d_name.len, rep->d_name.name);
    +
    + /* non-directories can just be unlinked */
    + if (!S_ISDIR(rep->d_inode->i_mode)) {
    + _debug("unlink stale object");
    + ret = vfs_unlink(dir->d_inode, rep);
    +
    + mutex_unlock(&dir->d_inode->i_mutex);
    +
    + if (ret == -EIO)
    + cachefiles_io_error(cache, "Unlink failed");
    +
    + _leave(" = %d", ret);
    + return ret;
    + }
    +
    + /* directories have to be moved to the graveyard */
    + _debug("move stale object to graveyard");
    + mutex_unlock(&dir->d_inode->i_mutex);
    +
    +try_again:
    + /* first step is to make up a grave dentry in the graveyard */
    + sprintf(nbuffer, "%08x%08x",
    + (uint32_t) xtime.tv_sec,
    + (uint32_t) atomic_inc_return(&cache->gravecounter));
    +
    + /* do the multiway lock magic */
    + trap = lock_rename(cache->graveyard, dir);
    +
    + /* do some checks before getting the grave dentry */
    + if (rep->d_parent != dir) {
    + /* the entry was probably culled when we dropped the parent dir
    + * lock */
    + unlock_rename(cache->graveyard, dir);
    + _leave(" = 0 [culled?]");
    + return 0;
    + }
    +
    + if (!S_ISDIR(cache->graveyard->d_inode->i_mode)) {
    + unlock_rename(cache->graveyard, dir);
    + cachefiles_io_error(cache, "Graveyard no longer a directory");
    + return -EIO;
    + }
    +
    + if (trap == rep) {
    + unlock_rename(cache->graveyard, dir);
    + cachefiles_io_error(cache, "May not make directory loop");
    + return -EIO;
    + }
    +
    + if (d_mountpoint(rep)) {
    + unlock_rename(cache->graveyard, dir);
    + cachefiles_io_error(cache, "Mountpoint in cache");
    + return -EIO;
    + }
    +
    + grave = lookup_one_len(nbuffer, cache->graveyard, strlen(nbuffer));
    + if (IS_ERR(grave)) {
    + unlock_rename(cache->graveyard, dir);
    +
    + if (PTR_ERR(grave) == -ENOMEM) {
    + _leave(" = -ENOMEM");
    + return -ENOMEM;
    + }
    +
    + cachefiles_io_error(cache, "Lookup error %ld",
    + PTR_ERR(grave));
    + return -EIO;
    + }
    +
    + if (grave->d_inode) {
    + unlock_rename(cache->graveyard, dir);
    + dput(grave);
    + grave = NULL;
    + cond_resched();
    + goto try_again;
    + }
    +
    + if (d_mountpoint(grave)) {
    + unlock_rename(cache->graveyard, dir);
    + dput(grave);
    + cachefiles_io_error(cache, "Mountpoint in graveyard");
    + return -EIO;
    + }
    +
    + /* target should not be an ancestor of source */
    + if (trap == grave) {
    + unlock_rename(cache->graveyard, dir);
    + dput(grave);
    + cachefiles_io_error(cache, "May not make directory loop");
    + return -EIO;
    + }
    +
    + /* attempt the rename */
    + ret = vfs_rename(dir->d_inode, rep, cache->graveyard->d_inode, grave);
    + if (ret != 0 && ret != -ENOMEM)
    + cachefiles_io_error(cache, "Rename failed with error %d", ret);
    +
    + unlock_rename(cache->graveyard, dir);
    + dput(grave);
    + _leave(" = 0");
    + return 0;
    +}
    +
    +/*
    + * delete an object representation from the cache
    + */
    +int cachefiles_delete_object(struct cachefiles_cache *cache,
    + struct cachefiles_object *object)
    +{
    + struct dentry *dir;
    + int ret;
    +
    + _enter(",{%p}", object->dentry);
    +
    + ASSERT(object->dentry);
    + ASSERT(object->dentry->d_inode);
    + ASSERT(object->dentry->d_parent);
    +
    + dir = dget_parent(object->dentry);
    +
    + mutex_lock(&dir->d_inode->i_mutex);
    + ret = cachefiles_bury_object(cache, dir, object->dentry);
    +
    + dput(dir);
    + _leave(" = %d", ret);
    + return ret;
    +}
    +
    +/*
    + * walk from the parent object to the child object through the backing
    + * filesystem, creating directories as we go
    + */
    +int cachefiles_walk_to_object(struct cachefiles_object *parent,
    + struct cachefiles_object *object,
    + const char *key,
    + struct cachefiles_xattr *auxdata)
    +{
    + struct cachefiles_cache *cache;
    + struct dentry *dir, *next = NULL;
    + unsigned long start;
    + const char *name;
    + int ret, nlen;
    +
    + _enter("{%p},,%s,", parent->dentry, key);
    +
    + cache = container_of(parent->fscache.cache,
    + struct cachefiles_cache, cache);
    +
    + ASSERT(parent->dentry);
    + ASSERT(parent->dentry->d_inode);
    +
    + if (!(S_ISDIR(parent->dentry->d_inode->i_mode))) {
    + // TODO: convert file to dir
    + _leave("looking up in non-directory");
    + return -ENOBUFS;
    + }
    +
    + dir = dget(parent->dentry);
    +
    +advance:
    + /* attempt to transit the first directory component */
    + name = key;
    + nlen = strlen(key);
    +
    + /* key ends in a double NUL */
    + key = key + nlen + 1;
    + if (!*key)
    + key = NULL;
    +
    +lookup_again:
    + /* search the current directory for the element name */
    + _debug("lookup '%s'", name);
    +
    + mutex_lock(&dir->d_inode->i_mutex);
    +
    + start = jiffies;
    + next = lookup_one_len(name, dir, nlen);
    + cachefiles_hist(cachefiles_lookup_histogram, start);
    + if (IS_ERR(next))
    + goto lookup_error;
    +
    + _debug("next -> %p %s", next, next->d_inode ? "positive" : "negative");
    +
    + if (!key)
    + object->new = !next->d_inode;
    +
    + /* if this element of the path doesn't exist, then the lookup phase
    + * failed, and we can release any readers in the certain knowledge that
    + * there's nothing for them to actually read */
    + if (!next->d_inode)
    + fscache_object_lookup_negative(&object->fscache);
    +
    + /* we need to create the object if it's negative */
    + if (key || object->type == FSCACHE_COOKIE_TYPE_INDEX) {
    + /* index objects and intervening tree levels must be subdirs */
    + if (!next->d_inode) {
    + // TODO advance object state
    + ret = cachefiles_has_space(cache, 1, 0);
    + if (ret < 0)
    + goto create_error;
    +
    + start = jiffies;
    + ret = vfs_mkdir(dir->d_inode, next, 0);
    + cachefiles_hist(cachefiles_mkdir_histogram, start);
    + if (ret < 0)
    + goto create_error;
    +
    + ASSERT(next->d_inode);
    +
    + _debug("mkdir -> %p{%p{ino=%lu}}",
    + next, next->d_inode, next->d_inode->i_ino);
    +
    + } else if (!S_ISDIR(next->d_inode->i_mode)) {
    + kerror("inode %lu is not a directory",
    + next->d_inode->i_ino);
    + ret = -ENOBUFS;
    + goto error;
    + }
    +
    + } else {
    + /* non-index objects start out life as files */
    + if (!next->d_inode) {
    + // TODO advance object state
    + ret = cachefiles_has_space(cache, 1, 0);
    + if (ret < 0)
    + goto create_error;
    +
    + start = jiffies;
    + ret = vfs_create(dir->d_inode, next, S_IFREG, NULL);
    + cachefiles_hist(cachefiles_create_histogram, start);
    + if (ret < 0)
    + goto create_error;
    +
    + ASSERT(next->d_inode);
    +
    + _debug("create -> %p{%p{ino=%lu}}",
    + next, next->d_inode, next->d_inode->i_ino);
    +
    + } else if (!S_ISDIR(next->d_inode->i_mode) &&
    + !S_ISREG(next->d_inode->i_mode)
    + ) {
    + kerror("inode %lu is not a file or directory",
    + next->d_inode->i_ino);
    + ret = -ENOBUFS;
    + goto error;
    + }
    + }
    +
    + /* process the next component */
    + if (key) {
    + _debug("advance");
    + mutex_unlock(&dir->d_inode->i_mutex);
    + dput(dir);
    + dir = next;
    + next = NULL;
    + goto advance;
    + }
    +
    + /* we've found the object we were looking for */
    + object->dentry = next;
    +
    + /* if we've found that the terminal object exists, then we need to
    + * check its attributes and delete it if it's out of date */
    + if (!object->new) {
    + _debug("validate '%*.*s'",
    + next->d_name.len, next->d_name.len, next->d_name.name);
    +
    + ret = cachefiles_check_object_xattr(object, auxdata);
    + if (ret == -ESTALE) {
    + /* delete the object (the deleter drops the directory
    + * mutex) */
    + object->dentry = NULL;
    +
    + ret = cachefiles_bury_object(cache, dir, next);
    + dput(next);
    + next = NULL;
    +
    + if (ret < 0)
    + goto delete_error;
    +
    + _debug("redo lookup");
    + goto lookup_again;
    + }
    + }
    +
    + /* note that we're now using this object */
    + cachefiles_mark_object_active(cache, object);
    +
    + mutex_unlock(&dir->d_inode->i_mutex);
    + dput(dir);
    + dir = NULL;
    +
    + _debug("=== OBTAINED_OBJECT ===");
    +
    + if (object->new) {
    + /* attach data to a newly constructed terminal object */
    + ret = cachefiles_set_object_xattr(object, auxdata);
    + if (ret < 0)
    + goto check_error;
    + } else {
    + /* always update the atime on an object we've just looked up
    + * (this is used to keep track of culling, and atimes are only
    + * updated by read, write and readdir but not lookup or
    + * open) */
    + touch_atime(cache->mnt, next);
    + }
    +
    + /* open a file interface onto a data file */
    + if (object->type != FSCACHE_COOKIE_TYPE_INDEX) {
    + if (S_ISREG(object->dentry->d_inode->i_mode)) {
    + const struct address_space_operations *aops;
    +
    + ret = -EPERM;
    + aops = object->dentry->d_inode->i_mapping->a_ops;
    + if (!aops->bmap ||
    + !aops->prepare_write ||
    + !aops->commit_write ||
    + !aops->write_one_page)
    + goto check_error;
    +
    + object->backer = object->dentry;
    + } else {
    + BUG(); // TODO: open file in data-class subdir
    + }
    + }
    +
    + object->new = 0;
    + fscache_obtained_object(&object->fscache);
    +
    + _leave(" = 0 [%lu]", object->dentry->d_inode->i_ino);
    + return 0;
    +
    +create_error:
    + _debug("create error %d", ret);
    + if (ret == -EIO)
    + cachefiles_io_error(cache, "Create/mkdir failed");
    + goto error;
    +
    +check_error:
    + _debug("check error %d", ret);
    + write_lock(&cache->active_lock);
    + rb_erase(&object->active_node, &cache->active_nodes);
    + write_unlock(&cache->active_lock);
    +
    + dput(object->dentry);
    + object->dentry = NULL;
    + goto error_out;
    +
    +delete_error:
    + _debug("delete error %d", ret);
    + goto error_out2;
    +
    +lookup_error:
    + _debug("lookup error %ld", PTR_ERR(next));
    + ret = PTR_ERR(next);
    + if (ret == -EIO)
    + cachefiles_io_error(cache, "Lookup failed");
    + next = NULL;
    +error:
    + mutex_unlock(&dir->d_inode->i_mutex);
    + dput(next);
    +error_out2:
    + dput(dir);
    +error_out:
    + if (ret == -ENOSPC)
    + ret = -ENOBUFS;
    +
    + _leave(" = error %d", -ret);
    + return ret;
    +}
    +
    +/*
    + * get a subdirectory
    + */
    +struct dentry *cachefiles_get_directory(struct cachefiles_cache *cache,
    + struct dentry *dir,
    + const char *dirname)
    +{
    + struct dentry *subdir;
    + unsigned long start;
    + int ret;
    +
    + _enter(",,%s", dirname);
    +
    + /* search the current directory for the element name */
    + mutex_lock(&dir->d_inode->i_mutex);
    +
    + start = jiffies;
    + subdir = lookup_one_len(dirname, dir, strlen(dirname));
    + cachefiles_hist(cachefiles_lookup_histogram, start);
    + if (IS_ERR(subdir)) {
    + if (PTR_ERR(subdir) == -ENOMEM)
    + goto nomem_d_alloc;
    + goto lookup_error;
    + }
    +
    + _debug("subdir -> %p %s",
    + subdir, subdir->d_inode ? "positive" : "negative");
    +
    + /* we need to create the subdir if it doesn't exist yet */
    + if (!subdir->d_inode) {
    + ret = cachefiles_has_space(cache, 1, 0);
    + if (ret < 0)
    + goto mkdir_error;
    +
    + _debug("attempt mkdir");
    +
    + ret = vfs_mkdir(dir->d_inode, subdir, 0700);
    + if (ret < 0)
    + goto mkdir_error;
    +
    + ASSERT(subdir->d_inode);
    +
    + _debug("mkdir -> %p{%p{ino=%lu}}",
    + subdir,
    + subdir->d_inode,
    + subdir->d_inode->i_ino);
    + }
    +
    + mutex_unlock(&dir->d_inode->i_mutex);
    +
    + /* we need to make sure the subdir is a directory */
    + ASSERT(subdir->d_inode);
    +
    + if (!S_ISDIR(subdir->d_inode->i_mode)) {
    + kerror("%s is not a directory", dirname);
    + ret = -EIO;
    + goto check_error;
    + }
    +
    + ret = -EPERM;
    + if (!subdir->d_inode->i_op ||
    + !subdir->d_inode->i_op->setxattr ||
    + !subdir->d_inode->i_op->getxattr ||
    + !subdir->d_inode->i_op->lookup ||
    + !subdir->d_inode->i_op->mkdir ||
    + !subdir->d_inode->i_op->create ||
    + !subdir->d_inode->i_op->rename ||
    + !subdir->d_inode->i_op->rmdir ||
    + !subdir->d_inode->i_op->unlink)
    + goto check_error;
    +
    + _leave(" = [%lu]", subdir->d_inode->i_ino);
    + return subdir;
    +
    +check_error:
    + dput(subdir);
    + _leave(" = %d [check]", ret);
    + return ERR_PTR(ret);
    +
    +mkdir_error:
    + mutex_unlock(&dir->d_inode->i_mutex);
    + dput(subdir);
    + kerror("mkdir %s failed with error %d", dirname, ret);
    + return ERR_PTR(ret);
    +
    +lookup_error:
    + mutex_unlock(&dir->d_inode->i_mutex);
    + ret = PTR_ERR(subdir);
    + kerror("Lookup %s failed with error %d", dirname, ret);
    + return ERR_PTR(ret);
    +
    +nomem_d_alloc:
    + mutex_unlock(&dir->d_inode->i_mutex);
    + _leave(" = -ENOMEM");
    + return ERR_PTR(-ENOMEM);
    +}
    +
    +/*
    + * find out if an object is in use or not
    + * - if it finds the object and it's not in use:
    + * - returns a pointer to the object and a reference on it
    + * - returns with the directory locked
    + */
    +static struct dentry *cachefiles_check_active(struct cachefiles_cache *cache,
    + struct dentry *dir,
    + char *filename)
    +{
    + struct cachefiles_object *object;
    + struct rb_node *_n;
    + struct dentry *victim;
    + unsigned long start;
    + int ret;
    +
    + _enter(",%*.*s/,%s",
    + dir->d_name.len, dir->d_name.len, dir->d_name.name, filename);
    +
    + /* look up the victim */
    + mutex_lock_nested(&dir->d_inode->i_mutex, 1);
    +
    + start = jiffies;
    + victim = lookup_one_len(filename, dir, strlen(filename));
    + cachefiles_hist(cachefiles_lookup_histogram, start);
    + if (IS_ERR(victim))
    + goto lookup_error;
    +
    + _debug("victim -> %p %s",
    + victim, victim->d_inode ? "positive" : "negative");
    +
    + /* if the object is no longer there then we probably retired the object
    + * at the netfs's request whilst the cull was in progress
    + */
    + if (!victim->d_inode) {
    + mutex_unlock(&dir->d_inode->i_mutex);
    + dput(victim);
    + _leave(" = -ENOENT [absent]");
    + return ERR_PTR(-ENOENT);
    + }
    +
    + /* check to see if we're using this object */
    + read_lock(&cache->active_lock);
    +
    + _n = cache->active_nodes.rb_node;
    +
    + while (_n) {
    + object = rb_entry(_n, struct cachefiles_object, active_node);
    +
    + if (object->dentry > victim)
    + _n = _n->rb_left;
    + else if (object->dentry < victim)
    + _n = _n->rb_right;
    + else
    + goto object_in_use;
    + }
    +
    + read_unlock(&cache->active_lock);
    +
    + _leave(" = %p", victim);
    + return victim;
    +
    +object_in_use:
    + read_unlock(&cache->active_lock);
    + mutex_unlock(&dir->d_inode->i_mutex);
    + dput(victim);
    + _leave(" = -EBUSY [in use]");
    + return ERR_PTR(-EBUSY);
    +
    +lookup_error:
    + mutex_unlock(&dir->d_inode->i_mutex);
    + ret = PTR_ERR(victim);
    + if (ret == -ENOENT) {
    + /* file or dir now absent - probably retired by netfs */
    + _leave(" = -ESTALE [absent]");
    + return ERR_PTR(-ESTALE);
    + }
    +
    + if (ret == -EIO) {
    + cachefiles_io_error(cache, "Lookup failed");
    + } else if (ret != -ENOMEM) {
    + kerror("Internal error: %d", ret);
    + ret = -EIO;
    + }
    +
    + _leave(" = %d", ret);
    + return ERR_PTR(ret);
    +}
    +
    +/*
    + * cull an object if it's not in use
    + * - called only by cache manager daemon
    + */
    +int cachefiles_cull(struct cachefiles_cache *cache, struct dentry *dir,
    + char *filename)
    +{
    + struct dentry *victim;
    + int ret;
    +
    + _enter(",%*.*s/,%s",
    + dir->d_name.len, dir->d_name.len, dir->d_name.name, filename);
    +
    + victim = cachefiles_check_active(cache, dir, filename);
    + if (IS_ERR(victim))
    + return PTR_ERR(victim);
    +
    + _debug("victim -> %p %s",
    + victim, victim->d_inode ? "positive" : "negative");
    +
    + /* okay... the victim is not being used so we can cull it
    + * - start by marking it as stale
    + */
    + _debug("victim is cullable");
    +
    + ret = cachefiles_remove_object_xattr(cache, victim);
    + if (ret < 0)
    + goto error_unlock;
    +
    + /* actually remove the victim (drops the dir mutex) */
    + _debug("bury");
    +
    + ret = cachefiles_bury_object(cache, dir, victim);
    + if (ret < 0)
    + goto error;
    +
    + dput(victim);
    + _leave(" = 0");
    + return 0;
    +
    +error_unlock:
    + mutex_unlock(&dir->d_inode->i_mutex);
    +error:
    + dput(victim);
    + if (ret == -ENOENT) {
    + /* file or dir now absent - probably retired by netfs */
    + _leave(" = -ESTALE [absent]");
    + return -ESTALE;
    + }
    +
    + if (ret != -ENOMEM) {
    + kerror("Internal error: %d", ret);
    + ret = -EIO;
    + }
    +
    + _leave(" = %d", ret);
    + return ret;
    +}
    +
    +/*
    + * find out if an object is in use or not
    + * - called only by cache manager daemon
    + * - returns -EBUSY or 0 to indicate whether an object is in use or not
    + */
    +int cachefiles_check_in_use(struct cachefiles_cache *cache, struct dentry *dir,
    + char *filename)
    +{
    + struct dentry *victim;
    +
    + _enter(",%*.*s/,%s",
    + dir->d_name.len, dir->d_name.len, dir->d_name.name, filename);
    +
    + victim = cachefiles_check_active(cache, dir, filename);
    + if (IS_ERR(victim))
    + return PTR_ERR(victim);
    +
    + mutex_unlock(&dir->d_inode->i_mutex);
    + dput(victim);
    + _leave(" = 0");
    + return 0;
    +}
    diff --git a/fs/cachefiles/cf-proc.c b/fs/cachefiles/cf-proc.c
    new file mode 100644
    index 0000000..c0d5444
    --- /dev/null
    +++ b/fs/cachefiles/cf-proc.c
    @@ -0,0 +1,166 @@
    +/* CacheFiles statistics
    + *
    + * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
    + * Written by David Howells (dhowells@redhat.com)
    + *
    + * This program is free software; you can redistribute it and/or
    + * modify it under the terms of the GNU General Public Licence
    + * as published by the Free Software Foundation; either version
    + * 2 of the Licence, or (at your option) any later version.
    + */
    +
    +#include <linux/module.h>
    +#include <linux/proc_fs.h>
    +#include <linux/seq_file.h>
    +#include "cf-internal.h"
    +
    +struct cachefiles_proc {
    + unsigned nlines;
    + const struct seq_operations *ops;
    +};
    +
    +atomic_t cachefiles_lookup_histogram[HZ];
    +atomic_t cachefiles_mkdir_histogram[HZ];
    +atomic_t cachefiles_create_histogram[HZ];
    +
    +static struct proc_dir_entry *proc_cachefiles;
    +
    +static int cachefiles_proc_open(struct inode *inode, struct file *file);
    +static void *cachefiles_proc_start(struct seq_file *m, loff_t *pos);
    +static void cachefiles_proc_stop(struct seq_file *m, void *v);
    +static void *cachefiles_proc_next(struct seq_file *m, void *v, loff_t *pos);
    +static int cachefiles_histogram_show(struct seq_file *m, void *v);
    +
    +static const struct file_operations cachefiles_proc_fops = {
    + .open = cachefiles_proc_open,
    + .read = seq_read,
    + .llseek = seq_lseek,
    + .release = seq_release,
    +};
    +
    +static const struct seq_operations cachefiles_histogram_ops = {
    + .start = cachefiles_proc_start,
    + .stop = cachefiles_proc_stop,
    + .next = cachefiles_proc_next,
    + .show = cachefiles_histogram_show,
    +};
    +
    +static const struct cachefiles_proc cachefiles_histogram = {
    + .nlines = HZ + 1,
    + .ops = &cachefiles_histogram_ops,
    +};
    +
    +/*
    + * initialise the /proc/fs/fscache/cachefiles/ directory
    + */
    +int __init cachefiles_proc_init(void)
    +{
    + struct proc_dir_entry *p;
    +
    + _enter("");
    +
    + proc_cachefiles = proc_mkdir("cachefiles", proc_fscache);
    + if (!proc_cachefiles)
    + goto error_dir;
    + proc_cachefiles->owner = THIS_MODULE;
    +
    + p = create_proc_entry("histogram", 0, proc_cachefiles);
    + if (!p)
    + goto error_histogram;
    + p->proc_fops = &cachefiles_proc_fops;
    + p->owner = THIS_MODULE;
    + p->data = (void *) &cachefiles_histogram;
    +
    + _leave(" = 0");
    + return 0;
    +
    +error_histogram:
    + remove_proc_entry("cachefiles", proc_fscache);
    +error_dir:
    + _leave(" = -ENOMEM");
    + return -ENOMEM;
    +}
    +
    +/*
    + * clean up the /proc/fs/fscache/cachefiles/ directory
    + */
    +void cachefiles_proc_cleanup(void)
    +{
    + remove_proc_entry("histogram", proc_cachefiles);
    + remove_proc_entry("cachefiles", proc_fscache);
    +}
    +
    +/*
    + * open "/proc/fs/fscache/cachefiles/XXX", which provides a statistics summary
    + */
    +static int cachefiles_proc_open(struct inode *inode, struct file *file)
    +{
    + const struct cachefiles_proc *proc = PDE(inode)->data;
    + struct seq_file *m;
    + int ret;
    +
    + ret = seq_open(file, proc->ops);
    + if (ret == 0) {
    + m = file->private_data;
    + m->private = (void *) proc;
    + }
    + return ret;
    +}
    +
    +/*
    + * set up the iterator to start reading from the first line
    + */
    +static void *cachefiles_proc_start(struct seq_file *m, loff_t *_pos)
    +{
    + if (*_pos == 0)
    + *_pos = 1;
    + return (void *)(unsigned long) *_pos;
    +}
    +
    +/*
    + * move to the next line
    + */
    +static void *cachefiles_proc_next(struct seq_file *m, void *v, loff_t *pos)
    +{
    + const struct cachefiles_proc *proc = m->private;
    +
    + (*pos)++;
    + return *pos > proc->nlines ? NULL : (void *)(unsigned long) *pos;
    +}
    +
    +/*
    + * clean up after reading
    + */
    +static void cachefiles_proc_stop(struct seq_file *m, void *v)
    +{
    +}
    +
    +/*
    + * display the time-taken histogram
    + */
    +static int cachefiles_histogram_show(struct seq_file *m, void *v)
    +{
    + unsigned long index;
    + unsigned x, y, z, t;
    +
    + switch ((unsigned long) v) {
    + case 1:
    + seq_puts(m, "JIFS SECS LOOKUPS MKDIRS CREATES\n");
    + return 0;
    + case 2:
    + seq_puts(m, "===== ===== ========= ========= =========\n");
    + return 0;
    + default:
    + index = (unsigned long) v - 3;
    + x = atomic_read(&cachefiles_lookup_histogram[index]);
    + y = atomic_read(&cachefiles_mkdir_histogram[index]);
    + z = atomic_read(&cachefiles_create_histogram[index]);
    + if (x == 0 && y == 0 && z == 0)
    + return 0;
    +
    + t = (index * 1000) / HZ;
    +
    + seq_printf(m, "%4lu 0.%03u %9u %9u %9u\n", index, t, x, y, z);
    + return 0;
    + }
    +}
    diff --git a/fs/cachefiles/cf-rdwr.c b/fs/cachefiles/cf-rdwr.c
    new file mode 100644
    index 0000000..5233477
    --- /dev/null
    +++ b/fs/cachefiles/cf-rdwr.c
    @@ -0,0 +1,849 @@
    +/* Storage object read/write
    + *
    + * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
    + * Written by David Howells (dhowells@redhat.com)
    + *
    + * This program is free software; you can redistribute it and/or
    + * modify it under the terms of the GNU General Public Licence
    + * as published by the Free Software Foundation; either version
    + * 2 of the Licence, or (at your option) any later version.
    + */
    +
    +#include "cf-internal.h"
    +
    +/*
    + * detect wake up events generated by the unlocking of pages in which we're
    + * interested
    + * - we use this to detect read completion of backing pages
    + * - the caller holds the waitqueue lock
    + */
    +static int cachefiles_read_waiter(wait_queue_t *wait, unsigned mode,
    + int sync, void *_key)
    +{
    + struct cachefiles_one_read *monitor =
    + container_of(wait, struct cachefiles_one_read, monitor);
    + struct cachefiles_object *object;
    + struct wait_bit_key *key = _key;
    + struct page *page = wait->private;
    +
    + ASSERT(key);
    +
    + _enter("{%lu},%u,%d,{%p,%u}",
    + monitor->netfs_page->index, mode, sync,
    + key->flags, key->bit_nr);
    +
    + if (key->flags != &page->flags ||
    + key->bit_nr != PG_locked)
    + return 0;
    +
    + _debug("--- monitor %p %lx ---", page, page->flags);
    +
    + if (!PageUptodate(page) && !PageError(page))
    + dump_stack();
    +
    + /* remove from the waitqueue */
    + list_del(&wait->task_list);
    +
    + /* move onto the action list and queue for FS-Cache thread pool */
    + ASSERT(monitor->op);
    +
    + object = container_of(monitor->op->op.object,
    + struct cachefiles_object, fscache);
    +
    + spin_lock(&object->work_lock);
    + list_add(&monitor->op_link, &monitor->op->to_do);
    + spin_unlock(&object->work_lock);
    +
    + fscache_enqueue_retrieval(monitor->op);
    + return 0;
    +}
    +
    +/*
    + * copy data from backing pages to netfs pages to complete a read operation
    + * - driven by FS-Cache's thread pool
    + */
    +static void cachefiles_read_copier(struct fscache_operation *_op)
    +{
    + struct cachefiles_one_read *monitor;
    + struct cachefiles_object *object;
    + struct fscache_retrieval *op;
    + struct pagevec pagevec;
    + int error, max;
    +
    + op = container_of(_op, struct fscache_retrieval, op);
    + object = container_of(op->op.object,
    + struct cachefiles_object, fscache);
    +
    + _enter("{ino=%lu}", object->backer->d_inode->i_ino);
    +
    + pagevec_init(&pagevec, 0);
    +
    + max = 8;
    + spin_lock_irq(&object->work_lock);
    +
    + while (!list_empty(&op->to_do)) {
    + monitor = list_entry(op->to_do.next,
    + struct cachefiles_one_read, op_link);
    + list_del(&monitor->op_link);
    +
    + spin_unlock_irq(&object->work_lock);
    +
    + _debug("- copy {%lu}", monitor->back_page->index);
    +
    + error = -EIO;
    + if (PageUptodate(monitor->back_page)) {
    + copy_highpage(monitor->netfs_page, monitor->back_page);
    +
    + pagevec_add(&pagevec, monitor->netfs_page);
    + fscache_mark_pages_cached(monitor->op, &pagevec);
    + error = 0;
    + }
    +
    + if (error)
    + cachefiles_io_error_obj(
    + object,
    + "Readpage failed on backing file %lx",
    + (unsigned long) monitor->back_page->flags);
    +
    + page_cache_release(monitor->back_page);
    +
    + fscache_end_io(op, monitor->netfs_page, error);
    + page_cache_release(monitor->netfs_page);
    + fscache_put_retrieval(op);
    + kfree(monitor);
    +
    + /* let the thread pool have some air occasionally */
    + max--;
    + if (max < 0 || need_resched()) {
    + if (!list_empty(&op->to_do))
    + fscache_enqueue_retrieval(op);
    + _leave(" [maxed out]");
    + return;
    + }
    +
    + spin_lock_irq(&object->work_lock);
    + }
    +
    + spin_unlock_irq(&object->work_lock);
    + _leave("");
    +}
    +
    +/*
    + * read the corresponding page to the given set from the backing file
    + * - an uncertain page is simply discarded, to be tried again another time
    + */
    +static int cachefiles_read_backing_file_one(struct cachefiles_object *object,
    + struct fscache_retrieval *op,
    + struct page *netpage,
    + struct pagevec *pagevec)
    +{
    + struct cachefiles_one_read *monitor;
    + struct address_space *bmapping;
    + struct page *newpage, *backpage;
    + int ret;
    +
    + _enter("");
    +
    + pagevec_reinit(pagevec);
    +
    + _debug("read back %p{%lu,%d}",
    + netpage, netpage->index, page_count(netpage));
    +
    + monitor = kzalloc(sizeof(*monitor), GFP_KERNEL);
    + if (!monitor)
    + goto nomem;
    +
    + monitor->netfs_page = netpage;
    + monitor->op = fscache_get_retrieval(op);
    +
    + init_waitqueue_func_entry(&monitor->monitor, cachefiles_read_waiter);
    +
    + /* attempt to get hold of the backing page */
    + bmapping = object->backer->d_inode->i_mapping;
    + newpage = NULL;
    +
    + for (;;) {
    + backpage = find_get_page(bmapping, netpage->index);
    + if (backpage)
    + goto backing_page_already_present;
    +
    + if (!newpage) {
    + newpage = page_cache_alloc_cold(bmapping);
    + if (!newpage)
    + goto nomem_monitor;
    + }
    +
    + ret = add_to_page_cache(newpage, bmapping,
    + netpage->index, GFP_KERNEL);
    + if (ret == 0)
    + goto installed_new_backing_page;
    + if (ret != -EEXIST)
    + goto nomem_page;
    + }
    +
    + /* we've installed a new backing page, so now we need to add it
    + * to the LRU list and start it reading */
    +installed_new_backing_page:
    + _debug("- new %p", newpage);
    +
    + backpage = newpage;
    + newpage = NULL;
    +
    + page_cache_get(backpage);
    + pagevec_add(pagevec, backpage);
    + __pagevec_lru_add(pagevec);
    +
    +read_backing_page:
    + ret = bmapping->a_ops->readpage(NULL, backpage);
    + if (ret < 0)
    + goto read_error;
    +
    + /* set the monitor to transfer the data across */
    +monitor_backing_page:
    + _debug("- monitor add");
    +
    + /* install the monitor */
    + page_cache_get(monitor->netfs_page);
    + page_cache_get(backpage);
    + monitor->back_page = backpage;
    + monitor->monitor.private = backpage;
    + add_page_wait_queue(backpage, &monitor->monitor);
    + monitor = NULL;
    +
    + /* but the page may have been read before the monitor was installed, so
    + * the monitor may miss the event - so we have to ensure that we do get
    + * one in such a case */
    + if (!TestSetPageLocked(backpage)) {
    + _debug("jumpstart %p {%lx}", backpage, backpage->flags);
    + unlock_page(backpage);
    + }
    + goto success;
    +
    + /* if the backing page is already present, it can be in one of
    + * three states: read in progress, read failed or read okay */
    +backing_page_already_present:
    + _debug("- present");
    +
    + if (newpage) {
    + page_cache_release(newpage);
    + newpage = NULL;
    + }
    +
    + if (PageError(backpage))
    + goto io_error;
    +
    + if (PageUptodate(backpage))
    + goto backing_page_already_uptodate;
    +
    + if (TestSetPageLocked(backpage))
    + goto monitor_backing_page;
    + _debug("read %p {%lx}", backpage, backpage->flags);
    + goto read_backing_page;
    +
    + /* the backing page is already up to date, attach the netfs
    + * page to the pagecache and LRU and copy the data across */
    +backing_page_already_uptodate:
    + _debug("- uptodate");
    +
    + pagevec_add(pagevec, netpage);
    + fscache_mark_pages_cached(op, pagevec);
    +
    + copy_highpage(netpage, backpage);
    + fscache_end_io(op, netpage, 0);
    +
    +success:
    + _debug("success");
    + ret = 0;
    +
    +out:
    + if (backpage)
    + page_cache_release(backpage);
    + if (monitor) {
    + fscache_put_retrieval(monitor->op);
    + kfree(monitor);
    + }
    + _leave(" = %d", ret);
    + return ret;
    +
    +read_error:
    + _debug("read error %d", ret);
    + if (ret == -ENOMEM)
    + goto out;
    +io_error:
    + cachefiles_io_error_obj(object, "Page read error on backing file");
    + ret = -ENOBUFS;
    + goto out;
    +
    +nomem_page:
    + page_cache_release(newpage);
    +nomem_monitor:
    + fscache_put_retrieval(monitor->op);
    + kfree(monitor);
    +nomem:
    + _leave(" = -ENOMEM");
    + return -ENOMEM;
    +}
    +
    +/*
    + * read a page from the cache or allocate a block in which to store it
    + * - cache withdrawal is prevented by the caller
    + * - returns -EINTR if interrupted
    + * - returns -ENOMEM if ran out of memory
    + * - returns -ENOBUFS if no buffers can be made available
    + * - returns -ENOBUFS if page is beyond EOF
    + * - if the page is backed by a block in the cache:
    + * - a read will be started which will call the callback on completion
    + * - 0 will be returned
    + * - else if the page is unbacked:
    + * - the metadata will be retained
    + * - -ENODATA will be returned
    + */
    +int cachefiles_read_or_alloc_page(struct fscache_retrieval *op,
    + struct page *page,
    + gfp_t gfp)
    +{
    + struct cachefiles_object *object;
    + struct cachefiles_cache *cache;
    + struct pagevec pagevec;
    + struct inode *inode;
    + sector_t block0, block;
    + unsigned shift;
    + int ret;
    +
    + object = container_of(op->op.object,
    + struct cachefiles_object, fscache);
    + cache = container_of(object->fscache.cache,
    + struct cachefiles_cache, cache);
    +
    + _enter("{%p},{%lx},,,", object, page->index);
    +
    + if (!object->backer)
    + return -ENOBUFS;
    +
    + inode = object->backer->d_inode;
    + ASSERT(S_ISREG(inode->i_mode));
    + ASSERT(inode->i_mapping->a_ops->bmap);
    + ASSERT(inode->i_mapping->a_ops->readpages);
    +
    + /* calculate the shift required to use bmap */
    + if (inode->i_sb->s_blocksize > PAGE_SIZE)
    + return -ENOBUFS;
    +
    + shift = PAGE_SHIFT - inode->i_sb->s_blocksize_bits;
    +
    + op->op.processor = cachefiles_read_copier;
    +
    + pagevec_init(&pagevec, 0);
    +
    + /* we assume the absence or presence of the first block is a good
    + * enough indication for the page as a whole
    + * - TODO: don't use bmap() for this as it is _not_ actually good
    + * enough for this as it doesn't indicate errors, but it's all we've
    + * got for the moment
    + */
    + block0 = page->index;
    + block0 <<= shift;
    +
    + block = inode->i_mapping->a_ops->bmap(inode->i_mapping, block0);
    + _debug("%llx -> %llx",
    + (unsigned long long) block0,
    + (unsigned long long) block);
    +
    + if (block) {
    + /* submit the apparently valid page to the backing fs to be
    + * read from disk */
    + ret = cachefiles_read_backing_file_one(object, op, page,
    + &pagevec);
    + } else if (cachefiles_has_space(cache, 0, 1) == 0) {
    + /* there's space in the cache we can use */
    + pagevec_add(&pagevec, page);
    + fscache_mark_pages_cached(op, &pagevec);
    + ret = -ENODATA;
    + } else {
    + ret = -ENOBUFS;
    + }
    +
    + _leave(" = %d", ret);
    + return ret;
    +}
    +
    +/*
    + * read the corresponding pages to the given set from the backing file
    + * - any uncertain pages are simply discarded, to be tried again another time
    + */
    +static int cachefiles_read_backing_file(struct cachefiles_object *object,
    + struct fscache_retrieval *op,
    + struct list_head *list,
    + struct pagevec *mark_pvec)
    +{
    + struct cachefiles_one_read *monitor = NULL;
    + struct address_space *bmapping = object->backer->d_inode->i_mapping;
    + struct pagevec lru_pvec;
    + struct page *newpage = NULL, *netpage, *_n, *backpage = NULL;
    + int ret = 0;
    +
    + _enter("");
    +
    + pagevec_init(&lru_pvec, 0);
    +
    + list_for_each_entry_safe(netpage, _n, list, lru) {
    + list_del(&netpage->lru);
    +
    + _debug("read back %p{%lu,%d}",
    + netpage, netpage->index, page_count(netpage));
    +
    + if (!monitor) {
    + monitor = kzalloc(sizeof(*monitor), GFP_KERNEL);
    + if (!monitor)
    + goto nomem;
    +
    + monitor->op = fscache_get_retrieval(op);
    + init_waitqueue_func_entry(&monitor->monitor,
    + cachefiles_read_waiter);
    + }
    +
    + for (;;) {
    + backpage = find_get_page(bmapping, netpage->index);
    + if (backpage)
    + goto backing_page_already_present;
    +
    + if (!newpage) {
    + newpage = page_cache_alloc_cold(bmapping);
    + if (!newpage)
    + goto nomem;
    + }
    +
    + ret = add_to_page_cache(newpage, bmapping,
    + netpage->index, GFP_KERNEL);
    + if (ret == 0)
    + goto installed_new_backing_page;
    + if (ret != -EEXIST)
    + goto nomem;
    + }
    +
    + /* we've installed a new backing page, so now we need to add it
    + * to the LRU list and start it reading */
    + installed_new_backing_page:
    + _debug("- new %p", newpage);
    +
    + backpage = newpage;
    + newpage = NULL;
    +
    + page_cache_get(backpage);
    + if (!pagevec_add(&lru_pvec, backpage))
    + __pagevec_lru_add(&lru_pvec);
    +
    + reread_backing_page:
    + ret = bmapping->a_ops->readpage(NULL, backpage);
    + if (ret < 0)
    + goto read_error;
    +
    + /* add the netfs page to the pagecache and LRU, and set the
    + * monitor to transfer the data across */
    + monitor_backing_page:
    + _debug("- monitor add");
    +
    + ret = add_to_page_cache(netpage, op->mapping, netpage->index,
    + GFP_KERNEL);
    + if (ret < 0) {
    + if (ret == -EEXIST) {
    + page_cache_release(netpage);
    + continue;
    + }
    + goto nomem;
    + }
    +
    + page_cache_get(netpage);
    + if (!pagevec_add(&lru_pvec, netpage))
    + __pagevec_lru_add(&lru_pvec);
    +
    + /* install a monitor */
    + page_cache_get(netpage);
    + monitor->netfs_page = netpage;
    +
    + page_cache_get(backpage);
    + monitor->back_page = backpage;
    + monitor->monitor.private = backpage;
    + add_page_wait_queue(backpage, &monitor->monitor);
    + monitor = NULL;
    +
    + /* but the page may have been read before the monitor was
    + * installed, so the monitor may miss the event - so we have to
    + * ensure that we do get one in such a case */
    + if (!TestSetPageLocked(backpage)) {
    + _debug("2unlock %p {%lx}", backpage, backpage->flags);
    + unlock_page(backpage);
    + }
    +
    + page_cache_release(backpage);
    + backpage = NULL;
    +
    + page_cache_release(netpage);
    + netpage = NULL;
    + continue;
    +
    + /* if the backing page is already present, it can be in one of
    + * three states: read in progress, read failed or read okay */
    + backing_page_already_present:
    + _debug("- present %p", backpage);
    +
    + if (PageError(backpage))
    + goto io_error;
    +
    + if (PageUptodate(backpage))
    + goto backing_page_already_uptodate;
    +
    + _debug("- not ready %p{%lx}", backpage, backpage->flags);
    +
    + if (TestSetPageLocked(backpage))
    + goto monitor_backing_page;
    +
    + if (PageError(backpage)) {
    + _debug("error %lx", backpage->flags);
    + unlock_page(backpage);
    + goto io_error;
    + }
    +
    + if (PageUptodate(backpage))
    + goto backing_page_already_uptodate_unlock;
    +
    + /* we've locked a page that's neither up to date nor erroneous,
    + * so we need to attempt to read it again */
    + goto reread_backing_page;
    +
    + /* the backing page is already up to date, attach the netfs
    + * page to the pagecache and LRU and copy the data across */
    + backing_page_already_uptodate_unlock:
    + _debug("uptodate %lx", backpage->flags);
    + unlock_page(backpage);
    + backing_page_already_uptodate:
    + _debug("- uptodate");
    +
    + ret = add_to_page_cache(netpage, op->mapping, netpage->index,
    + GFP_KERNEL);
    + if (ret < 0) {
    + if (ret == -EEXIST) {
    + page_cache_release(netpage);
    + continue;
    + }
    + goto nomem;
    + }
    +
    + copy_highpage(netpage, backpage);
    +
    + page_cache_release(backpage);
    + backpage = NULL;
    +
    + if (!pagevec_add(mark_pvec, netpage))
    + fscache_mark_pages_cached(op, mark_pvec);
    +
    + page_cache_get(netpage);
    + if (!pagevec_add(&lru_pvec, netpage))
    + __pagevec_lru_add(&lru_pvec);
    +
    + fscache_end_io(op, netpage, 0);
    + page_cache_release(netpage);
    + netpage = NULL;
    + continue;
    + }
    +
    + netpage = NULL;
    +
    + _debug("out");
    +
    +out:
    + /* tidy up */
    + pagevec_lru_add(&lru_pvec);
    +
    + if (newpage)
    + page_cache_release(newpage);
    + if (netpage)
    + page_cache_release(netpage);
    + if (backpage)
    + page_cache_release(backpage);
    + if (monitor) {
    + fscache_put_retrieval(op);
    + kfree(monitor);
    + }
    +
    + list_for_each_entry_safe(netpage, _n, list, lru) {
    + list_del(&netpage->lru);
    + page_cache_release(netpage);
    + }
    +
    + _leave(" = %d", ret);
    + return ret;
    +
    +nomem:
    + _debug("nomem");
    + ret = -ENOMEM;
    + goto out;
    +
    +read_error:
    + _debug("read error %d", ret);
    + if (ret == -ENOMEM)
    + goto out;
    +io_error:
    + cachefiles_io_error_obj(object, "Page read error on backing file");
    + ret = -ENOBUFS;
    + goto out;
    +}
    +
    +/*
    + * read a list of pages from the cache or allocate blocks in which to store
    + * them
    + */
    +int cachefiles_read_or_alloc_pages(struct fscache_retrieval *op,
    + struct list_head *pages,
    + unsigned *nr_pages,
    + gfp_t gfp)
    +{
    + struct cachefiles_object *object;
    + struct cachefiles_cache *cache;
    + struct list_head backpages;
    + struct pagevec pagevec;
    + struct inode *inode;
    + struct page *page, *_n;
    + unsigned shift, nrbackpages;
    + int ret, ret2, space;
    +
    + object = container_of(op->op.object,
    + struct cachefiles_object, fscache);
    + cache = container_of(object->fscache.cache,
    + struct cachefiles_cache, cache);
    +
    + _enter("{OBJ%x,%d},,%d,,",
    + object->fscache.debug_id, atomic_read(&op->op.usage),
    + *nr_pages);
    +
    + if (!object->backer)
    + return -ENOBUFS;
    +
    + space = 1;
    + if (cachefiles_has_space(cache, 0, *nr_pages) < 0)
    + space = 0;
    +
    + inode = object->backer->d_inode;
    + ASSERT(S_ISREG(inode->i_mode));
    + ASSERT(inode->i_mapping->a_ops->bmap);
    + ASSERT(inode->i_mapping->a_ops->readpages);
    +
    + /* calculate the shift required to use bmap */
    + if (inode->i_sb->s_blocksize > PAGE_SIZE)
    + return -ENOBUFS;
    +
    + shift = PAGE_SHIFT - inode->i_sb->s_blocksize_bits;
    +
    + pagevec_init(&pagevec, 0);
    +
    + op->op.processor = cachefiles_read_copier;
    +
    + INIT_LIST_HEAD(&backpages);
    + nrbackpages = 0;
    +
    + ret = space ? -ENODATA : -ENOBUFS;
    + list_for_each_entry_safe(page, _n, pages, lru) {
    + sector_t block0, block;
    +
    + /* we assume the absence or presence of the first block is a
    + * good enough indication for the page as a whole
    + * - TODO: don't use bmap() for this as it is _not_ actually
    + * good enough for this as it doesn't indicate errors, but
    + * it's all we've got for the moment
    + */
    + block0 = page->index;
    + block0 <<= shift;
    +
    + block = inode->i_mapping->a_ops->bmap(inode->i_mapping,
    + block0);
    + _debug("%llx -> %llx",
    + (unsigned long long) block0,
    + (unsigned long long) block);
    +
    + if (block) {
    + /* we have data - add it to the list to give to the
    + * backing fs */
    + list_move(&page->lru, &backpages);
    + (*nr_pages)--;
    + nrbackpages++;
    + } else if (space && pagevec_add(&pagevec, page) == 0) {
    + fscache_mark_pages_cached(op, &pagevec);
    + ret = -ENODATA;
    + }
    + }
    +
    + if (pagevec_count(&pagevec) > 0)
    + fscache_mark_pages_cached(op, &pagevec);
    +
    + if (list_empty(pages))
    + ret = 0;
    +
    + /* submit the apparently valid pages to the backing fs to be read from
    + * disk */
    + if (nrbackpages > 0) {
    + ret2 = cachefiles_read_backing_file(object, op, &backpages,
    + &pagevec);
    + if (ret2 == -ENOMEM || ret2 == -EINTR)
    + ret = ret2;
    + }
    +
    + if (pagevec_count(&pagevec) > 0)
    + fscache_mark_pages_cached(op, &pagevec);
    +
    + _leave(" = %d [nr=%u%s]",
    + ret, *nr_pages, list_empty(pages) ? " empty" : "");
    + return ret;
    +}
    +
    +/*
    + * allocate a block in the cache in which to store a page
    + * - cache withdrawal is prevented by the caller
    + * - returns -EINTR if interrupted
    + * - returns -ENOMEM if ran out of memory
    + * - returns -ENOBUFS if no buffers can be made available
    + * - returns -ENOBUFS if page is beyond EOF
    + * - otherwise:
    + * - the metadata will be retained
    + * - 0 will be returned
    + */
    +int cachefiles_allocate_page(struct fscache_retrieval *op,
    + struct page *page,
    + gfp_t gfp)
    +{
    + struct cachefiles_object *object;
    + struct cachefiles_cache *cache;
    + struct pagevec pagevec;
    + int ret;
    +
    + object = container_of(op->op.object,
    + struct cachefiles_object, fscache);
    + cache = container_of(object->fscache.cache,
    + struct cachefiles_cache, cache);
    +
    + _enter("%p,{%lx},", object, page->index);
    +
    + ret = cachefiles_has_space(cache, 0, 1);
    + if (ret == 0) {
    + pagevec_init(&pagevec, 0);
    + pagevec_add(&pagevec, page);
    + fscache_mark_pages_cached(op, &pagevec);
    + } else {
    + ret = -ENOBUFS;
    + }
    +
    + _leave(" = %d", ret);
    + return ret;
    +}
    +
    +/*
    + * allocate blocks in the cache in which to store a set of pages
    + * - cache withdrawal is prevented by the caller
    + * - returns -EINTR if interrupted
    + * - returns -ENOMEM if ran out of memory
    + * - returns -ENOBUFS if some buffers couldn't be made available
    + * - returns -ENOBUFS if some pages are beyond EOF
    + * - otherwise:
    + * - -ENODATA will be returned
    + * - metadata will be retained for any page marked
    + */
    +int cachefiles_allocate_pages(struct fscache_retrieval *op,
    + struct list_head *pages,
    + unsigned *nr_pages,
    + gfp_t gfp)
    +{
    + struct cachefiles_object *object;
    + struct cachefiles_cache *cache;
    + struct pagevec pagevec;
    + struct page *page;
    + int ret;
    +
    + object = container_of(op->op.object,
    + struct cachefiles_object, fscache);
    + cache = container_of(object->fscache.cache,
    + struct cachefiles_cache, cache);
    +
    + _enter("%p,,,%d,", object, *nr_pages);
    +
    + ret = cachefiles_has_space(cache, 0, *nr_pages);
    + if (ret == 0) {
    + pagevec_init(&pagevec, 0);
    +
    + list_for_each_entry(page, pages, lru) {
    + if (pagevec_add(&pagevec, page) == 0)
    + fscache_mark_pages_cached(op, &pagevec);
    + }
    +
    + if (pagevec_count(&pagevec) > 0)
    + fscache_mark_pages_cached(op, &pagevec);
    + ret = -ENODATA;
    + } else {
    + ret = -ENOBUFS;
    + }
    +
    + _leave(" = %d", ret);
    + return ret;
    +}
    +
    +/*
    + * request a page be stored in the cache
    + * - cache withdrawal is prevented by the caller
    + * - this request may be ignored if there's no cache block available, in which
    + * case -ENOBUFS will be returned
    + * - if the op is in progress, 0 will be returned
    + */
    +int cachefiles_write_page(struct fscache_storage *op, struct page *page)
    +{
    + struct cachefiles_object *object;
    + struct address_space *mapping;
    + int ret;
    +
    + ASSERT(op != NULL);
    + ASSERT(page != NULL);
    +
    + object = container_of(op->op.object,
    + struct cachefiles_object, fscache);
    +
    + _enter("%p,%p{%lx},,,", object, page, page->index);
    +
    + if (!object->backer) {
    + _leave(" = -ENOBUFS");
    + return -ENOBUFS;
    + }
    +
    + ASSERT(S_ISREG(object->backer->d_inode->i_mode));
    +
    + /* copy the page to ext3 and let it store it in its own time */
    + mapping = object->backer->d_inode->i_mapping;
    + ret = -EIO;
    + if (mapping->a_ops->write_one_page)
    + ret = mapping->a_ops->write_one_page(mapping, page->index,
    + page);
    +
    + if (ret != 0) {
    + if (ret == -EIO)
    + cachefiles_io_error_obj(
    + object, "Write page to backing file failed");
    + ret = -ENOBUFS;
    + }
    +
    + _leave(" = %d", ret);
    + return ret;
    +}
    +
    +/*
    + * detach a backing block from a page
    + * - cache withdrawal is prevented by the caller
    + */
    +void cachefiles_uncache_page(struct fscache_object *_object, struct page *page)
    +{
    + struct cachefiles_object *object;
    + struct cachefiles_cache *cache;
    +
    + object = container_of(_object, struct cachefiles_object, fscache);
    + cache = container_of(object->fscache.cache,
    + struct cachefiles_cache, cache);
    +
    + _enter("%p,{%lu}", object, page->index);
    +
    + spin_unlock(&object->fscache.cookie->lock);
    +}
    diff --git a/fs/cachefiles/cf-security.c b/fs/cachefiles/cf-security.c
    new file mode 100644
    index 0000000..65154b8
    --- /dev/null
    +++ b/fs/cachefiles/cf-security.c
    @@ -0,0 +1,94 @@
    +/* CacheFiles security management
    + *
    + * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
    + * Written by David Howells (dhowells@redhat.com)
    + *
    + * This program is free software; you can redistribute it and/or
    + * modify it under the terms of the GNU General Public Licence
    + * as published by the Free Software Foundation; either version
    + * 2 of the Licence, or (at your option) any later version.
    + */
    +
    +#include
    +#include "cf-internal.h"
    +
    +/*
    + * determine the security context within which we access the cache from within
    + * the kernel
    + */
    +int cachefiles_get_security_ID(struct cachefiles_cache *cache)
    +{
    + struct cred *cred;
    + int ret;
    +
    + _enter("");
    +
    + cred = get_kernel_cred("cachefiles", current);
    + if (IS_ERR(cred)) {
    + ret = PTR_ERR(cred);
    + goto error;
    + }
    +
    + cache->cache_cred = cred;
    + ret = 0;
    +error:
    + _leave(" = %d", ret);
    + return ret;
    +}
    +
    +/*
    + * check the security details of the on-disk cache
    + * - must be called with security imposed
    + */
    +int cachefiles_determine_cache_secid(struct cachefiles_cache *cache,
    + struct dentry *root)
    +{
    + struct cred *cred, *saved_cred;
    + int ret;
    +
    + _enter("");
    +
    + /* the cache creds are in use already, so we have to alter a copy */
    + cred = dup_cred(cache->cache_cred);
    + if (!cred)
    + return -ENOMEM;
    +
    + /* use the cache root dir's security context as the basis with which
    + * create files */
    + ret = change_create_files_as(cred, root->d_inode);
    + if (ret < 0) {
    + put_cred(cred);
    + _leave(" = %d [cfa]", ret);
    + return ret;
    + }
    +
    + put_cred(cache->cache_cred);
    + cache->cache_cred = cred;
    +
    + /* check that we have permission to create files and directories with
    + * the security ID we've been given */
    + cachefiles_begin_secure(cache, &saved_cred);
    +
    + ret = security_inode_mkdir(root->d_inode, root, 0);
    + if (ret < 0) {
    + printk(KERN_ERR "CacheFiles:"
    + " Security denies permission to make dirs: error %d",
    + ret);
    + goto error;
    + }
    +
    + ret = security_inode_create(root->d_inode, root, 0);
    + if (ret < 0) {
    + printk(KERN_ERR "CacheFiles:"
    + " Security denies permission to create files: error %d",
    + ret);
    + goto error;
    + }
    +
    +error:
    + cachefiles_end_secure(cache, saved_cred);
    + if (ret == -EOPNOTSUPP)
    + ret = 0;
    + _leave(" = %d", ret);
    + return ret;
    +}
    diff --git a/fs/cachefiles/cf-xattr.c b/fs/cachefiles/cf-xattr.c
    new file mode 100644
    index 0000000..9ae9240
    --- /dev/null
    +++ b/fs/cachefiles/cf-xattr.c
    @@ -0,0 +1,292 @@
    +/* CacheFiles extended attribute management
    + *
    + * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
    + * Written by David Howells (dhowells@redhat.com)
    + *
    + * This program is free software; you can redistribute it and/or
    + * modify it under the terms of the GNU General Public Licence
    + * as published by the Free Software Foundation; either version
    + * 2 of the Licence, or (at your option) any later version.
    + */
    +
    +#include
    +#include
    +#include
    +#include
    +#include
    +#include
    +#include
    +#include "cf-internal.h"
    +
    +static const char cachefiles_xattr_cache[] =
    + XATTR_USER_PREFIX "CacheFiles.cache";
    +
    +/*
    + * check the type label on an object
    + * - done using xattrs
    + */
    +int cachefiles_check_object_type(struct cachefiles_object *object)
    +{
    + struct dentry *dentry = object->dentry;
    + char type[3], xtype[3];
    + int ret;
    +
    + ASSERT(dentry);
    + ASSERT(dentry->d_inode);
    +
    + if (!object->fscache.cookie)
    + strcpy(type, "C3");
    + else
    + snprintf(type, 3, "%02x", object->fscache.cookie->def->type);
    +
    + _enter("%p{%s}", object, type);
    +
    + /* attempt to install a type label directly */
    + ret = vfs_setxattr(dentry, cachefiles_xattr_cache, type, 2,
    + XATTR_CREATE);
    + if (ret == 0) {
    + _debug("SET"); /* we succeeded */
    + goto error;
    + }
    +
    + if (ret != -EEXIST) {
    + kerror("Can't set xattr on %*.*s [%lu] (err %d)",
    + dentry->d_name.len, dentry->d_name.len,
    + dentry->d_name.name, dentry->d_inode->i_ino,
    + -ret);
    + goto error;
    + }
    +
    + /* read the current type label */
    + ret = vfs_getxattr(dentry, cachefiles_xattr_cache, xtype, 3);
    + if (ret < 0) {
    + if (ret == -ERANGE)
    + goto bad_type_length;
    +
    + kerror("Can't read xattr on %*.*s [%lu] (err %d)",
    + dentry->d_name.len, dentry->d_name.len,
    + dentry->d_name.name, dentry->d_inode->i_ino,
    + -ret);
    + goto error;
    + }
    +
    + /* check the type is what we're expecting */
    + if (ret != 2)
    + goto bad_type_length;
    +
    + if (xtype[0] != type[0] || xtype[1] != type[1])
    + goto bad_type;
    +
    + ret = 0;
    +
    +error:
    + _leave(" = %d", ret);
    + return ret;
    +
    +bad_type_length:
    + kerror("Cache object %lu type xattr length incorrect",
    + dentry->d_inode->i_ino);
    + ret = -EIO;
    + goto error;
    +
    +bad_type:
    + xtype[2] = 0;
    + kerror("Cache object %*.*s [%lu] type %s not %s",
    + dentry->d_name.len, dentry->d_name.len,
    + dentry->d_name.name, dentry->d_inode->i_ino,
    + xtype, type);
    + ret = -EIO;
    + goto error;
    +}
    +
    +/*
    + * set the state xattr on a cache file
    + */
    +int cachefiles_set_object_xattr(struct cachefiles_object *object,
    + struct cachefiles_xattr *auxdata)
    +{
    + struct dentry *dentry = object->dentry;
    + int ret;
    +
    + ASSERT(object->fscache.cookie);
    + ASSERT(dentry);
    +
    + _enter("%p,#%d", object, auxdata->len);
    +
    + /* attempt to install the cache metadata directly */
    + _debug("SET %s #%u", object->fscache.cookie->def->name, auxdata->len);
    +
    + ret = vfs_setxattr(dentry, cachefiles_xattr_cache,
    + &auxdata->type, auxdata->len,
    + XATTR_CREATE);
    + if (ret < 0 && ret != -ENOMEM)
    + cachefiles_io_error_obj(
    + object,
    + "Failed to set xattr with error %d", ret);
    +
    + _leave(" = %d", ret);
    + return ret;
    +}
    +
    +/*
    + * update the state xattr on a cache file
    + */
    +int cachefiles_update_object_xattr(struct cachefiles_object *object,
    + struct cachefiles_xattr *auxdata)
    +{
    + struct dentry *dentry = object->dentry;
    + int ret;
    +
    + ASSERT(object->fscache.cookie);
    + ASSERT(dentry);
    +
    + _enter("%p,#%d", object, auxdata->len);
    +
    + /* attempt to install the cache metadata directly */
    + _debug("SET %s #%u", object->fscache.cookie->def->name, auxdata->len);
    +
    + ret = vfs_setxattr(dentry, cachefiles_xattr_cache,
    + &auxdata->type, auxdata->len,
    + XATTR_REPLACE);
    + if (ret < 0 && ret != -ENOMEM)
    + cachefiles_io_error_obj(
    + object,
    + "Failed to update xattr with error %d", ret);
    +
    + _leave(" = %d", ret);
    + return ret;
    +}
    +
    +/*
    + * check the state xattr on a cache file
    + * - return -ESTALE if the object should be deleted
    + */
    +int cachefiles_check_object_xattr(struct cachefiles_object *object,
    + struct cachefiles_xattr *auxdata)
    +{
    + struct cachefiles_xattr *auxbuf;
    + struct dentry *dentry = object->dentry;
    + int ret;
    +
    + _enter("%p,#%d", object, auxdata->len);
    +
    + ASSERT(dentry);
    + ASSERT(dentry->d_inode);
    +
    + auxbuf = kmalloc(sizeof(struct cachefiles_xattr) + 512, GFP_KERNEL);
    + if (!auxbuf) {
    + _leave(" = -ENOMEM");
    + return -ENOMEM;
    + }
    +
    + /* read the current type label */
    + ret = vfs_getxattr(dentry, cachefiles_xattr_cache,
    + &auxbuf->type, 512 + 1);
    + if (ret < 0) {
    + if (ret == -ENODATA)
    + goto stale; /* no attribute - power went off
    + * mid-cull? */
    +
    + if (ret == -ERANGE)
    + goto bad_type_length;
    +
    + cachefiles_io_error_obj(object,
    + "Can't read xattr on %lu (err %d)",
    + dentry->d_inode->i_ino, -ret);
    + goto error;
    + }
    +
    + /* check the on-disk object */
    + if (ret < 1)
    + goto bad_type_length;
    +
    + if (auxbuf->type != auxdata->type)
    + goto stale;
    +
    + auxbuf->len = ret;
    +
    + /* consult the netfs */
    + if (object->fscache.cookie->def->check_aux) {
    + fscache_checkaux_t result;
    + unsigned int dlen;
    +
    + dlen = auxbuf->len - 1;
    +
    + _debug("checkaux %s #%u",
    + object->fscache.cookie->def->name, dlen);
    +
    + result = object->fscache.cookie->def->check_aux(
    + object->fscache.cookie->netfs_data,
    + &auxbuf->data, dlen);
    +
    + switch (result) {
    + /* entry okay as is */
    + case FSCACHE_CHECKAUX_OKAY:
    + goto okay;
    +
    + /* entry requires update */
    + case FSCACHE_CHECKAUX_NEEDS_UPDATE:
    + break;
    +
    + /* entry requires deletion */
    + case FSCACHE_CHECKAUX_OBSOLETE:
    + goto stale;
    +
    + default:
    + BUG();
    + }
    +
    + /* update the current label */
    + ret = vfs_setxattr(dentry, cachefiles_xattr_cache,
    + &auxdata->type, auxdata->len,
    + XATTR_REPLACE);
    + if (ret < 0) {
    + cachefiles_io_error_obj(object,
    + "Can't update xattr on %lu"
    + " (error %d)",
    + dentry->d_inode->i_ino, -ret);
    + goto error;
    + }
    + }
    +
    +okay:
    + ret = 0;
    +
    +error:
    + kfree(auxbuf);
    + _leave(" = %d", ret);
    + return ret;
    +
    +bad_type_length:
    + kerror("Cache object %lu xattr length incorrect",
    + dentry->d_inode->i_ino);
    + ret = -EIO;
    + goto error;
    +
    +stale:
    + ret = -ESTALE;
    + goto error;
    +}
    +
    +/*
    + * remove the object's xattr to mark it stale
    + */
    +int cachefiles_remove_object_xattr(struct cachefiles_cache *cache,
    + struct dentry *dentry)
    +{
    + int ret;
    +
    + ret = vfs_removexattr(dentry, cachefiles_xattr_cache);
    + if (ret < 0) {
    + if (ret == -ENOENT || ret == -ENODATA)
    + ret = 0;
    + else if (ret != -ENOMEM)
    + cachefiles_io_error(cache,
    + "Can't remove xattr from %lu"
    + " (error %d)",
    + dentry->d_inode->i_ino, -ret);
    + }
    +
    + _leave(" = %d", ret);
    + return ret;
    +}

    -
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  7. [PATCH 06/24] CRED: Request a credential record for a kernel service

    Request a credential record for the named kernel service. This produces a
    cred struct with appropriate DAC and MAC controls for effecting that service.
    It may be used to override the credentials on a task to do work on that task's
    behalf.

    Signed-off-by: David Howells
    ---

    include/linux/cred.h | 2 +
    include/linux/security.h | 43 +++++++++++++++++++++++++++++
    kernel/cred.c | 68 ++++++++++++++++++++++++++++++++++++++++++++++
    security/dummy.c | 13 +++++++++
    security/selinux/hooks.c | 47 ++++++++++++++++++++++++++++++++
    5 files changed, 173 insertions(+), 0 deletions(-)

    diff --git a/include/linux/cred.h b/include/linux/cred.h
    index 78924d5..b2d0ac9 100644
    --- a/include/linux/cred.h
    +++ b/include/linux/cred.h
    @@ -51,6 +51,8 @@ extern void change_fsgid(struct cred *, gid_t);
    extern void change_groups(struct cred *, struct group_info *);
    extern void change_cap(struct cred *, kernel_cap_t);
    extern struct cred *dup_cred(const struct cred *);
    +extern struct cred *get_kernel_cred(const char *, struct task_struct *);
    +extern int change_create_files_as(struct cred *, struct inode *);

    /**
    * get_cred - Get an extra reference on a credentials record
    diff --git a/include/linux/security.h b/include/linux/security.h
    index 0933333..b7c06c3 100644
    --- a/include/linux/security.h
    +++ b/include/linux/security.h
    @@ -514,6 +514,18 @@ struct request_sock;
    * @cred_destroy:
    * Destroy the credentials attached to a cred structure.
    * @cred points to the credentials structure that is to be destroyed.
    + * @cred_kernel_act_as:
    + * Set the credentials for a kernel service to act as (subjective context).
    + * @cred points to the credentials structure to be filled in.
    + * @service names the service making the request.
    + * @daemon: A userspace daemon to be used as a base for the context.
    + * Return 0 if successful.
    + * @cred_create_files_as:
    + * Set the file creation context in a credentials record to be the same as
    + * the objective context of an inode.
    + * @cred points to the credentials structure to be altered.
    + * @inode points to the inode to use as a reference.
    + * Return 0 if successful.
    *
    * Security hooks for task operations.
    *
    @@ -1275,6 +1287,9 @@ struct security_operations {

    int (*cred_dup)(struct cred *cred);
    void (*cred_destroy)(struct cred *cred);
    + int (*cred_kernel_act_as)(struct cred *cred, const char *service,
    + struct task_struct *daemon);
    + int (*cred_create_files_as)(struct cred *cred, struct inode *inode);

    int (*task_create) (unsigned long clone_flags);
    int (*task_alloc_security) (struct task_struct * p);
    @@ -1894,6 +1909,21 @@ static inline void security_cred_destroy(struct cred *cred)
    return security_ops->cred_destroy(cred);
    }

    +static inline int security_cred_kernel_act_as(struct cred *cred,
    + const char *service,
    + struct task_struct *daemon)
    +{
    + return security_ops->cred_kernel_act_as(cred, service, daemon);
    +}
    +
    +static inline int security_cred_create_files_as(struct cred *cred,
    + struct inode *inode)
    +{
    + if (IS_PRIVATE(inode))
    + return -EINVAL;
    + return security_ops->cred_create_files_as(cred, inode);
    +}
    +
    static inline int security_task_create (unsigned long clone_flags)
    {
    return security_ops->task_create (clone_flags);
    @@ -2586,6 +2616,19 @@ static inline void security_cred_destroy(struct cred *cred)
    {
    }

    +static inline int security_cred_kernel_act_as(struct cred *cred,
    + const char *service,
    + struct task_struct *daemon)
    +{
    + return 0;
    +}
    +
    +static inline int security_cred_create_files_as(struct cred *cred,
    + struct inode *inode)
    +{
    + return 0;
    +}
    +
    static inline int security_task_create (unsigned long clone_flags)
    {
    return 0;
    diff --git a/kernel/cred.c b/kernel/cred.c
    index f545634..294b33a 100644
    --- a/kernel/cred.c
    +++ b/kernel/cred.c
    @@ -210,3 +210,71 @@ void change_cap(struct cred *cred, kernel_cap_t cap)
    }

    EXPORT_SYMBOL(change_cap);
    +
    +/**
    + * get_kernel_cred - Get credentials for a named kernel service
    + * @service: The name of the service
    + * @daemon: A userspace daemon to be used as a base for the context
    + *
    + * Get a set of credentials for a specific kernel service. These can then be
    + * used to override a task's credentials so that work can be done on behalf of
    + * that task.
    + *

    + * @daemon is used to provide a base for the security context, but can be NULL.
    + * If @daemon is supplied, then the cred's uid, gid and groups list will be
    + * derived from that; otherwise they'll be set to 0 and no groups.
    + *
    + * @daemon is also passed to the LSM module as a base from which to initialise
    + * any MAC controls.
    + *
    + * The caller may change these controls afterwards if desired.
    + */
    +struct cred *get_kernel_cred(const char *service,
    + struct task_struct *daemon)
    +{
    + struct cred *cred, *dcred;
    + int ret;
    +
    + cred = kzalloc(sizeof *cred, GFP_KERNEL);
    + if (!cred)
    + return ERR_PTR(-ENOMEM);
    +
    + if (daemon) {
    + rcu_read_lock();
    + dcred = task_cred(daemon);
    + cred->uid = dcred->uid;
    + cred->gid = dcred->gid;
    + cred->group_info = dcred->group_info;
    + atomic_inc(&cred->group_info->usage);
    + rcu_read_unlock();
    + } else {
    + cred->group_info = &init_groups;
    + atomic_inc(&init_groups.usage);
    + }
    +
    + ret = security_cred_kernel_act_as(cred, service, daemon);
    + if (ret < 0) {
    + put_cred(cred);
    + return ERR_PTR(ret);
    + }
    +
    + return cred;
    +}
    +
    +EXPORT_SYMBOL(get_kernel_cred);
    +
    +/**
    + * change_create_files_as - Change the file creation context in a new cred record
    + * @cred: The credential record to alter
    + * @inode: The inode to take the context from
    + *
    + * Change the file creation context in a new credentials record to be the same
    + * as the object context of the specified inode, so that the new inodes have
    + * the same MAC context as that inode.
    + */
    +int change_create_files_as(struct cred *cred, struct inode *inode)
    +{
    + return security_cred_create_files_as(cred, inode);
    +}
    +
    +EXPORT_SYMBOL(change_create_files_as);
    diff --git a/security/dummy.c b/security/dummy.c
    index 7e52156..348c09b 100644
    --- a/security/dummy.c
    +++ b/security/dummy.c
    @@ -488,6 +488,17 @@ static void dummy_cred_destroy(struct cred *cred)
    {
    }

    +static int dummy_cred_kernel_act_as(struct cred *cred, const char *service,
    + struct task_struct *daemon)
    +{
    + return 0;
    +}
    +
    +static int dummy_cred_create_files_as(struct cred *cred, struct inode *inode)
    +{
    + return 0;
    +}
    +
    static int dummy_task_create (unsigned long clone_flags)
    {
    return 0;
    @@ -1064,6 +1075,8 @@ void security_fixup_ops (struct security_operations *ops)
    set_to_dummy_if_null(ops, file_receive);
    set_to_dummy_if_null(ops, cred_dup);
    set_to_dummy_if_null(ops, cred_destroy);
    + set_to_dummy_if_null(ops, cred_kernel_act_as);
    + set_to_dummy_if_null(ops, cred_create_files_as);
    set_to_dummy_if_null(ops, task_create);
    set_to_dummy_if_null(ops, task_alloc_security);
    set_to_dummy_if_null(ops, task_free_security);
    diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
    index 2ee1712..fc4e75d 100644
    --- a/security/selinux/hooks.c
    +++ b/security/selinux/hooks.c
    @@ -2771,6 +2771,51 @@ static void selinux_cred_destroy(struct cred *cred)
    kfree(cred->security);
    }

    +/*
    + * get the credentials for a kernel service, deriving the subjective context
     * from the credentials of a userspace daemon if one is supplied
    + * - all the creation contexts are set to unlabelled
    + */
    +static int selinux_cred_kernel_act_as(struct cred *cred,
    + const char *service,
    + struct task_struct *daemon)
    +{
    + struct task_security_struct *tsec;
    + struct cred_security_struct *csec;
    + u32 ksid;
    + int ret;
    +
    + tsec = daemon ? daemon->security : init_task.security;
    +
    + ret = security_transition_sid(tsec->victim_sid, SECINITSID_KERNEL,
    + SECCLASS_PROCESS, &ksid);
    + if (ret < 0)
    + return ret;
    +
    + csec = kzalloc(sizeof(struct cred_security_struct), GFP_KERNEL);
    + if (!csec)
    + return -ENOMEM;
    +
    + csec->action_sid = ksid;
    + csec->create_sid = SECINITSID_UNLABELED;
    + csec->keycreate_sid = SECINITSID_UNLABELED;
    + csec->sockcreate_sid = SECINITSID_UNLABELED;
    + cred->security = csec;
    + return 0;
    +}
    +
    +/*
    + * set the file creation context in a credentials record to the same as the
    + * objective context of the specified inode
    + */
    +static int selinux_cred_create_files_as(struct cred *cred, struct inode *inode)
    +{
    + struct cred_security_struct *csec = cred->security;
    + struct inode_security_struct *isec = inode->i_security;
    +
    + csec->create_sid = isec->sid;
    + return 0;
    +}

    /* task security operations */

    @@ -4888,6 +4933,8 @@ static struct security_operations selinux_ops = {

    .cred_dup = selinux_cred_dup,
    .cred_destroy = selinux_cred_destroy,
    + .cred_kernel_act_as = selinux_cred_kernel_act_as,
    + .cred_create_files_as = selinux_cred_create_files_as,

    .task_create = selinux_task_create,
    .task_alloc_security = selinux_task_alloc_security,


  8. [PATCH 07/24] FS-Cache: Release page->private after failed readahead

    The attached patch causes read_cache_pages() to release page-private data on a
    page for which add_to_page_cache() fails or the filler function fails. This
    permits pages with caching references associated with them to be cleaned up.

    The invalidatepage() address space op is called (indirectly) to do the honours.

    Signed-Off-By: David Howells
    ---

    mm/readahead.c | 40 ++++++++++++++++++++++++++++++++++++++--
    1 files changed, 38 insertions(+), 2 deletions(-)

    diff --git a/mm/readahead.c b/mm/readahead.c
    index 39bf45d..12d1378 100644
    --- a/mm/readahead.c
    +++ b/mm/readahead.c
    @@ -15,6 +15,7 @@
    #include
    #include
    #include
    +#include

    void default_unplug_io_fn(struct backing_dev_info *bdi, struct page *page)
    {
    @@ -51,6 +52,41 @@ EXPORT_SYMBOL_GPL(file_ra_state_init);

    #define list_to_page(head) (list_entry((head)->prev, struct page, lru))

    +/*
    + * see if a page needs releasing upon read_cache_pages() failure
    + * - the caller of read_cache_pages() may have set PG_private before calling,
    + * such as the NFS fs marking pages that are cached locally on disk, thus we
    + * need to give the fs a chance to clean up in the event of an error
    + */
    +static void read_cache_pages_invalidate_page(struct address_space *mapping,
    + struct page *page)
    +{
    + if (PagePrivate(page)) {
    + if (TestSetPageLocked(page))
    + BUG();
    + page->mapping = mapping;
    + do_invalidatepage(page, 0);
    + page->mapping = NULL;
    + unlock_page(page);
    + }
    + page_cache_release(page);
    +}
    +
    +/*
    + * release a list of pages, invalidating them first if need be
    + */
    +static void read_cache_pages_invalidate_pages(struct address_space *mapping,
    + struct list_head *pages)
    +{
    + struct page *victim;
    +
    + while (!list_empty(pages)) {
    + victim = list_to_page(pages);
    + list_del(&victim->lru);
    + read_cache_pages_invalidate_page(mapping, victim);
    + }
    +}
    +
    /**
    * read_cache_pages - populate an address space with some pages & start reads against them
    * @mapping: the address_space
    @@ -74,14 +110,14 @@ int read_cache_pages(struct address_space *mapping, struct list_head *pages,
    page = list_to_page(pages);
    list_del(&page->lru);
    if (add_to_page_cache(page, mapping, page->index, GFP_KERNEL)) {
    - page_cache_release(page);
    + read_cache_pages_invalidate_page(mapping, page);
    continue;
    }
    ret = filler(data, page);
    if (!pagevec_add(&lru_pvec, page))
    __pagevec_lru_add(&lru_pvec);
    if (ret) {
    - put_pages_list(pages);
    + read_cache_pages_invalidate_pages(mapping, pages);
    break;
    }
    task_io_account_read(PAGE_CACHE_SIZE);


  9. [PATCH 20/24] AFS: Add a function to excise a rejected write from the pagecache

    Add a function - cancel_rejected_write() - to excise a rejected write from the
    pagecache. This function is related to the truncation family of routines. It
    permits the pages modified by a network filesystem client (such as AFS) to be
    excised and discarded from the pagecache if the attempt to write them back to
    the server fails.

    The dirty and writeback states of the afflicted pages are cancelled and the
    pages themselves are detached for recycling. All PTEs referring to those
    pages are removed.

    Note that the locking is tricky as it's very easy to deadlock against
    truncate() and other routines once the pages have been unlocked as part of the
    writeback process. To this end, the PG_error flag is set, then the
    PG_writeback flag is cleared, and only *then* can lock_page() be called.

    Signed-off-by: David Howells
    ---

    include/linux/mm.h | 5 ++-
    mm/truncate.c | 83 ++++++++++++++++++++++++++++++++++++++++++++++++++++
    2 files changed, 86 insertions(+), 2 deletions(-)

    diff --git a/include/linux/mm.h b/include/linux/mm.h
    index 1692dd6..49863df 100644
    --- a/include/linux/mm.h
    +++ b/include/linux/mm.h
    @@ -1091,12 +1091,13 @@ extern int do_munmap(struct mm_struct *, unsigned long, size_t);

    extern unsigned long do_brk(unsigned long, unsigned long);

    -/* filemap.c */
    -extern unsigned long page_unuse(struct page *);
    +/* truncate.c */
    extern void truncate_inode_pages(struct address_space *, loff_t);
    extern void truncate_inode_pages_range(struct address_space *,
    loff_t lstart, loff_t lend);
    +extern void cancel_rejected_write(struct address_space *, pgoff_t, pgoff_t);

    +/* filemap.c */
    /* generic vm_area_ops exported for stackable file systems */
    extern int filemap_fault(struct vm_area_struct *, struct vm_fault *);

    diff --git a/mm/truncate.c b/mm/truncate.c
    index 5555cb0..92a68f7 100644
    --- a/mm/truncate.c
    +++ b/mm/truncate.c
    @@ -462,3 +462,86 @@ int invalidate_inode_pages2(struct address_space *mapping)
    return invalidate_inode_pages2_range(mapping, 0, -1);
    }
    EXPORT_SYMBOL_GPL(invalidate_inode_pages2);
    +
    +/*
    + * Cancel that part of a rejected write that affects a particular page
    + */
    +static void cancel_rejected_page(struct address_space *mapping,
    + struct page *page, pgoff_t *_next)
    +{
    + if (!TestSetPageError(page)) {
    + /* can't lock the page until we've cleared PG_writeback lest we
    + * deadlock with truncate (amongst other things) */
    + end_page_writeback(page);
    + if (page->mapping == mapping) {
    + lock_page(page);
    + if (page->mapping == mapping) {
    + truncate_complete_page(mapping, page);
    + *_next = page->index + 1;
    + }
    + unlock_page(page);
    + }
    + } else if (PageWriteback(page) || PageDirty(page)) {
    + BUG();
    + }
    +}
    +
    +/**
    + * cancel_rejected_write - Cancel a write on a contiguous set of pages
    + * @mapping: mapping affected
    + * @start: first page in set
    + * @end: last page in set
    + *
    + * Cancel a write of a contiguous set of pages when the writeback was rejected
    + * by the target medium or server.
    + *
    + * The pages in question are detached and discarded from the pagecache, and the
    + * writeback and dirty states are cleared prior to invalidation. The caller
    + * must make sure that all the pages in the range are present in the pagecache,
    + * and the caller must hold PG_writeback on each of them. NOTE! All the pages
    + * are locked and unlocked as part of this process, so the caller must take
    + * care to avoid deadlock.
    + *
    + * The PTEs pointing to those pages are also cleared, leading to the PTEs being
    + * reset when new pages are allocated and the contents reloaded.
    + */
    +void cancel_rejected_write(struct address_space *mapping,
    + pgoff_t start, pgoff_t end)
    +{
    + struct pagevec pvec;
    + pgoff_t n;
    + int i;
    +
    + BUG_ON(mapping->nrpages < end - start + 1);
    +
    + /* dispose of any PTEs pointing to the affected pages */
    + unmap_mapping_range(mapping,
    + (loff_t)start << PAGE_CACHE_SHIFT,
    + (loff_t)(end - start + 1) << PAGE_CACHE_SHIFT,
    + 0);
    +
    + pagevec_init(&pvec, 0);
    + do {
    + cond_resched();
    + n = end - start + 1;
    + if (n > PAGEVEC_SIZE)
    + n = PAGEVEC_SIZE;
    + n = pagevec_lookup(&pvec, mapping, start, n);
    + for (i = 0; i < n; i++) {
    + struct page *page = pvec.pages[i];
    +
    + if (page->index < start || page->index > end)
    + continue;
    + start++;
    + cancel_rejected_page(mapping, page, &start);
    + }
    + pagevec_release(&pvec);
    + } while (start - 1 < end);
    +
    + /* dispose of any new PTEs pointing to the affected pages */
    + unmap_mapping_range(mapping,
    + (loff_t)start << PAGE_CACHE_SHIFT,
    + (loff_t)(end - start + 1) << PAGE_CACHE_SHIFT,
    + 0);
    +}
    +EXPORT_SYMBOL_GPL(cancel_rejected_write);


  10. [PATCH 23/24] AF_RXRPC: Save the operation ID for debugging

    Save the operation ID to be used with a call that we're making for display
    through /proc/net/rxrpc_calls. This helps debugging stuck operations as we
    then know what they are.

    Signed-off-by: David Howells
    ---

    include/net/af_rxrpc.h | 1 +
    net/rxrpc/af_rxrpc.c | 3 +++
    net/rxrpc/ar-internal.h | 1 +
    net/rxrpc/ar-proc.c | 7 ++++---
    4 files changed, 9 insertions(+), 3 deletions(-)

    diff --git a/include/net/af_rxrpc.h b/include/net/af_rxrpc.h
    index 00c2eaa..7e99733 100644
    --- a/include/net/af_rxrpc.h
    +++ b/include/net/af_rxrpc.h
    @@ -38,6 +38,7 @@ extern void rxrpc_kernel_intercept_rx_messages(struct socket *,
    extern struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *,
    struct sockaddr_rxrpc *,
    struct key *,
    + u32,
    unsigned long,
    gfp_t);
    extern int rxrpc_kernel_send_data(struct rxrpc_call *, struct msghdr *,
    diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
    index c58fa0d..621c1dd 100644
    --- a/net/rxrpc/af_rxrpc.c
    +++ b/net/rxrpc/af_rxrpc.c
    @@ -251,6 +251,7 @@ static struct rxrpc_transport *rxrpc_name_to_transport(struct socket *sock,
    * @sock: The socket on which to make the call
    * @srx: The address of the peer to contact (defaults to socket setting)
    * @key: The security context to use (defaults to socket setting)
    + * @operation_ID: The operation ID for this call (debugging only)
    * @user_call_ID: The ID to use
    *
    * Allow a kernel service to begin a call on the nominated socket. This just
    @@ -263,6 +264,7 @@ static struct rxrpc_transport *rxrpc_name_to_transport(struct socket *sock,
    struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock,
    struct sockaddr_rxrpc *srx,
    struct key *key,
    + u32 operation_ID,
    unsigned long user_call_ID,
    gfp_t gfp)
    {
    @@ -311,6 +313,7 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock,
    call = rxrpc_get_client_call(rx, trans, bundle, user_call_ID, true,
    gfp);
    rxrpc_put_bundle(trans, bundle);
    + call->op_id = operation_ID;
    out:
    rxrpc_put_transport(trans);
    release_sock(&rx->sk);
    diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
    index 58aaf89..f362e7e 100644
    --- a/net/rxrpc/ar-internal.h
    +++ b/net/rxrpc/ar-internal.h
    @@ -367,6 +367,7 @@ struct rxrpc_call {
    RXRPC_CALL_DEAD, /* - call is dead */
    } state;
    int debug_id; /* debug ID for printks */
    + u32 op_id; /* operation ID (for debugging only) */
    u8 channel; /* connection channel occupied by this call */

    /* transmission-phase ACK management */
    diff --git a/net/rxrpc/ar-proc.c b/net/rxrpc/ar-proc.c
    index 2e83ce3..521b826 100644
    --- a/net/rxrpc/ar-proc.c
    +++ b/net/rxrpc/ar-proc.c
    @@ -53,8 +53,8 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
    if (v == &rxrpc_calls) {
    seq_puts(seq,
    "Proto Local Remote "
    - " SvID ConnID CallID End Use State Abort "
    - " UserID\n");
    + " SvID ConnID CallID OpID End Use State "
    + " Abort UserID\n");
    return 0;
    }

    @@ -70,13 +70,14 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
    ntohs(trans->peer->srx.transport.sin.sin_port));

    seq_printf(seq,
    - "UDP %-22.22s %-22.22s %4x %08x %08x %s %3u"
    + "UDP %-22.22s %-22.22s %4x %08x %08x %08x %s %3u"
    " %-8.8s %08x %lx\n",
    lbuff,
    rbuff,
    ntohs(call->conn->service_id),
    ntohl(call->conn->cid),
    ntohl(call->call_id),
    + call->op_id,
    call->conn->in_clientflag ? "Svc" : "Clt",
    atomic_read(&call->usage),
    rxrpc_call_states[call->state],


  11. [PATCH 24/24] FS-Cache: Make kAFS use FS-Cache

    The attached patch makes the kAFS filesystem in fs/afs/ use FS-Cache, and
    through it any attached caches. The kAFS filesystem will use caching
    automatically if it's available.

    Signed-Off-By: David Howells
    ---

    fs/Kconfig | 8 +
    fs/afs/Makefile | 3
    fs/afs/cache.c | 505 ++++++++++++++++++++++++++++++++++------------------
    fs/afs/cache.h | 15 --
    fs/afs/cell.c | 16 +-
    fs/afs/file.c | 212 +++++++++++++---------
    fs/afs/fsclient.c | 32 ++-
    fs/afs/inode.c | 25 +--
    fs/afs/internal.h | 53 ++---
    fs/afs/main.c | 27 +--
    fs/afs/mntpt.c | 4
    fs/afs/rxrpc.c | 1
    fs/afs/vlclient.c | 2
    fs/afs/vlocation.c | 23 +-
    fs/afs/volume.c | 14 -
    fs/afs/write.c | 6 -
    16 files changed, 563 insertions(+), 383 deletions(-)

    diff --git a/fs/Kconfig b/fs/Kconfig
    index ebc7341..158a8d8 100644
    --- a/fs/Kconfig
    +++ b/fs/Kconfig
    @@ -2059,6 +2059,14 @@ config AFS_DEBUG

    If unsure, say N.

    +config AFS_FSCACHE
    + bool "Provide AFS client caching support (EXPERIMENTAL)"
    + depends on EXPERIMENTAL
    + depends on AFS_FS=m && FSCACHE || AFS_FS=y && FSCACHE=y
    + help
    + Say Y here if you want AFS data to be cached locally on disk through
    + the generic filesystem cache manager
    +
    config 9P_FS
    tristate "Plan 9 Resource Sharing Support (9P2000) (Experimental)"
    depends on INET && NET_9P && EXPERIMENTAL
    diff --git a/fs/afs/Makefile b/fs/afs/Makefile
    index a666710..4f64b95 100644
    --- a/fs/afs/Makefile
    +++ b/fs/afs/Makefile
    @@ -2,7 +2,10 @@
    # Makefile for Red Hat Linux AFS client.
    #

    +afs-cache-$(CONFIG_AFS_FSCACHE) := cache.o
    +
    kafs-objs := \
    + $(afs-cache-y) \
    callback.o \
    cell.o \
    cmservice.o \
    diff --git a/fs/afs/cache.c b/fs/afs/cache.c
    index de0d7de..a5d6a70 100644
    --- a/fs/afs/cache.c
    +++ b/fs/afs/cache.c
    @@ -9,248 +9,399 @@
    * 2 of the License, or (at your option) any later version.
    */

    -#ifdef AFS_CACHING_SUPPORT
    -static cachefs_match_val_t afs_cell_cache_match(void *target,
    - const void *entry);
    -static void afs_cell_cache_update(void *source, void *entry);
    -
    -struct cachefs_index_def afs_cache_cell_index_def = {
    - .name = "cell_ix",
    - .data_size = sizeof(struct afs_cache_cell),
    - .keys[0] = { CACHEFS_INDEX_KEYS_ASCIIZ, 64 },
    - .match = afs_cell_cache_match,
    - .update = afs_cell_cache_update,
    +#include
    +#include
    +#include "internal.h"
    +
    +static uint16_t afs_cell_cache_get_key(const void *cookie_netfs_data,
    + void *buffer, uint16_t buflen);
    +static uint16_t afs_cell_cache_get_aux(const void *cookie_netfs_data,
    + void *buffer, uint16_t buflen);
    +static fscache_checkaux_t afs_cell_cache_check_aux(void *cookie_netfs_data,
    + const void *buffer,
    + uint16_t buflen);
    +
    +static uint16_t afs_vlocation_cache_get_key(const void *cookie_netfs_data,
    + void *buffer, uint16_t buflen);
    +static uint16_t afs_vlocation_cache_get_aux(const void *cookie_netfs_data,
    + void *buffer, uint16_t buflen);
    +static fscache_checkaux_t afs_vlocation_cache_check_aux(void *cookie_netfs_data,
    + const void *buffer,
    + uint16_t buflen);
    +
    +static uint16_t afs_volume_cache_get_key(const void *cookie_netfs_data,
    + void *buffer, uint16_t buflen);
    +
    +static uint16_t afs_vnode_cache_get_key(const void *cookie_netfs_data,
    + void *buffer, uint16_t buflen);
    +static void afs_vnode_cache_get_attr(const void *cookie_netfs_data,
    + uint64_t *size);
    +static uint16_t afs_vnode_cache_get_aux(const void *cookie_netfs_data,
    + void *buffer, uint16_t buflen);
    +static fscache_checkaux_t afs_vnode_cache_check_aux(void *cookie_netfs_data,
    + const void *buffer,
    + uint16_t buflen);
    +static void afs_vnode_cache_now_uncached(void *cookie_netfs_data);
    +
    +static struct fscache_netfs_operations afs_cache_ops = {
    +};
    +
    +struct fscache_netfs afs_cache_netfs = {
    + .name = "afs",
    + .version = 0,
    + .ops = &afs_cache_ops,
    +};
    +
    +struct fscache_cookie_def afs_cell_cache_index_def = {
    + .name = "AFS.cell",
    + .type = FSCACHE_COOKIE_TYPE_INDEX,
    + .get_key = afs_cell_cache_get_key,
    + .get_aux = afs_cell_cache_get_aux,
    + .check_aux = afs_cell_cache_check_aux,
    +};
    +
    +struct fscache_cookie_def afs_vlocation_cache_index_def = {
    + .name = "AFS.vldb",
    + .type = FSCACHE_COOKIE_TYPE_INDEX,
    + .get_key = afs_vlocation_cache_get_key,
    + .get_aux = afs_vlocation_cache_get_aux,
    + .check_aux = afs_vlocation_cache_check_aux,
    +};
    +
    +struct fscache_cookie_def afs_volume_cache_index_def = {
    + .name = "AFS.volume",
    + .type = FSCACHE_COOKIE_TYPE_INDEX,
    + .get_key = afs_volume_cache_get_key,
    +};
    +
    +struct fscache_cookie_def afs_vnode_cache_index_def = {
    + .name = "AFS.vnode",
    + .type = FSCACHE_COOKIE_TYPE_DATAFILE,
    + .get_key = afs_vnode_cache_get_key,
    + .get_attr = afs_vnode_cache_get_attr,
    + .get_aux = afs_vnode_cache_get_aux,
    + .check_aux = afs_vnode_cache_check_aux,
    + .now_uncached = afs_vnode_cache_now_uncached,
    };
    -#endif

    /*
    - * match a cell record obtained from the cache
    + * set the key for the index entry
    */
    -#ifdef AFS_CACHING_SUPPORT
    -static cachefs_match_val_t afs_cell_cache_match(void *target,
    - const void *entry)
    +static uint16_t afs_cell_cache_get_key(const void *cookie_netfs_data,
    + void *buffer, uint16_t bufmax)
    {
    - const struct afs_cache_cell *ccell = entry;
    - struct afs_cell *cell = target;
    + const struct afs_cell *cell = cookie_netfs_data;
    + uint16_t klen;

    - _enter("{%s},{%s}", ccell->name, cell->name);
    + _enter("%p,%p,%u", cell, buffer, bufmax);

    - if (strncmp(ccell->name, cell->name, sizeof(ccell->name)) == 0) {
    - _leave(" = SUCCESS");
    - return CACHEFS_MATCH_SUCCESS;
    - }
    + klen = strlen(cell->name);
    + if (klen > bufmax)
    + return 0;

    - _leave(" = FAILED");
    - return CACHEFS_MATCH_FAILED;
    + memcpy(buffer, cell->name, klen);
    + return klen;
    }
    -#endif

    /*
    - * update a cell record in the cache
    + * provide new auxilliary cache data
    */
    -#ifdef AFS_CACHING_SUPPORT
    -static void afs_cell_cache_update(void *source, void *entry)
    +static uint16_t afs_cell_cache_get_aux(const void *cookie_netfs_data,
    + void *buffer, uint16_t bufmax)
    {
    - struct afs_cache_cell *ccell = entry;
    - struct afs_cell *cell = source;
    + const struct afs_cell *cell = cookie_netfs_data;
    + uint16_t dlen;

    - _enter("%p,%p", source, entry);
    + _enter("%p,%p,%u", cell, buffer, bufmax);

    - strncpy(ccell->name, cell->name, sizeof(ccell->name));
    + dlen = cell->vl_naddrs * sizeof(cell->vl_addrs[0]);
    + dlen = min(dlen, bufmax);
    + dlen &= ~(sizeof(cell->vl_addrs[0]) - 1);

    - memcpy(ccell->vl_servers,
    - cell->vl_addrs,
    - min(sizeof(ccell->vl_servers), sizeof(cell->vl_addrs)));
    + memcpy(buffer, cell->vl_addrs, dlen);
    + return dlen;
    +}

    +/*
    + * check that the auxiliary data indicates that the entry is still valid
    + */
    +static fscache_checkaux_t afs_cell_cache_check_aux(void *cookie_netfs_data,
    + const void *buffer,
    + uint16_t buflen)
    +{
    + _leave(" = OKAY");
    + return FSCACHE_CHECKAUX_OKAY;
    }
    -#endif
    -
    -#ifdef AFS_CACHING_SUPPORT
    -static cachefs_match_val_t afs_vlocation_cache_match(void *target,
    - const void *entry);
    -static void afs_vlocation_cache_update(void *source, void *entry);
    -
    -struct cachefs_index_def afs_vlocation_cache_index_def = {
    - .name = "vldb",
    - .data_size = sizeof(struct afs_cache_vlocation),
    - .keys[0] = { CACHEFS_INDEX_KEYS_ASCIIZ, 64 },
    - .match = afs_vlocation_cache_match,
    - .update = afs_vlocation_cache_update,
    -};
    -#endif

    +/*****************************************************************************/
    /*
    - * match a VLDB record stored in the cache
    - * - may also load target from entry
    + * set the key for the index entry
    */
    -#ifdef AFS_CACHING_SUPPORT
    -static cachefs_match_val_t afs_vlocation_cache_match(void *target,
    - const void *entry)
    +static uint16_t afs_vlocation_cache_get_key(const void *cookie_netfs_data,
    + void *buffer, uint16_t bufmax)
    {
    - const struct afs_cache_vlocation *vldb = entry;
    - struct afs_vlocation *vlocation = target;
    + const struct afs_vlocation *vlocation = cookie_netfs_data;
    + uint16_t klen;

    - _enter("{%s},{%s}", vlocation->vldb.name, vldb->name);
    + _enter("{%s},%p,%u", vlocation->vldb.name, buffer, bufmax);

    - if (strncmp(vlocation->vldb.name, vldb->name, sizeof(vldb->name)) == 0
    - ) {
    - if (!vlocation->valid ||
    - vlocation->vldb.rtime == vldb->rtime
    + klen = strnlen(vlocation->vldb.name, sizeof(vlocation->vldb.name));
    + if (klen > bufmax)
    + return 0;
    +
    + memcpy(buffer, vlocation->vldb.name, klen);
    +
    + _leave(" = %u", klen);
    + return klen;
    +}
    +
    +/*
    + * provide new auxiliary cache data
    + */
    +static uint16_t afs_vlocation_cache_get_aux(const void *cookie_netfs_data,
    + void *buffer, uint16_t bufmax)
    +{
    + const struct afs_vlocation *vlocation = cookie_netfs_data;
    + uint16_t dlen;
    +
    + _enter("{%s},%p,%u", vlocation->vldb.name, buffer, bufmax);
    +
    + dlen = sizeof(struct afs_cache_vlocation);
    + dlen -= offsetof(struct afs_cache_vlocation, nservers);
    + if (dlen > bufmax)
    + return 0;
    +
    + memcpy(buffer, (uint8_t *)&vlocation->vldb.nservers, dlen);
    +
    + _leave(" = %u", dlen);
    + return dlen;
    +}
    +
    +/*
    + * check that the auxiliary data indicates that the entry is still valid
    + */
    +static fscache_checkaux_t afs_vlocation_cache_check_aux(void *cookie_netfs_data,
    + const void *buffer,
    + uint16_t buflen)
    +{
    + const struct afs_cache_vlocation *cvldb;
    + struct afs_vlocation *vlocation = cookie_netfs_data;
    + uint16_t dlen;
    +
    + _enter("{%s},%p,%u", vlocation->vldb.name, buffer, buflen);
    +
    + /* check the size of the data is what we're expecting */
    + dlen = sizeof(struct afs_cache_vlocation);
    + dlen -= offsetof(struct afs_cache_vlocation, nservers);
    + if (dlen != buflen)
    + return FSCACHE_CHECKAUX_OBSOLETE;
    +
    + cvldb = container_of(buffer, struct afs_cache_vlocation, nservers);
    +
    + /* if what's on disk is more valid than what's in memory, then use the
    + * VL record from the cache */
    + if (!vlocation->valid || vlocation->vldb.rtime == cvldb->rtime) {
    + memcpy((uint8_t *)&vlocation->vldb.nservers, buffer, dlen);
    + vlocation->valid = 1;
    + _leave(" = SUCCESS [c->m]");
    + return FSCACHE_CHECKAUX_OKAY;
    + }
    +
    + /* need to update the cache if the cached info differs */
    + if (memcmp(&vlocation->vldb, buffer, dlen) != 0) {
    + /* delete if the volume IDs for this name differ */
    + if (memcmp(&vlocation->vldb.vid, &cvldb->vid,
    + sizeof(cvldb->vid)) != 0
    ) {
    - vlocation->vldb = *vldb;
    - vlocation->valid = 1;
    - _leave(" = SUCCESS [c->m]");
    - return CACHEFS_MATCH_SUCCESS;
    - } else if (memcmp(&vlocation->vldb, vldb, sizeof(*vldb)) != 0) {
    - /* delete if VIDs for this name differ */
    - if (memcmp(&vlocation->vldb.vid,
    - &vldb->vid,
    - sizeof(vldb->vid)) != 0) {
    - _leave(" = DELETE");
    - return CACHEFS_MATCH_SUCCESS_DELETE;
    - }
    -
    - _leave(" = UPDATE");
    - return CACHEFS_MATCH_SUCCESS_UPDATE;
    - } else {
    - _leave(" = SUCCESS");
    - return CACHEFS_MATCH_SUCCESS;
    + _leave(" = OBSOLETE");
    + return FSCACHE_CHECKAUX_OBSOLETE;
    }
    +
    + _leave(" = UPDATE");
    + return FSCACHE_CHECKAUX_NEEDS_UPDATE;
    }

    - _leave(" = FAILED");
    - return CACHEFS_MATCH_FAILED;
    + _leave(" = OKAY");
    + return FSCACHE_CHECKAUX_OKAY;
    }
    -#endif
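The check_aux routine above collapses the old four-way cachefs match result into fscache's three outcomes: identical records are okay, same-volume differences need an update, and a changed volume ID makes the entry obsolete. A userspace sketch of that decision ladder, using a simplified two-field record in place of `struct afs_cache_vlocation`:

```c
#include <string.h>

enum checkaux { CHECKAUX_OKAY, CHECKAUX_NEEDS_UPDATE, CHECKAUX_OBSOLETE };

/* Simplified record: volume ID plus "everything else". */
struct rec { int vid; int other; };

/* Mirror of the decision ladder in afs_vlocation_cache_check_aux():
 * - identical in-memory and on-disk records -> OKAY
 * - same volume ID but other fields differ  -> NEEDS_UPDATE
 * - volume ID changed                       -> OBSOLETE */
static enum checkaux check_aux(const struct rec *mem, const struct rec *disk)
{
	if (memcmp(mem, disk, sizeof(*mem)) == 0)
		return CHECKAUX_OKAY;
	if (mem->vid != disk->vid)
		return CHECKAUX_OBSOLETE;
	return CHECKAUX_NEEDS_UPDATE;
}
```

The ordering matters: the ID comparison is only reached once a byte-for-byte match has been ruled out, just as in the patch.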

    +/*****************************************************************************/
    /*
    - * update a VLDB record stored in the cache
    + * set the key for the volume index entry
    */
    -#ifdef AFS_CACHING_SUPPORT
    -static void afs_vlocation_cache_update(void *source, void *entry)
    +static uint16_t afs_volume_cache_get_key(const void *cookie_netfs_data,
    + void *buffer, uint16_t bufmax)
    {
    - struct afs_cache_vlocation *vldb = entry;
    - struct afs_vlocation *vlocation = source;
    + const struct afs_volume *volume = cookie_netfs_data;
    + uint16_t klen;
    +
    + _enter("{%u},%p,%u", volume->type, buffer, bufmax);

    - _enter("");
    + klen = sizeof(volume->type);
    + if (klen > bufmax)
    + return 0;
    +
    + memcpy(buffer, &volume->type, sizeof(volume->type));
    +
    + _leave(" = %u", klen);
    + return klen;

    - *vldb = vlocation->vldb;
    }
    -#endif
    -
    -#ifdef AFS_CACHING_SUPPORT
    -static cachefs_match_val_t afs_volume_cache_match(void *target,
    - const void *entry);
    -static void afs_volume_cache_update(void *source, void *entry);
    -
    -struct cachefs_index_def afs_volume_cache_index_def = {
    - .name = "volume",
    - .data_size = sizeof(struct afs_cache_vhash),
    - .keys[0] = { CACHEFS_INDEX_KEYS_BIN, 1 },
    - .keys[1] = { CACHEFS_INDEX_KEYS_BIN, 1 },
    - .match = afs_volume_cache_match,
    - .update = afs_volume_cache_update,
    -};
    -#endif

    +/*****************************************************************************/
    /*
    - * match a volume hash record stored in the cache
    + * set the key for the index entry
    */
    -#ifdef AFS_CACHING_SUPPORT
    -static cachefs_match_val_t afs_volume_cache_match(void *target,
    - const void *entry)
    +static uint16_t afs_vnode_cache_get_key(const void *cookie_netfs_data,
    + void *buffer, uint16_t bufmax)
    {
    - const struct afs_cache_vhash *vhash = entry;
    - struct afs_volume *volume = target;
    + const struct afs_vnode *vnode = cookie_netfs_data;
    + uint16_t klen;

    - _enter("{%u},{%u}", volume->type, vhash->vtype);
    + _enter("{%x,%x,%llx},%p,%u",
    + vnode->fid.vnode, vnode->fid.unique, vnode->status.data_version,
    + buffer, bufmax);

    - if (volume->type == vhash->vtype) {
    - _leave(" = SUCCESS");
    - return CACHEFS_MATCH_SUCCESS;
    - }
    + klen = sizeof(vnode->fid.vnode);
    + if (klen > bufmax)
    + return 0;

    - _leave(" = FAILED");
    - return CACHEFS_MATCH_FAILED;
    + memcpy(buffer, &vnode->fid.vnode, sizeof(vnode->fid.vnode));
    +
    + _leave(" = %u", klen);
    + return klen;
    }
    -#endif

    /*
    - * update a volume hash record stored in the cache
    + * provide updated file attributes
    */
    -#ifdef AFS_CACHING_SUPPORT
    -static void afs_volume_cache_update(void *source, void *entry)
    +static void afs_vnode_cache_get_attr(const void *cookie_netfs_data,
    + uint64_t *size)
    {
    - struct afs_cache_vhash *vhash = entry;
    - struct afs_volume *volume = source;
    + const struct afs_vnode *vnode = cookie_netfs_data;

    - _enter("");
    + _enter("{%x,%x,%llx},",
    + vnode->fid.vnode, vnode->fid.unique,
    + vnode->status.data_version);

    - vhash->vtype = volume->type;
    + *size = vnode->status.size;
    }
    -#endif
    -
    -#ifdef AFS_CACHING_SUPPORT
    -static cachefs_match_val_t afs_vnode_cache_match(void *target,
    - const void *entry);
    -static void afs_vnode_cache_update(void *source, void *entry);
    -
    -struct cachefs_index_def afs_vnode_cache_index_def = {
    - .name = "vnode",
    - .data_size = sizeof(struct afs_cache_vnode),
    - .keys[0] = { CACHEFS_INDEX_KEYS_BIN, 4 },
    - .match = afs_vnode_cache_match,
    - .update = afs_vnode_cache_update,
    -};
    -#endif

    /*
    - * match a vnode record stored in the cache
    + * provide new auxiliary cache data
    */
    -#ifdef AFS_CACHING_SUPPORT
    -static cachefs_match_val_t afs_vnode_cache_match(void *target,
    - const void *entry)
    +static uint16_t afs_vnode_cache_get_aux(const void *cookie_netfs_data,
    + void *buffer, uint16_t bufmax)
    {
    - const struct afs_cache_vnode *cvnode = entry;
    - struct afs_vnode *vnode = target;
    -
    - _enter("{%x,%x,%Lx},{%x,%x,%Lx}",
    - vnode->fid.vnode,
    - vnode->fid.unique,
    - vnode->status.version,
    - cvnode->vnode_id,
    - cvnode->vnode_unique,
    - cvnode->data_version);
    -
    - if (vnode->fid.vnode != cvnode->vnode_id) {
    - _leave(" = FAILED");
    - return CACHEFS_MATCH_FAILED;
    + const struct afs_vnode *vnode = cookie_netfs_data;
    + uint16_t dlen;
    +
    + _enter("{%x,%x,%Lx},%p,%u",
    + vnode->fid.vnode, vnode->fid.unique, vnode->status.data_version,
    + buffer, bufmax);
    +
    + dlen = sizeof(vnode->fid.unique) + sizeof(vnode->status.data_version);
    + if (dlen > bufmax)
    + return 0;
    +
    + memcpy(buffer, &vnode->fid.unique, sizeof(vnode->fid.unique));
    + buffer += sizeof(vnode->fid.unique);
    + memcpy(buffer, &vnode->status.data_version,
    + sizeof(vnode->status.data_version));
    +
    + _leave(" = %u", dlen);
    + return dlen;
    +}
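The aux data for a vnode is just the uniquifier and data version laid end to end, so the matching check_aux can compare the two fields at fixed offsets. A minimal userspace sketch of the packing side (concrete `uint32_t`/`uint64_t` widths are assumptions here; the kernel uses the real field types):

```c
#include <stdint.h>
#include <string.h>

/* Pack a vnode's uniquifier and data version into an aux buffer the
 * way afs_vnode_cache_get_aux() does; returns the bytes written, or
 * 0 if the buffer is too small to hold both fields. */
static uint16_t pack_aux(uint32_t unique, uint64_t version,
			 void *buffer, uint16_t bufmax)
{
	uint16_t dlen = sizeof(unique) + sizeof(version);

	if (dlen > bufmax)
		return 0;
	memcpy(buffer, &unique, sizeof(unique));
	memcpy((char *)buffer + sizeof(unique), &version, sizeof(version));
	return dlen;
}
```

Because check_aux first rejects any buffer whose length differs from this fixed `dlen`, the unpacking side can safely `memcpy` each field back out at the same offsets.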
    +
    +/*
    + * check that the auxiliary data indicates that the entry is still valid
    + */
    +static fscache_checkaux_t afs_vnode_cache_check_aux(void *cookie_netfs_data,
    + const void *buffer,
    + uint16_t buflen)
    +{
    + struct afs_vnode *vnode = cookie_netfs_data;
    + uint16_t dlen;
    +
    + _enter("{%x,%x,%llx},%p,%u",
    + vnode->fid.vnode, vnode->fid.unique, vnode->status.data_version,
    + buffer, buflen);
    +
    + /* check the size of the data is what we're expecting */
    + dlen = sizeof(vnode->fid.unique) + sizeof(vnode->status.data_version);
    + if (dlen != buflen) {
    + _leave(" = OBSOLETE [len %hx != %hx]", dlen, buflen);
    + return FSCACHE_CHECKAUX_OBSOLETE;
    }

    - if (vnode->fid.unique != cvnode->vnode_unique ||
    - vnode->status.version != cvnode->data_version) {
    - _leave(" = DELETE");
    - return CACHEFS_MATCH_SUCCESS_DELETE;
    + if (memcmp(buffer,
    + &vnode->fid.unique,
    + sizeof(vnode->fid.unique)
    + ) != 0) {
    + unsigned unique;
    +
    + memcpy(&unique, buffer, sizeof(unique));
    +
    + _leave(" = OBSOLETE [uniq %x != %x]",
    + unique, vnode->fid.unique);
    + return FSCACHE_CHECKAUX_OBSOLETE;
    + }
    +
    + if (memcmp(buffer + sizeof(vnode->fid.unique),
    + &vnode->status.data_version,
    + sizeof(vnode->status.data_version)
    + ) != 0) {
    + afs_dataversion_t version;
    +
    + memcpy(&version, buffer + sizeof(vnode->fid.unique),
    + sizeof(version));
    +
    + _leave(" = OBSOLETE [vers %llx != %llx]",
    + version, vnode->status.data_version);
    + return FSCACHE_CHECKAUX_OBSOLETE;
    }

    _leave(" = SUCCESS");
    - return CACHEFS_MATCH_SUCCESS;
    + return FSCACHE_CHECKAUX_OKAY;
    }
    -#endif

    /*
    - * update a vnode record stored in the cache
    + * indication that the cookie is no longer cached
    + * - this function is called when the backing store currently caching a cookie
    + * is removed
    + * - the netfs should use this to clean up any markers indicating cached pages
    + * - this is mandatory for any object that may have data
    */
    -#ifdef AFS_CACHING_SUPPORT
    -static void afs_vnode_cache_update(void *source, void *entry)
    +static void afs_vnode_cache_now_uncached(void *cookie_netfs_data)
    {
    - struct afs_cache_vnode *cvnode = entry;
    - struct afs_vnode *vnode = source;
    + struct afs_vnode *vnode = cookie_netfs_data;
    + struct pagevec pvec;
    + pgoff_t first;
    + int loop, nr_pages;
    +
    + _enter("{%x,%x,%Lx}",
    + vnode->fid.vnode, vnode->fid.unique, vnode->status.data_version);
    +
    + pagevec_init(&pvec, 0);
    + first = 0;
    +
    + for (;;) {
    + /* grab a bunch of pages to clean */
    + nr_pages = pagevec_lookup(&pvec, vnode->vfs_inode.i_mapping,
    + first,
    + PAGEVEC_SIZE - pagevec_count(&pvec));
    + if (!nr_pages)
    + break;

    - _enter("");
    + for (loop = 0; loop < nr_pages; loop++)
    + ClearPageFsCache(pvec.pages[loop]);
    +
    + first = pvec.pages[nr_pages - 1]->index + 1;
    +
    + pvec.nr = nr_pages;
    + pagevec_release(&pvec);
    + cond_resched();
    + }

    - cvnode->vnode_id = vnode->fid.vnode;
    - cvnode->vnode_unique = vnode->fid.unique;
    - cvnode->data_version = vnode->status.version;
    + _leave("");
    }
    -#endif
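The now_uncached loop above uses the standard pagevec pattern: grab up to a batch of pages starting at `first`, process them, then resume one past the last index seen. A userspace sketch of that batched walk, with a hypothetical `lookup()` standing in for `pagevec_lookup()`:

```c
#include <stddef.h>

#define BATCH 16	/* stand-in for PAGEVEC_SIZE */

/* Hypothetical stand-in for pagevec_lookup(): fill `out` with up to
 * `max` indices >= first from a contiguous 0..total-1 range, returning
 * how many were found. */
static int lookup(size_t first, size_t total, size_t *out, int max)
{
	int n = 0;

	while (first < total && n < max)
		out[n++] = first++;
	return n;
}

/* Mirror of the loop shape in afs_vnode_cache_now_uncached(): visit
 * every index exactly once, a batch at a time. */
static size_t walk_all(size_t total)
{
	size_t idx[BATCH], first = 0, visited = 0;
	int n, i;

	for (;;) {
		n = lookup(first, total, idx, BATCH);
		if (!n)
			break;		/* no more pages */
		for (i = 0; i < n; i++)
			visited++;	/* ClearPageFsCache() in the kernel */
		first = idx[n - 1] + 1;	/* resume after the last one seen */
	}
	return visited;
}
```

Advancing `first` from the last page's index (rather than by the batch size) is what keeps the walk correct even when the mapping is sparse.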
    diff --git a/fs/afs/cache.h b/fs/afs/cache.h
    index 36a3642..b985052 100644
    --- a/fs/afs/cache.h
    +++ b/fs/afs/cache.h
    @@ -1,6 +1,6 @@
    /* AFS local cache management interface
    *
    - * Copyright (C) 2002 Red Hat, Inc. All Rights Reserved.
    + * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
    * Written by David Howells (dhowells@redhat.com)
    *
    * This program is free software; you can redistribute it and/or
    @@ -9,15 +9,4 @@
    * 2 of the License, or (at your option) any later version.
    */

    -#ifndef AFS_CACHE_H
    -#define AFS_CACHE_H
    -
    -#undef AFS_CACHING_SUPPORT
    -
    -#include <linux/mm.h>
    -#ifdef AFS_CACHING_SUPPORT
    -#include <linux/cachefs.h>
    -#endif
    -#include "types.h"
    -
    -#endif /* AFS_CACHE_H */
    +#include <linux/fscache.h>
    diff --git a/fs/afs/cell.c b/fs/afs/cell.c
    index 175a567..950df56 100644
    --- a/fs/afs/cell.c
    +++ b/fs/afs/cell.c
    @@ -145,12 +145,11 @@ struct afs_cell *afs_cell_create(const char *name, char *vllist)
    if (ret < 0)
    goto error;

    -#ifdef AFS_CACHING_SUPPORT
    - /* put it up for caching */
    - cachefs_acquire_cookie(afs_cache_netfs.primary_index,
    - &afs_vlocation_cache_index_def,
    - cell,
    - &cell->cache);
    +#ifdef CONFIG_AFS_FSCACHE
    + /* put it up for caching (this never returns an error) */
    + cell->cache = fscache_acquire_cookie(afs_cache_netfs.primary_index,
    + &afs_cell_cache_index_def,
    + cell);
    #endif

    /* add to the cell lists */
    @@ -353,10 +352,7 @@ static void afs_cell_destroy(struct afs_cell *cell)
    list_del_init(&cell->proc_link);
    up_write(&afs_proc_cells_sem);

    -#ifdef AFS_CACHING_SUPPORT
    - cachefs_relinquish_cookie(cell->cache, 0);
    -#endif
    -
    + fscache_relinquish_cookie(cell->cache, 0);
    key_put(cell->anonymous_key);
    kfree(cell);

    diff --git a/fs/afs/file.c b/fs/afs/file.c
    index 1323df4..276ed86 100644
    --- a/fs/afs/file.c
    +++ b/fs/afs/file.c
    @@ -24,6 +24,9 @@ static int afs_releasepage(struct page *page, gfp_t gfp_flags);
    static int afs_launder_page(struct page *page);
    static int afs_mmap(struct file *file, struct vm_area_struct *vma);

    +static int afs_readpages(struct file *filp, struct address_space *mapping,
    + struct list_head *pages, unsigned nr_pages);
    +
    const struct file_operations afs_file_operations = {
    .open = afs_open,
    .release = afs_release,
    @@ -47,6 +50,7 @@ const struct inode_operations afs_file_inode_operations = {

    const struct address_space_operations afs_fs_aops = {
    .readpage = afs_readpage,
    + .readpages = afs_readpages,
    .set_page_dirty = afs_set_page_dirty,
    .launder_page = afs_launder_page,
    .releasepage = afs_releasepage,
    @@ -107,37 +111,18 @@ int afs_release(struct inode *inode, struct file *file)
    /*
    * deal with notification that a page was read from the cache
    */
    -#ifdef AFS_CACHING_SUPPORT
    -static void afs_readpage_read_complete(void *cookie_data,
    - struct page *page,
    - void *data,
    - int error)
    +static void afs_file_readpage_read_complete(struct page *page,
    + void *data,
    + int error)
    {
    - _enter("%p,%p,%p,%d", cookie_data, page, data, error);
    + _enter("%p,%p,%d", page, data, error);

    - if (error)
    - SetPageError(page);
    - else
    + /* if the read completes with an error, we just unlock the page and let
    + * the VM reissue the readpage */
    + if (!error)
    SetPageUptodate(page);
    unlock_page(page);
    -
    -}
    -#endif
    -
    -/*
    - * deal with notification that a page was written to the cache
    - */
    -#ifdef AFS_CACHING_SUPPORT
    -static void afs_readpage_write_complete(void *cookie_data,
    - struct page *page,
    - void *data,
    - int error)
    -{
    - _enter("%p,%p,%p,%d", cookie_data, page, data, error);
    -
    - unlock_page(page);
    }
    -#endif

    /*
    * AFS read page from file, directory or symlink
    @@ -167,30 +152,27 @@ static int afs_readpage(struct file *file, struct page *page)
    if (test_bit(AFS_VNODE_DELETED, &vnode->flags))
    goto error;

    -#ifdef AFS_CACHING_SUPPORT
    /* is it cached? */
    - ret = cachefs_read_or_alloc_page(vnode->cache,
    + ret = fscache_read_or_alloc_page(vnode->cache,
    page,
    afs_file_readpage_read_complete,
    NULL,
    GFP_KERNEL);
    -#else
    - ret = -ENOBUFS;
    -#endif
    -
    switch (ret) {
    - /* read BIO submitted and wb-journal entry found */
    - case 1:
    - BUG(); // TODO - handle wb-journal match
    -
    /* read BIO submitted (page in cache) */
    case 0:
    break;

    - /* no page available in cache */
    - case -ENOBUFS:
    + /* page not yet cached */
    case -ENODATA:
    + _debug("cache said ENODATA");
    + goto go_on;
    +
    + /* page will not be cached */
    + case -ENOBUFS:
    + _debug("cache said ENOBUFS");
    default:
    + go_on:
    offset = page->index << PAGE_CACHE_SHIFT;
    len = min_t(size_t, i_size_read(inode) - offset, PAGE_SIZE);

    @@ -204,27 +186,21 @@ static int afs_readpage(struct file *file, struct page *page)
    set_bit(AFS_VNODE_DELETED, &vnode->flags);
    ret = -ESTALE;
    }
    -#ifdef AFS_CACHING_SUPPORT
    - cachefs_uncache_page(vnode->cache, page);
    -#endif
    +
    + fscache_uncache_page(vnode->cache, page);
    + BUG_ON(PageFsCache(page));
    goto error;
    }

    SetPageUptodate(page);

    -#ifdef AFS_CACHING_SUPPORT
    - if (cachefs_write_page(vnode->cache,
    - page,
    - afs_file_readpage_write_complete,
    - NULL,
    - GFP_KERNEL) != 0
    - ) {
    - cachefs_uncache_page(vnode->cache, page);
    - unlock_page(page);
    + /* send the page to the cache */
    + if (PageFsCache(page) &&
    + fscache_write_page(vnode->cache, page, GFP_KERNEL) != 0) {
    + fscache_uncache_page(vnode->cache, page);
    + BUG_ON(PageFsCache(page));
    }
    -#else
    unlock_page(page);
    -#endif
    }

    _leave(" = 0");
    @@ -238,34 +214,55 @@ error:
    }

    /*
    - * invalidate part or all of a page
    + * read a set of pages
    */
    -static void afs_invalidatepage(struct page *page, unsigned long offset)
    +static int afs_readpages(struct file *file, struct address_space *mapping,
    + struct list_head *pages, unsigned nr_pages)
    {
    - int ret = 1;
    + struct afs_vnode *vnode;
    + int ret = 0;

    - _enter("{%lu},%lu", page->index, offset);
    + _enter(",{%lu},,%d", mapping->host->i_ino, nr_pages);

    - BUG_ON(!PageLocked(page));
    + vnode = AFS_FS_I(mapping->host);
    + if (test_bit(AFS_VNODE_DELETED, &vnode->flags)) {
    + _leave(" = -ESTALE");
    + return -ESTALE;
    + }

    - if (PagePrivate(page)) {
    - /* We release buffers only if the entire page is being
    - * invalidated.
    - * The get_block cached value has been unconditionally
    - * invalidated, so real IO is not possible anymore.
    - */
    - if (offset == 0) {
    - BUG_ON(!PageLocked(page));
    -
    - ret = 0;
    - if (!PageWriteback(page))
    - ret = page->mapping->a_ops->releasepage(page,
    - 0);
    - /* possibly should BUG_ON(!ret); - neilb */
    - }
    + /* attempt to read as many of the pages as possible */
    + ret = fscache_read_or_alloc_pages(vnode->cache,
    + mapping,
    + pages,
    + &nr_pages,
    + afs_file_readpage_read_complete,
    + NULL,
    + mapping_gfp_mask(mapping));
    +
    + switch (ret) {
    + /* all pages are being read from the cache */
    + case 0:
    + BUG_ON(!list_empty(pages));
    + BUG_ON(nr_pages != 0);
    + _leave(" = 0 [reading all]");
    + return 0;
    +
    + /* there were pages that couldn't be read from the cache */
    + case -ENODATA:
    + case -ENOBUFS:
    + break;
    +
    + /* other error */
    + default:
    + _leave(" = %d", ret);
    + return ret;
    }

    - _leave(" = %d", ret);
    + /* load the missing pages from the network */
    + ret = read_cache_pages(mapping, pages, (void *) afs_readpage, file);
    +
    + _leave(" = %d [netting]", ret);
    + return ret;
    }

    /*
    @@ -279,27 +276,80 @@ static int afs_launder_page(struct page *page)
    }

    /*
    - * release a page and cleanup its private data
    + * invalidate part or all of a page
    + * - release a page and clean up its private data if offset is 0 (indicating
    + * the entire page)
    + */
    +static void afs_invalidatepage(struct page *page, unsigned long offset)
    +{
    + struct afs_writeback *wb = (struct afs_writeback *) page_private(page);
    + struct afs_vnode *vnode = AFS_FS_I(page->mapping->host);
    +
    + _enter("{%lu},%lu", page->index, offset);
    +
    + BUG_ON(!PageLocked(page));
    +
    + /* we clean up only if the entire page is being invalidated */
    + if (offset == 0) {
    + if (PageFsCache(page)) {
    + wait_on_page_fscache_write(page);
    + fscache_uncache_page(vnode->cache, page);
    + ClearPageFsCache(page);
    + }
    +
    + if (PagePrivate(page)) {
    + if (wb && !PageWriteback(page)) {
    + set_page_private(page, 0);
    + afs_put_writeback(wb);
    + }
    +
    + if (!page_private(page))
    + ClearPagePrivate(page);
    + }
    + }
    +
    + _leave("");
    +}
    +
    +/*
    + * release a page and clean up its private state if it's not busy
    + * - return true if the page can now be released, false if not
    */
    static int afs_releasepage(struct page *page, gfp_t gfp_flags)
    {
    + struct afs_writeback *wb = (struct afs_writeback *) page_private(page);
    struct afs_vnode *vnode = AFS_FS_I(page->mapping->host);
    - struct afs_writeback *wb;

    _enter("{{%x:%u}[%lu],%lx},%x",
    vnode->fid.vid, vnode->fid.vnode, page->index, page->flags,
    gfp_flags);

    + /* deny if page is being written to the cache and the caller hasn't
    + * elected to wait */
    + if (PageFsCache(page)) {
    + if (PageFsCacheWrite(page)) {
    + if (!(gfp_flags & __GFP_WAIT)) {
    + _leave(" = F [cache busy]");
    + return 0;
    + }
    + wait_on_page_fscache_write(page);
    + }
    +
    + fscache_uncache_page(vnode->cache, page);
    + ClearPageFsCache(page);
    + }
    +
    if (PagePrivate(page)) {
    - wb = (struct afs_writeback *) page_private(page);
    - ASSERT(wb != NULL);
    - set_page_private(page, 0);
    + if (wb) {
    + set_page_private(page, 0);
    + afs_put_writeback(wb);
    + }
    ClearPagePrivate(page);
    - afs_put_writeback(wb);
    }

    - _leave(" = 0");
    - return 0;
    + /* indicate that the page can be released */
    + _leave(" = T");
    + return 1;
    }

    /*
    diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c
    index 04584c0..a468f2d 100644
    --- a/fs/afs/fsclient.c
    +++ b/fs/afs/fsclient.c
    @@ -287,6 +287,7 @@ int afs_fs_fetch_file_status(struct afs_server *server,
    call->reply2 = volsync;
    call->service_id = FS_SERVICE;
    call->port = htons(AFS_FS_PORT);
    + call->operation_ID = htonl(FSFETCHSTATUS);

    /* marshall the parameters */
    bp = call->request;
    @@ -316,7 +317,7 @@ static int afs_deliver_fs_fetch_data(struct afs_call *call,
    case 0:
    call->offset = 0;
    call->unmarshall++;
    - if (call->operation_ID != FSFETCHDATA64) {
    + if (call->operation_ID != htonl(FSFETCHDATA64)) {
    call->unmarshall++;
    goto no_msw;
    }
    @@ -464,7 +465,7 @@ static int afs_fs_fetch_data64(struct afs_server *server,
    call->reply3 = buffer;
    call->service_id = FS_SERVICE;
    call->port = htons(AFS_FS_PORT);
    - call->operation_ID = FSFETCHDATA64;
    + call->operation_ID = htonl(FSFETCHDATA64);

    /* marshall the parameters */
    bp = call->request;
    @@ -509,7 +510,7 @@ int afs_fs_fetch_data(struct afs_server *server,
    call->reply3 = buffer;
    call->service_id = FS_SERVICE;
    call->port = htons(AFS_FS_PORT);
    - call->operation_ID = FSFETCHDATA;
    + call->operation_ID = htonl(FSFETCHDATA);

    /* marshall the parameters */
    bp = call->request;
    @@ -577,6 +578,7 @@ int afs_fs_give_up_callbacks(struct afs_server *server,

    call->service_id = FS_SERVICE;
    call->port = htons(AFS_FS_PORT);
    + call->operation_ID = htonl(FSGIVEUPCALLBACKS);

    /* marshall the parameters */
    bp = call->request;
    @@ -683,10 +685,11 @@ int afs_fs_create(struct afs_server *server,
    call->reply4 = newcb;
    call->service_id = FS_SERVICE;
    call->port = htons(AFS_FS_PORT);
    + call->operation_ID = htonl(S_ISDIR(mode) ? FSMAKEDIR : FSCREATEFILE);

    /* marshall the parameters */
    bp = call->request;
    - *bp++ = htonl(S_ISDIR(mode) ? FSMAKEDIR : FSCREATEFILE);
    + *bp++ = call->operation_ID;
    *bp++ = htonl(vnode->fid.vid);
    *bp++ = htonl(vnode->fid.vnode);
    *bp++ = htonl(vnode->fid.unique);
    @@ -772,10 +775,11 @@ int afs_fs_remove(struct afs_server *server,
    call->reply = vnode;
    call->service_id = FS_SERVICE;
    call->port = htons(AFS_FS_PORT);
    + call->operation_ID = htonl(isdir ? FSREMOVEDIR : FSREMOVEFILE);

    /* marshall the parameters */
    bp = call->request;
    - *bp++ = htonl(isdir ? FSREMOVEDIR : FSREMOVEFILE);
    + *bp++ = call->operation_ID;
    *bp++ = htonl(vnode->fid.vid);
    *bp++ = htonl(vnode->fid.vnode);
    *bp++ = htonl(vnode->fid.unique);
    @@ -857,6 +861,7 @@ int afs_fs_link(struct afs_server *server,
    call->reply2 = vnode;
    call->service_id = FS_SERVICE;
    call->port = htons(AFS_FS_PORT);
    + call->operation_ID = htonl(FSLINK);

    /* marshall the parameters */
    bp = call->request;
    @@ -954,6 +959,7 @@ int afs_fs_symlink(struct afs_server *server,
    call->reply3 = newstatus;
    call->service_id = FS_SERVICE;
    call->port = htons(AFS_FS_PORT);
    + call->operation_ID = htonl(FSSYMLINK);

    /* marshall the parameters */
    bp = call->request;
    @@ -1062,6 +1068,7 @@ int afs_fs_rename(struct afs_server *server,
    call->reply2 = new_dvnode;
    call->service_id = FS_SERVICE;
    call->port = htons(AFS_FS_PORT);
    + call->operation_ID = htonl(FSRENAME);

    /* marshall the parameters */
    bp = call->request;
    @@ -1178,6 +1185,7 @@ static int afs_fs_store_data64(struct afs_server *server,
    call->last_to = to;
    call->send_pages = true;
    call->store_version = vnode->status.data_version + 1;
    + call->operation_ID = htonl(FSSTOREDATA64);

    /* marshall the parameters */
    bp = call->request;
    @@ -1255,6 +1263,7 @@ int afs_fs_store_data(struct afs_server *server, struct afs_writeback *wb,
    call->last_to = to;
    call->send_pages = true;
    call->store_version = vnode->status.data_version + 1;
    + call->operation_ID = htonl(FSSTOREDATA);

    /* marshall the parameters */
    bp = call->request;
    @@ -1303,7 +1312,8 @@ static int afs_deliver_fs_store_status(struct afs_call *call,

    /* unmarshall the reply once we've received all of it */
    store_version = NULL;
    - if (call->operation_ID == FSSTOREDATA)
    + if (call->operation_ID == htonl(FSSTOREDATA) ||
    + call->operation_ID == htonl(FSSTOREDATA64))
    store_version = &call->store_version;

    bp = call->buffer;
    @@ -1365,7 +1375,7 @@ static int afs_fs_setattr_size64(struct afs_server *server, struct key *key,
    call->service_id = FS_SERVICE;
    call->port = htons(AFS_FS_PORT);
    call->store_version = vnode->status.data_version + 1;
    - call->operation_ID = FSSTOREDATA;
    + call->operation_ID = htonl(FSSTOREDATA64);

    /* marshall the parameters */
    bp = call->request;
    @@ -1416,7 +1426,7 @@ static int afs_fs_setattr_size(struct afs_server *server, struct key *key,
    call->service_id = FS_SERVICE;
    call->port = htons(AFS_FS_PORT);
    call->store_version = vnode->status.data_version + 1;
    - call->operation_ID = FSSTOREDATA;
    + call->operation_ID = htonl(FSSTOREDATA);

    /* marshall the parameters */
    bp = call->request;
    @@ -1462,7 +1472,7 @@ int afs_fs_setattr(struct afs_server *server, struct key *key,
    call->reply = vnode;
    call->service_id = FS_SERVICE;
    call->port = htons(AFS_FS_PORT);
    - call->operation_ID = FSSTORESTATUS;
    + call->operation_ID = htonl(FSSTORESTATUS);

    /* marshall the parameters */
    bp = call->request;
    @@ -1742,6 +1752,7 @@ int afs_fs_get_volume_status(struct afs_server *server,
    call->reply3 = tmpbuf;
    call->service_id = FS_SERVICE;
    call->port = htons(AFS_FS_PORT);
    + call->operation_ID = htonl(FSGETVOLUMESTATUS);

    /* marshall the parameters */
    bp = call->request;
    @@ -1828,6 +1839,7 @@ int afs_fs_set_lock(struct afs_server *server,
    call->reply = vnode;
    call->service_id = FS_SERVICE;
    call->port = htons(AFS_FS_PORT);
    + call->operation_ID = htonl(FSSETLOCK);

    /* marshall the parameters */
    bp = call->request;
    @@ -1861,6 +1873,7 @@ int afs_fs_extend_lock(struct afs_server *server,
    call->reply = vnode;
    call->service_id = FS_SERVICE;
    call->port = htons(AFS_FS_PORT);
    + call->operation_ID = htonl(FSEXTENDLOCK);

    /* marshall the parameters */
    bp = call->request;
    @@ -1893,6 +1906,7 @@ int afs_fs_release_lock(struct afs_server *server,
    call->reply = vnode;
    call->service_id = FS_SERVICE;
    call->port = htons(AFS_FS_PORT);
    + call->operation_ID = htonl(FSRELEASELOCK);

    /* marshall the parameters */
    bp = call->request;
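The fsclient.c changes all follow one idea: store `call->operation_ID` pre-converted with `htonl()` at call setup. The ID can then be compared directly against words read off the wire and copied straight into the marshalling buffer without a second byte-swap. A userspace sketch (FSFETCHDATA's value, 130, is the FetchData opcode from the AFS fileserver protocol):

```c
#include <arpa/inet.h>
#include <stdint.h>

#define FSFETCHDATA 130	/* AFS FetchData RPC opcode */

/* Emit a network-order operation ID into the request buffer, as the
 * patched marshalling code does with "*bp++ = call->operation_ID;" --
 * no htonl() at emit time because the stored value is already
 * network order. */
static uint32_t marshal_op(uint32_t *bp, uint32_t operation_ID_net)
{
	*bp = operation_ID_net;
	return *bp;
}
```

The same property is what lets the delivery path test `call->operation_ID != htonl(FSFETCHDATA64)` against the stored value: both sides of the comparison are in network byte order.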
    diff --git a/fs/afs/inode.c b/fs/afs/inode.c
    index d196840..0f22d56 100644
    --- a/fs/afs/inode.c
    +++ b/fs/afs/inode.c
    @@ -61,6 +61,9 @@ static int afs_inode_map_status(struct afs_vnode *vnode, struct key *key)
    return -EBADMSG;
    }

    + if (vnode->status.size != inode->i_size)
    + fscache_attr_changed(vnode->cache);
    +
    inode->i_nlink = vnode->status.nlink;
    inode->i_uid = vnode->status.owner;
    inode->i_gid = 0;
    @@ -149,15 +152,6 @@ struct inode *afs_iget(struct super_block *sb, struct key *key,
    return inode;
    }

    -#ifdef AFS_CACHING_SUPPORT
    - /* set up caching before reading the status, as fetch-status reads the
    - * first page of symlinks to see if they're really mntpts */
    - cachefs_acquire_cookie(vnode->volume->cache,
    - NULL,
    - vnode,
    - &vnode->cache);
    -#endif
    -
    if (!status) {
    /* it's a remotely extant inode */
    set_bit(AFS_VNODE_CB_BROKEN, &vnode->flags);
    @@ -183,6 +177,13 @@ struct inode *afs_iget(struct super_block *sb, struct key *key,
    }
    }

    + /* set up caching before mapping the status, as map-status reads the
    + * first page of symlinks to see if they're really mountpoints */
    + inode->i_size = vnode->status.size;
    + vnode->cache = fscache_acquire_cookie(vnode->volume->cache,
    + &afs_vnode_cache_index_def,
    + vnode);
    +
    ret = afs_inode_map_status(vnode, key);
    if (ret < 0)
    goto bad_inode;
    @@ -196,6 +197,8 @@ struct inode *afs_iget(struct super_block *sb, struct key *key,

    /* failure */
    bad_inode:
    + fscache_relinquish_cookie(vnode->cache, 0);
    + vnode->cache = NULL;
    make_bad_inode(inode);
    unlock_new_inode(inode);
    iput(inode);
    @@ -342,10 +345,8 @@ void afs_clear_inode(struct inode *inode)
    ASSERT(list_empty(&vnode->writebacks));
    ASSERT(!vnode->cb_promised);

    -#ifdef AFS_CACHING_SUPPORT
    - cachefs_relinquish_cookie(vnode->cache, 0);
    + fscache_relinquish_cookie(vnode->cache, 0);
    vnode->cache = NULL;
    -#endif

    mutex_lock(&vnode->permits_lock);
    permits = vnode->permits;
    diff --git a/fs/afs/internal.h b/fs/afs/internal.h
    index 12afccc..65d307a 100644
    --- a/fs/afs/internal.h
    +++ b/fs/afs/internal.h
    @@ -21,6 +21,7 @@

    #include "afs.h"
    #include "afs_vl.h"
    +#include "cache.h"

    #define AFS_CELL_MAX_ADDRS 15

    @@ -194,9 +195,7 @@ struct afs_cell {
    struct key *anonymous_key; /* anonymous user key for this cell */
    struct list_head proc_link; /* /proc cell list link */
    struct proc_dir_entry *proc_dir; /* /proc dir for this cell */
    -#ifdef AFS_CACHING_SUPPORT
    - struct cachefs_cookie *cache; /* caching cookie */
    -#endif
    + struct fscache_cookie *cache; /* caching cookie */

    /* server record management */
    rwlock_t servers_lock; /* active server list lock */
    @@ -250,9 +249,7 @@ struct afs_vlocation {
    struct list_head grave; /* link in master graveyard list */
    struct list_head update; /* link in master update list */
    struct afs_cell *cell; /* cell to which volume belongs */
    -#ifdef AFS_CACHING_SUPPORT
    - struct cachefs_cookie *cache; /* caching cookie */
    -#endif
    + struct fscache_cookie *cache; /* caching cookie */
    struct afs_cache_vlocation vldb; /* volume information DB record */
    struct afs_volume *vols[3]; /* volume access record pointer (index by type) */
    wait_queue_head_t waitq; /* status change waitqueue */
    @@ -303,9 +300,7 @@ struct afs_volume {
    atomic_t usage;
    struct afs_cell *cell; /* cell to which belongs (unrefd ptr) */
    struct afs_vlocation *vlocation; /* volume location */
    -#ifdef AFS_CACHING_SUPPORT
    - struct cachefs_cookie *cache; /* caching cookie */
    -#endif
    + struct fscache_cookie *cache; /* caching cookie */
    afs_volid_t vid; /* volume ID */
    afs_voltype_t type; /* type of volume */
    char type_force; /* force volume type (suppress R/O -> R/W) */
    @@ -334,9 +329,7 @@ struct afs_vnode {
    struct afs_server *server; /* server currently supplying this file */
    struct afs_fid fid; /* the file identifier for this inode */
    struct afs_file_status status; /* AFS status info for this file */
    -#ifdef AFS_CACHING_SUPPORT
    - struct cachefs_cookie *cache; /* caching cookie */
    -#endif
    + struct fscache_cookie *cache; /* caching cookie */
    struct afs_permits *permits; /* cache of permits so far obtained */
    struct mutex permits_lock; /* lock for altering permits list */
    struct mutex validate_lock; /* lock for validating this vnode */
    @@ -429,6 +422,22 @@ struct afs_uuid {

    /*****************************************************************************/
    /*
    + * cache.c
    + */
    +#ifdef CONFIG_AFS_FSCACHE
    +extern struct fscache_netfs afs_cache_netfs;
    +extern struct fscache_cookie_def afs_cell_cache_index_def;
    +extern struct fscache_cookie_def afs_vlocation_cache_index_def;
    +extern struct fscache_cookie_def afs_volume_cache_index_def;
    +extern struct fscache_cookie_def afs_vnode_cache_index_def;
    +#else
    +#define afs_cell_cache_index_def (*(struct fscache_cookie_def *) NULL)
    +#define afs_vlocation_cache_index_def (*(struct fscache_cookie_def *) NULL)
    +#define afs_volume_cache_index_def (*(struct fscache_cookie_def *) NULL)
    +#define afs_vnode_cache_index_def (*(struct fscache_cookie_def *) NULL)
    +#endif
    +
    +/*
    * callback.c
    */
    extern void afs_init_callback_state(struct afs_server *);
    @@ -447,9 +456,6 @@ extern void afs_callback_update_kill(void);
    */
    extern struct rw_semaphore afs_proc_cells_sem;
    extern struct list_head afs_proc_cells;
    -#ifdef AFS_CACHING_SUPPORT
    -extern struct cachefs_index_def afs_cache_cell_index_def;
    -#endif

    #define afs_get_cell(C) do { atomic_inc(&(C)->usage); } while(0)
    extern int afs_cell_init(char *);
    @@ -557,9 +563,6 @@ extern void afs_clear_inode(struct inode *);
    * main.c
    */
    extern struct afs_uuid afs_uuid;
    -#ifdef AFS_CACHING_SUPPORT
    -extern struct cachefs_netfs afs_cache_netfs;
    -#endif

    /*
    * misc.c
    @@ -642,10 +645,6 @@ extern int afs_get_MAC_address(u8 *, size_t);
    /*
    * vlclient.c
    */
    -#ifdef AFS_CACHING_SUPPORT
    -extern struct cachefs_index_def afs_vlocation_cache_index_def;
    -#endif
    -
    extern int afs_vl_get_entry_by_name(struct in_addr *, struct key *,
    const char *, struct afs_cache_vlocation *,
    const struct afs_wait_mode *);
    @@ -669,12 +668,6 @@ extern void afs_vlocation_purge(void);
    /*
    * vnode.c
    */
    -#ifdef AFS_CACHING_SUPPORT
    -extern struct cachefs_index_def afs_vnode_cache_index_def;
    -#endif
    -
    -extern struct afs_timer_ops afs_vnode_cb_timed_out_ops;
    -
    static inline struct afs_vnode *AFS_FS_I(struct inode *inode)
    {
    return container_of(inode, struct afs_vnode, vfs_inode);
    @@ -716,10 +709,6 @@ extern int afs_vnode_release_lock(struct afs_vnode *, struct key *);
    /*
    * volume.c
    */
    -#ifdef AFS_CACHING_SUPPORT
    -extern struct cachefs_index_def afs_volume_cache_index_def;
    -#endif
    -
    #define afs_get_volume(V) do { atomic_inc(&(V)->usage); } while(0)

    extern void afs_put_volume(struct afs_volume *);
    diff --git a/fs/afs/main.c b/fs/afs/main.c
    index 0f60f6b..f04b838 100644
    --- a/fs/afs/main.c
    +++ b/fs/afs/main.c
    @@ -1,6 +1,6 @@
    /* AFS client file system
    *
    - * Copyright (C) 2002 Red Hat, Inc. All Rights Reserved.
    + * Copyright (C) 2002,5 Red Hat, Inc. All Rights Reserved.
    * Written by David Howells (dhowells@redhat.com)
    *
    * This program is free software; you can redistribute it and/or
    @@ -29,18 +29,6 @@ static char *rootcell;
    module_param(rootcell, charp, 0);
    MODULE_PARM_DESC(rootcell, "root AFS cell name and VL server IP addr list");

    -#ifdef AFS_CACHING_SUPPORT
    -static struct cachefs_netfs_operations afs_cache_ops = {
    - .get_page_cookie = afs_cache_get_page_cookie,
    -};
    -
    -struct cachefs_netfs afs_cache_netfs = {
    - .name = "afs",
    - .version = 0,
    - .ops = &afs_cache_ops,
    -};
    -#endif
    -
    struct afs_uuid afs_uuid;

    /*
    @@ -104,10 +92,9 @@ static int __init afs_init(void)
    if (ret < 0)
    return ret;

    -#ifdef AFS_CACHING_SUPPORT
    +#ifdef CONFIG_AFS_FSCACHE
    /* we want to be able to cache */
    - ret = cachefs_register_netfs(&afs_cache_netfs,
    - &afs_cache_cell_index_def);
    + ret = fscache_register_netfs(&afs_cache_netfs);
    if (ret < 0)
    goto error_cache;
    #endif
    @@ -142,8 +129,8 @@ error_fs:
    error_open_socket:
    error_vl_update_init:
    error_cell_init:
    -#ifdef AFS_CACHING_SUPPORT
    - cachefs_unregister_netfs(&afs_cache_netfs);
    +#ifdef CONFIG_AFS_FSCACHE
    + fscache_unregister_netfs(&afs_cache_netfs);
    error_cache:
    #endif
    afs_callback_update_kill();
    @@ -175,8 +162,8 @@ static void __exit afs_exit(void)
    afs_vlocation_purge();
    flush_scheduled_work();
    afs_cell_purge();
    -#ifdef AFS_CACHING_SUPPORT
    - cachefs_unregister_netfs(&afs_cache_netfs);
    +#ifdef CONFIG_AFS_FSCACHE
    + fscache_unregister_netfs(&afs_cache_netfs);
    #endif
    afs_proc_cleanup();
    rcu_barrier();
    diff --git a/fs/afs/mntpt.c b/fs/afs/mntpt.c
    index 6f8c96f..589d158 100644
    --- a/fs/afs/mntpt.c
    +++ b/fs/afs/mntpt.c
    @@ -173,9 +173,9 @@ static struct vfsmount *afs_mntpt_do_automount(struct dentry *mntpt)
    if (PageError(page))
    goto error;

    - buf = kmap(page);
    + buf = kmap_atomic(page, KM_USER0);
    memcpy(devname, buf, size);
    - kunmap(page);
    + kunmap_atomic(buf, KM_USER0);
    page_cache_release(page);
    page = NULL;

    diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
    index 8ccee9e..080e285 100644
    --- a/fs/afs/rxrpc.c
    +++ b/fs/afs/rxrpc.c
    @@ -335,6 +335,7 @@ int afs_make_call(struct in_addr *addr, struct afs_call *call, gfp_t gfp,

    /* create a call */
    rxcall = rxrpc_kernel_begin_call(afs_socket, &srx, call->key,
    + ntohl(call->operation_ID),
    (unsigned long) call, gfp);
    call->key = NULL;
    if (IS_ERR(rxcall)) {
    diff --git a/fs/afs/vlclient.c b/fs/afs/vlclient.c
    index 36c1306..60ddd4f 100644
    --- a/fs/afs/vlclient.c
    +++ b/fs/afs/vlclient.c
    @@ -170,6 +170,7 @@ int afs_vl_get_entry_by_name(struct in_addr *addr,
    call->reply = entry;
    call->service_id = VL_SERVICE;
    call->port = htons(AFS_VL_PORT);
    + call->operation_ID = htonl(VLGETENTRYBYNAME);

    /* marshall the parameters */
    bp = call->request;
    @@ -206,6 +207,7 @@ int afs_vl_get_entry_by_id(struct in_addr *addr,
    call->reply = entry;
    call->service_id = VL_SERVICE;
    call->port = htons(AFS_VL_PORT);
    + call->operation_ID = htonl(VLGETENTRYBYID);

    /* marshall the parameters */
    bp = call->request;
    diff --git a/fs/afs/vlocation.c b/fs/afs/vlocation.c
    index 09e3ad0..f9ec766 100644
    --- a/fs/afs/vlocation.c
    +++ b/fs/afs/vlocation.c
    @@ -281,10 +281,7 @@ static void afs_vlocation_apply_update(struct afs_vlocation *vl,

    vl->vldb = *vldb;

    -#ifdef AFS_CACHING_SUPPORT
    - /* update volume entry in local cache */
    - cachefs_update_cookie(vl->cache);
    -#endif
    + fscache_update_cookie(vl->cache);
    }

    /*
    @@ -304,12 +301,8 @@ static int afs_vlocation_fill_in_record(struct afs_vlocation *vl,
    memset(&vldb, 0, sizeof(vldb));

    /* see if we have an in-cache copy (will set vl->valid if there is) */
    -#ifdef AFS_CACHING_SUPPORT
    - cachefs_acquire_cookie(cell->cache,
    - &afs_volume_cache_index_def,
    - vlocation,
    - &vl->cache);
    -#endif
    + vl->cache = fscache_acquire_cookie(vl->cell->cache,
    + &afs_vlocation_cache_index_def, vl);

    if (vl->valid) {
    /* try to update a known volume in the cell VL databases by
    @@ -420,6 +413,9 @@ fill_in_record:
    spin_unlock(&vl->lock);
    wake_up(&vl->waitq);

    + /* update volume entry in local cache */
    + fscache_update_cookie(vl->cache);
    +
    /* schedule for regular updates */
    afs_vlocation_queue_for_updates(vl);
    goto success;
    @@ -465,7 +461,7 @@ found_in_memory:
    spin_unlock(&vl->lock);

    success:
    - _leave(" = %p",vl);
    + _leave(" = %p", vl);
    return vl;

    error_abandon:
    @@ -523,10 +519,7 @@ static void afs_vlocation_destroy(struct afs_vlocation *vl)
    {
    _enter("%p", vl);

    -#ifdef AFS_CACHING_SUPPORT
    - cachefs_relinquish_cookie(vl->cache, 0);
    -#endif
    -
    + fscache_relinquish_cookie(vl->cache, 0);
    afs_put_cell(vl->cell);
    kfree(vl);
    }
    diff --git a/fs/afs/volume.c b/fs/afs/volume.c
    index 8bab0e3..2cc3dab 100644
    --- a/fs/afs/volume.c
    +++ b/fs/afs/volume.c
    @@ -124,13 +124,9 @@ struct afs_volume *afs_volume_lookup(struct afs_mount_params *params)
    }

    /* attach the cache and volume location */
    -#ifdef AFS_CACHING_SUPPORT
    - cachefs_acquire_cookie(vlocation->cache,
    - &afs_vnode_cache_index_def,
    - volume,
    - &volume->cache);
    -#endif
    -
    + volume->cache = fscache_acquire_cookie(vlocation->cache,
    + &afs_volume_cache_index_def,
    + volume);
    afs_get_vlocation(vlocation);
    volume->vlocation = vlocation;

    @@ -194,9 +190,7 @@ void afs_put_volume(struct afs_volume *volume)
    up_write(&vlocation->cell->vl_sem);

    /* finish cleaning up the volume */
    -#ifdef AFS_CACHING_SUPPORT
    - cachefs_relinquish_cookie(volume->cache, 0);
    -#endif
    + fscache_relinquish_cookie(volume->cache, 0);
    afs_put_vlocation(vlocation);

    for (loop = volume->nservers - 1; loop >= 0; loop--)
    diff --git a/fs/afs/write.c b/fs/afs/write.c
    index dd471f0..c5ce221 100644
    --- a/fs/afs/write.c
    +++ b/fs/afs/write.c
    @@ -261,7 +261,6 @@ flush_conflicting_wb:
    _debug("reuse");
    afs_put_writeback(wb);
    set_page_private(page, 0);
    - ClearPagePrivate(page);
    goto try_again;
    }

    @@ -694,7 +693,6 @@ void afs_pages_written_back(struct afs_vnode *vnode, struct afs_call *call)
    end_page_writeback(page);
    if (page_private(page) == (unsigned long) wb) {
    set_page_private(page, 0);
    - ClearPagePrivate(page);
    wb->usage--;
    }
    }
    @@ -854,6 +852,10 @@ int afs_page_mkwrite(struct vm_area_struct *vma, struct page *page)
    _enter("{{%x:%u},%x},{%lx}",
    vnode->fid.vid, vnode->fid.vnode, key_serial(key), page->index);

    + /* wait for the page to be written to the cache before we allow it to
    + * be modified */
    + wait_on_page_fscache_write(page);
    +
    do {
    lock_page(page);
    if (page->mapping == vma->vm_file->f_mapping)

    -
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  12. [PATCH 21/24] AFS: Improve handling of a rejected writeback

    Improve the handling of the case of a server rejecting an attempt to write back
    a cached write. AFS operates a write-back cache, so the following sequence of
    events can theoretically occur:

    CLIENT 1                          CLIENT 2
    =======================           =======================
    cat data >/the/file
    (sits in pagecache)
                                      fs setacl -dir /the/dir/of/the/file \
                                          -acl system:administrators rlidka
    (write permission removed for client 1)
    sync
    (writeback attempt fails)

    The way AFS attempts to handle this is:

    (1) The affected region will be excised and discarded on the basis that it
    can't be written back, yet we don't want it lurking in the page cache
    either. The contents of the affected region will be reread from the
    server when called for again.

    (2) The EOF size will be set to the current server-based file size - usually
    that which it was before the affected write was made - assuming no
    conflicting write has been appended, and assuming the affected write
    extended the file.
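    The EOF wind-back in (2) reduces to simple arithmetic: if the inode's current size
    equals the end of the rejected write, that write must have been the last thing to
    extend the file, so the size reverts to the server's value; otherwise a conflicting
    append has moved EOF and it is left alone. A minimal userspace sketch (the field
    names `last` and `to_last` follow the patch below; the function and struct here are
    purely illustrative):

    ```c
    #include <assert.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12

    /* wind back i_size if the rejected write was the last thing to extend it */
    static long long rejected_write_i_size(long long i_size,
                                           unsigned long wb_last, /* last page index in writeback */
                                           unsigned wb_to_last,   /* bytes used in that last page */
                                           long long server_size)
    {
            long long wb_end = ((long long) wb_last << PAGE_SHIFT) + wb_to_last;

            if (i_size == wb_end)   /* no conflicting append since the write */
                    return server_size;
            return i_size;          /* someone else extended the file; keep it */
    }

    int main(void)
    {
            /* write covered pages 0..1, 100 bytes into page 1 => EOF at 4196 */
            assert(rejected_write_i_size(4196, 1, 100, 4096) == 4096);
            /* a later append moved EOF past the rejected write: keep it */
            assert(rejected_write_i_size(9000, 1, 100, 4096) == 9000);
            printf("ok\n");
            return 0;
    }
    ```

    This mirrors the `i_size_write(&vnode->vfs_inode, vnode->status.size)` branch in
    the afs_write_rejected() hunk of the diff.
    
    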


    This patch makes the following changes:

    (1) Zero-length short reads no longer produce EBADMSG just because the OpenAFS
    server puts a silly value in the size of the returned data. This prevents
    excised pages beyond the revised EOF being reinstantiated with a surprise
    PG_error.

    (2) Writebacks can now be put into a 'rejected' state in which all further
    attempts to write them back will result in excision of the affected pages
    instead.

    (3) Preparing a page for overwriting now reads the whole page instead of just
    those parts of it that aren't to be covered by the copy to be made. This
    handles the possibility that the copy might fail on EFAULT. Corollary to
    this, PG_uptodate can now be set by afs_prepare_page() on behalf of
    afs_prepare_write() rather than setting it in afs_commit_write().

    (4) In the case of a conflicting write, afs_prepare_write() will attempt to
    flush the write to the server, and will then wait for PG_writeback to go
    away - after unlocking the page. This helps prevent deadlock against the
    writeback-rejection handler. AOP_TRUNCATED_PAGE is then returned to the
    caller to signify that the page has been unlocked, and that it should be
    revalidated.

    (5) The writeback-rejection handler now calls cancel_rejected_write() added by
    the previous patch to excise the affected pages rather than clearing the
    PG_uptodate flag on all the pages.

    Signed-off-by: David Howells
    ---

    fs/afs/fsclient.c | 4 +
    fs/afs/internal.h | 1
    fs/afs/write.c | 154 ++++++++++++++++++++++++++++-------------------------
    3 files changed, 85 insertions(+), 74 deletions(-)

    diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c
    index 023b95b..04584c0 100644
    --- a/fs/afs/fsclient.c
    +++ b/fs/afs/fsclient.c
    @@ -353,7 +353,9 @@ static int afs_deliver_fs_fetch_data(struct afs_call *call,

    call->count = ntohl(call->tmp);
    _debug("DATA length: %u", call->count);
    - if (call->count > PAGE_SIZE)
    + if ((s32) call->count < 0)
    + call->count = 0; /* access completely beyond EOF */
    + else if (call->count > PAGE_SIZE)
    return -EBADMSG;
    call->offset = 0;
    call->unmarshall++;
    diff --git a/fs/afs/internal.h b/fs/afs/internal.h
    index 6306438..e1bcce0 100644
    --- a/fs/afs/internal.h
    +++ b/fs/afs/internal.h
    @@ -156,6 +156,7 @@ struct afs_writeback {
    AFS_WBACK_PENDING, /* write pending */
    AFS_WBACK_CONFLICTING, /* conflicting writes posted */
    AFS_WBACK_WRITING, /* writing back */
    + AFS_WBACK_REJECTED, /* the writeback was rejected */
    AFS_WBACK_COMPLETE /* the writeback record has been unlinked */
    } state __attribute__((packed));
    };
    diff --git a/fs/afs/write.c b/fs/afs/write.c
    index a03b92a..ac621e8 100644
    --- a/fs/afs/write.c
    +++ b/fs/afs/write.c
    @@ -81,18 +81,16 @@ void afs_put_writeback(struct afs_writeback *wb)
    }

    /*
    - * partly or wholly fill a page that's under preparation for writing
    + * fill a page that's under preparation for writing
    */
    static int afs_fill_page(struct afs_vnode *vnode, struct key *key,
    - unsigned start, unsigned len, struct page *page)
    + unsigned len, struct page *page)
    {
    int ret;

    - _enter(",,%u,%u", start, len);
    + _enter(",,%u,", len);

    - ASSERTCMP(start + len, <=, PAGE_SIZE);
    -
    - ret = afs_vnode_fetch_data(vnode, key, start, len, page);
    + ret = afs_vnode_fetch_data(vnode, key, 0, len, page);
    if (ret < 0) {
    if (ret == -ENOENT) {
    _debug("got NOENT from server"
    @@ -110,18 +108,15 @@ static int afs_fill_page(struct afs_vnode *vnode, struct key *key,
    * prepare a page for being written to
    */
    static int afs_prepare_page(struct afs_vnode *vnode, struct page *page,
    - struct key *key, unsigned offset, unsigned to)
    + struct key *key)
    {
    - unsigned eof, tail, start, stop, len;
    + unsigned len;
    loff_t i_size, pos;
    void *p;
    int ret;

    _enter("");

    - if (offset == 0 && to == PAGE_SIZE)
    - return 0;
    -
    p = kmap_atomic(page, KM_USER0);

    i_size = i_size_read(&vnode->vfs_inode);
    @@ -129,10 +124,7 @@ static int afs_prepare_page(struct afs_vnode *vnode, struct page *page,
    if (pos >= i_size) {
    /* partial write, page beyond EOF */
    _debug("beyond");
    - if (offset > 0)
    - memset(p, 0, offset);
    - if (to < PAGE_SIZE)
    - memset(p + to, 0, PAGE_SIZE - to);
    + memset(p, 0, PAGE_SIZE);
    kunmap_atomic(p, KM_USER0);
    return 0;
    }
    @@ -140,31 +132,20 @@ static int afs_prepare_page(struct afs_vnode *vnode, struct page *page,
    if (i_size - pos >= PAGE_SIZE) {
    /* partial write, page entirely before EOF */
    _debug("before");
    - tail = eof = PAGE_SIZE;
    + len = PAGE_SIZE;
    } else {
    /* partial write, page overlaps EOF */
    - eof = i_size - pos;
    - _debug("overlap %u", eof);
    - tail = max(eof, to);
    - if (tail < PAGE_SIZE)
    - memset(p + tail, 0, PAGE_SIZE - tail);
    - if (offset > eof)
    - memset(p + eof, 0, PAGE_SIZE - eof);
    + len = i_size - pos;
    + _debug("overlap %u", len);
    + ASSERTRANGE(0, <, len, <, PAGE_SIZE);
    + memset(p + len, 0, PAGE_SIZE - len);
    }

    kunmap_atomic(p, KM_USER0);

    - ret = 0;
    - if (offset > 0 || eof > to) {
    - /* need to fill one or two bits that aren't going to be written
    - * (cover both fillers in one read if there are two) */
    - start = (offset > 0) ? 0 : to;
    - stop = (eof > to) ? eof : offset;
    - len = stop - start;
    - _debug("wr=%u-%u av=0-%u rd=%u@%u",
    - offset, to, eof, start, len);
    - ret = afs_fill_page(vnode, key, start, len, page);
    - }
    + ret = afs_fill_page(vnode, key, len, page);
    + if (ret == 0)
    + SetPageUptodate(page);

    _leave(" = %d", ret);
    return ret;
    @@ -187,6 +168,8 @@ int afs_prepare_write(struct file *file, struct page *page,
    _enter("{%x:%u},{%lx},%u,%u",
    vnode->fid.vid, vnode->fid.vnode, page->index, offset, to);

    + BUG_ON(PageError(page));
    +
    candidate = kzalloc(sizeof(*candidate), GFP_KERNEL);
    if (!candidate)
    return -ENOMEM;
    @@ -200,7 +183,7 @@ int afs_prepare_write(struct file *file, struct page *page,

    if (!PageUptodate(page)) {
    _debug("not up to date");
    - ret = afs_prepare_page(vnode, page, key, offset, to);
    + ret = afs_prepare_page(vnode, page, key);
    if (ret < 0) {
    kfree(candidate);
    _leave(" = %d [prep]", ret);
    @@ -269,21 +252,41 @@ flush_conflicting_wb:
    _debug("flush conflict");
    if (wb->state == AFS_WBACK_PENDING)
    wb->state = AFS_WBACK_CONFLICTING;
    + wb->usage++;
    spin_unlock(&vnode->writeback_lock);
    - if (PageDirty(page)) {
    + if (!PageDirty(page) && !PageWriteback(page)) {
    + /* no change outstanding - just reuse the page */
    + _debug("reuse");
    + afs_put_writeback(wb);
    + set_page_private(page, 0);
    + ClearPagePrivate(page);
    + goto try_again;
    + }
    +
    + kfree(candidate);
    +
    + /* if we're busy writing back a conflicting write, then unlock the page
    + * and wait for the writeback to complete - this lets the process doing
    + * the write-out handle rejection without deadlock */
    + if (PageWriteback(page)) {
    + _debug("wait wb");
    + unlock_page(page);
    + } else {
    + /* there's a conflicting modification we have to write back and
    + * wait for before letting the next one proceed */
    + _debug("dirty");
    ret = afs_write_back_from_locked_page(wb, page);
    if (ret < 0) {
    - afs_put_writeback(candidate);
    + afs_put_writeback(wb);
    _leave(" = %d", ret);
    return ret;
    }
    }

    - /* the page holds a ref on the writeback record */
    afs_put_writeback(wb);
    - set_page_private(page, 0);
    - ClearPagePrivate(page);
    - goto try_again;
    + wait_on_page_writeback(page);
    + _leave(" = A_T_P");
    + return AOP_TRUNCATED_PAGE;
    }

    /*
    @@ -310,7 +313,6 @@ int afs_commit_write(struct file *file, struct page *page,
    spin_unlock(&vnode->writeback_lock);
    }

    - SetPageUptodate(page);
    set_page_dirty(page);
    if (PageDirty(page))
    _debug("dirtied");
    @@ -319,38 +321,35 @@ int afs_commit_write(struct file *file, struct page *page,
    }

    /*
    - * kill all the pages in the given range
    + * note the failure of a write, either due to an error or to a permission
    + * failure
    + * - all the pages in the affected range must have PG_writeback set
    + * - the caller must be responsible for the pages: no-one else should be trying
    + * to note rejection
    */
    -static void afs_kill_pages(struct afs_vnode *vnode, bool error,
    - pgoff_t first, pgoff_t last)
    +static void afs_write_rejected(struct afs_writeback *wb, bool error,
    + pgoff_t first, pgoff_t last)
    {
    - struct pagevec pv;
    - unsigned count, loop;
    + struct afs_vnode *vnode = wb->vnode;
    + loff_t i_size;

    _enter("{%x:%u},%lx-%lx",
    vnode->fid.vid, vnode->fid.vnode, first, last);

    - pagevec_init(&pv, 0);
    -
    - do {
    - _debug("kill %lx-%lx", first, last);
    -
    - count = last - first + 1;
    - if (count > PAGEVEC_SIZE)
    - count = PAGEVEC_SIZE;
    - pv.nr = find_get_pages_contig(vnode->vfs_inode.i_mapping,
    - first, count, pv.pages);
    - ASSERTCMP(pv.nr, ==, count);
    -
    - for (loop = 0; loop < count; loop++) {
    - ClearPageUptodate(pv.pages[loop]);
    - if (error)
    - SetPageError(pv.pages[loop]);
    - end_page_writeback(pv.pages[loop]);
    - }
    + spin_lock(&vnode->writeback_lock);
    + wb->state = AFS_WBACK_REJECTED;
    +
    + /* wind back the file size if this write extended the file, and wasn't
    + * followed by a conflicting write */
    + i_size = ((loff_t) wb->last) << PAGE_SHIFT;
    + i_size += wb->to_last;
    + if (i_size_read(&vnode->vfs_inode) == i_size) {
    + _debug("shorten");
    + i_size_write(&vnode->vfs_inode, vnode->status.size);
    + }

    - __pagevec_release(&pv);
    - } while (first < last);
    + spin_unlock(&vnode->writeback_lock);
    + cancel_rejected_write(vnode->vfs_inode.i_mapping, first, last);

    _leave("");
    }
    @@ -358,6 +357,7 @@ static void afs_kill_pages(struct afs_vnode *vnode, bool error,
    /*
    * synchronously write back the locked page and any subsequent non-locked dirty
    * pages also covered by the same writeback record
    + * - all pages written will be unlocked prior to returning
    */
    static int afs_write_back_from_locked_page(struct afs_writeback *wb,
    struct page *primary_page)
    @@ -407,6 +407,7 @@ static int afs_write_back_from_locked_page(struct afs_writeback *wb,
    if (TestSetPageLocked(page))
    break;
    if (!PageDirty(page) ||
    + PageWriteback(page) ||
    page_private(page) != (unsigned long) wb) {
    unlock_page(page);
    break;
    @@ -430,14 +431,22 @@ static int afs_write_back_from_locked_page(struct afs_writeback *wb,

    no_more:
    /* we now have a contiguous set of dirty pages, each with writeback set
    - * and the dirty mark cleared; the first page is locked and must remain
    - * so, all the rest are unlocked */
    + * and the dirty mark cleared; all the pages barring the first are now
    + * unlocked */
    first = primary_page->index;
    last = first + count - 1;
    + unlock_page(primary_page);

    offset = (first == wb->first) ? wb->offset_first : 0;
    to = (last == wb->last) ? wb->to_last : PAGE_SIZE;

    + if (wb->state == AFS_WBACK_REJECTED) {
    + cancel_rejected_write(wb->vnode->vfs_inode.i_mapping,
    + first, last);
    + ret = 0;
    + goto out;
    + }
    +
    _debug("write back %lx[%u..] to %lx[..%u]", first, offset, last, to);

    ret = afs_vnode_store_data(wb, first, last, offset, to);
    @@ -455,7 +464,7 @@ no_more:
    case -ENOENT:
    case -ENOMEDIUM:
    case -ENXIO:
    - afs_kill_pages(wb->vnode, true, first, last);
    + afs_write_rejected(wb, true, first, last);
    set_bit(AS_EIO, &wb->vnode->vfs_inode.i_mapping->flags);
    break;
    case -EACCES:
    @@ -464,7 +473,7 @@ no_more:
    case -EKEYEXPIRED:
    case -EKEYREJECTED:
    case -EKEYREVOKED:
    - afs_kill_pages(wb->vnode, false, first, last);
    + afs_write_rejected(wb, false, first, last);
    break;
    default:
    break;
    @@ -473,6 +482,7 @@ no_more:
    ret = count;
    }

    +out:
    _leave(" = %d", ret);
    return ret;
    }
    @@ -493,7 +503,6 @@ int afs_writepage(struct page *page, struct writeback_control *wbc)
    ASSERT(wb != NULL);

    ret = afs_write_back_from_locked_page(wb, page);
    - unlock_page(page);
    if (ret < 0) {
    _leave(" = %d", ret);
    return 0;
    @@ -565,7 +574,6 @@ int afs_writepages_region(struct address_space *mapping,
    spin_unlock(&wb->vnode->writeback_lock);

    ret = afs_write_back_from_locked_page(wb, page);
    - unlock_page(page);
    page_cache_release(page);
    if (ret < 0) {
    _leave(" = %d", ret);


  13. [PATCH 22/24] AFS: Implement shared-writable mmap

    Implement shared-writable mmap for AFS.

    The key with which to access the file is obtained from the VMA at the point
    where the PTE is made writable by the page_mkwrite() VMA op and cached in the
    affected page.

    If there's an outstanding write on the page made with a different key, then
    page_mkwrite() will flush it before attaching a record of the new key.

    Signed-off-by: David Howells
    ---

    fs/afs/file.c | 20 +++++++++++++++++++-
    fs/afs/internal.h | 1 +
    fs/afs/write.c | 35 +++++++++++++++++++++++++++++++++++
    3 files changed, 55 insertions(+), 1 deletions(-)

    diff --git a/fs/afs/file.c b/fs/afs/file.c
    index 525f7c5..1323df4 100644
    --- a/fs/afs/file.c
    +++ b/fs/afs/file.c
    @@ -22,6 +22,7 @@ static int afs_readpage(struct file *file, struct page *page);
    static void afs_invalidatepage(struct page *page, unsigned long offset);
    static int afs_releasepage(struct page *page, gfp_t gfp_flags);
    static int afs_launder_page(struct page *page);
    +static int afs_mmap(struct file *file, struct vm_area_struct *vma);

    const struct file_operations afs_file_operations = {
    .open = afs_open,
    @@ -31,7 +32,7 @@ const struct file_operations afs_file_operations = {
    .write = do_sync_write,
    .aio_read = generic_file_aio_read,
    .aio_write = afs_file_write,
    - .mmap = generic_file_readonly_mmap,
    + .mmap = afs_mmap,
    .splice_read = generic_file_splice_read,
    .fsync = afs_fsync,
    .lock = afs_lock,
    @@ -56,6 +57,11 @@ const struct address_space_operations afs_fs_aops = {
    .writepages = afs_writepages,
    };

    +static struct vm_operations_struct afs_file_vm_ops = {
    + .fault = filemap_fault,
    + .page_mkwrite = afs_page_mkwrite,
    +};
    +
    /*
    * open an AFS file or directory and attach a key to it
    */
    @@ -295,3 +301,15 @@ static int afs_releasepage(struct page *page, gfp_t gfp_flags)
    _leave(" = 0");
    return 0;
    }
    +
    +/*
    + * memory map part of an AFS file
    + */
    +static int afs_mmap(struct file *file, struct vm_area_struct *vma)
    +{
    + _enter("");
    +
    + file_accessed(file);
    + vma->vm_ops = &afs_file_vm_ops;
    + return 0;
    +}
    diff --git a/fs/afs/internal.h b/fs/afs/internal.h
    index e1bcce0..12afccc 100644
    --- a/fs/afs/internal.h
    +++ b/fs/afs/internal.h
    @@ -743,6 +743,7 @@ extern ssize_t afs_file_write(struct kiocb *, const struct iovec *,
    unsigned long, loff_t);
    extern int afs_writeback_all(struct afs_vnode *);
    extern int afs_fsync(struct file *, struct dentry *, int);
    +extern int afs_page_mkwrite(struct vm_area_struct *, struct page *);


    /*****************************************************************************/
    diff --git a/fs/afs/write.c b/fs/afs/write.c
    index ac621e8..dd471f0 100644
    --- a/fs/afs/write.c
    +++ b/fs/afs/write.c
    @@ -155,6 +155,8 @@ static int afs_prepare_page(struct afs_vnode *vnode, struct page *page,
    * prepare to perform part of a write to a page
    * - the caller holds the page locked, preventing it from being written out or
    * modified by anyone else
    + * - may be called from afs_page_mkwrite() to set up a page for modification
    + * through shared-writable mmap
    */
    int afs_prepare_write(struct file *file, struct page *page,
    unsigned offset, unsigned to)
    @@ -833,3 +835,36 @@ int afs_fsync(struct file *file, struct dentry *dentry, int datasync)
    _leave(" = %d", ret);
    return ret;
    }
    +
    +/*
    + * notification that a previously read-only page is about to become writable
    + * - if it returns an error, the caller will deliver a bus error signal
    + *
    + * we use this to make a record of the key with which the writeback should be
    + * performed and to flush any outstanding writes made with a different key
    + *
    + * the key to be used is attached to the struct file pinned by the VMA
    + */
    +int afs_page_mkwrite(struct vm_area_struct *vma, struct page *page)
    +{
    + struct afs_vnode *vnode = AFS_FS_I(vma->vm_file->f_mapping->host);
    + struct key *key = vma->vm_file->private_data;
    + int ret;
    +
    + _enter("{{%x:%u},%x},{%lx}",
    + vnode->fid.vid, vnode->fid.vnode, key_serial(key), page->index);
    +
    + do {
    + lock_page(page);
    + if (page->mapping == vma->vm_file->f_mapping)
    + ret = afs_prepare_write(vma->vm_file, page, 0,
    + PAGE_SIZE);
    + else
    + ret = 0; /* seems there was interference - let the
    + * caller deal with it */
    + unlock_page(page);
    + } while (ret == AOP_TRUNCATED_PAGE);
    +
    + _leave(" = %d", ret);
    + return ret;
    +}


  14. Re: [PATCH 01/24] CRED: Introduce a COW credentials record

    On Wed, Sep 26, 2007 at 03:21:05PM +0100, David Howells wrote:
    > To alter the credentials record, a copy must be made. This copy may then be
    > altered and then the pointer in the task_struct redirected to it. From that
    > point on the new record should be considered immutable.


    Umm... Perhaps a better primitive would be "make sure that our cred is
    not shared with anybody, creating a copy and redirecting reference to
    it if needed".


    > In addition, the default setting of i_uid and i_gid to fsuid and fsgid has been
    > moved from the callers of new_inode() into new_inode() itself.


    I don't think it's safe; better do something trivial like
    own_inode(inode)
    that would set these (and that's a good splitup candidate, to go in front
    of the series).


    FWIW, the main weakness here is the need of update_current_cred()
    splattered all over the entry points. Two problems:
    a) it's a bug source (somebody adds a syscall and forgets to
    add that call / somebody modifies syscall guts and doesn't notice that
    it needs to be added).
    b) it's almost always doing nothing, so being lazier would be
    better (event numbers checked in the inlined part, perhaps?)
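    [Editorial note: Al's "event numbers checked in the inlined part" idea can
    be sketched with a generation counter, so the common case is a single
    compare. The names below are hypothetical, not from the patch set.]

    ```c
    #include <stdio.h>
    #include <assert.h>

    static unsigned long cred_events;	/* bumped whenever another task
    					 * changes our keys/caps */
    struct task {
    	unsigned long cred_seen;	/* events folded in so far */
    	int updates;			/* slow-path count, for the demo */
    };

    /* Out-of-line slow path: re-shadow keyrings and capabilities. */
    static void __update_cred_slowpath(struct task *tsk)
    {
    	tsk->updates++;
    	tsk->cred_seen = cred_events;
    }

    /* Cheap inlined check: almost always a compare-and-skip. */
    static inline void update_current_cred(struct task *tsk)
    {
    	if (tsk->cred_seen != cred_events)
    		__update_cred_slowpath(tsk);
    }

    int main(void)
    {
    	struct task t = { .cred_seen = 0, .updates = 0 };

    	cred_events++;			/* e.g. sibling replaced our
    					 * session keyring */
    	update_current_cred(&t);	/* slow path taken once */
    	update_current_cred(&t);	/* no-op */
    	update_current_cred(&t);	/* no-op */

    	assert(t.updates == 1);
    	printf("ok\n");
    	return 0;
    }
    ```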


    The former would be more robust if it had been closer to the places where
    we get to passing current->cred to functions. The latter... When do
    we actually step into this kind of situation (somebody changing keys on
    us) and what's the right semantics here? E.g. if it happens in the middle
    of long read(), do we want to keep using the original keys?

    Comments?

  15. Re: [PATCH 01/24] CRED: Introduce a COW credentials record

    Al Viro wrote:

    > Umm... Perhaps a better primitive would be "make sure that our cred is
    > not shared with anybody, creating a copy and redirecting reference to
    > it if needed".


    I wanted to make the point that once a cred record was made live - i.e. exposed
    to the rest of the system - it should not be changed. I'll think about
    rewording that. Also "making sure that our cred is not shared" does not work
    for cachefiles where we actually want to create a new set of creds.

    Al Viro wrote:

    > > In addition, the default setting of i_uid and i_gid to fsuid and fsgid has
    > > been moved from the callers of new_inode() into new_inode() itself.

    >
    > I don't think it's safe; better do something trivial like
    > own_inode(inode)
    > that would set these (and that's a good splitup candidate, to go in front
    > of the series).


    I think you're probably right. I commented on this at the bottom of the cover
    note. One thing I could do is provide a variant on own_inode() that takes a
    parent dir inode pointer and does the sticky GID thing - something that several
    filesystems do.
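    [Editorial note: the variant David describes, taking a parent directory and
    doing the sticky-GID (setgid directory) inheritance, could look roughly
    like this. A toy userspace model; the struct layouts and the own_inode()
    helpers are illustrative, not actual kernel code.]

    ```c
    #include <stdio.h>
    #include <assert.h>

    #define S_ISGID 02000		/* setgid bit, as in <sys/stat.h> */

    struct inode { unsigned uid, gid, mode; };
    struct cred  { unsigned fsuid, fsgid; };

    /* Default ownership from the acting credentials. */
    static void own_inode(struct inode *inode, const struct cred *cred)
    {
    	inode->uid = cred->fsuid;
    	inode->gid = cred->fsgid;
    }

    /* Variant taking the parent dir: on a setgid directory the new inode
     * inherits the directory's group rather than the creator's fsgid. */
    static void own_inode_dir(struct inode *inode, const struct cred *cred,
    			  const struct inode *dir)
    {
    	own_inode(inode, cred);
    	if (dir->mode & S_ISGID)
    		inode->gid = dir->gid;
    }

    int main(void)
    {
    	struct cred cred = { .fsuid = 1000, .fsgid = 1000 };
    	struct inode dir = { .uid = 0, .gid = 50,
    			     .mode = 040000 | S_ISGID };
    	struct inode file = { 0 };

    	own_inode_dir(&file, &cred, &dir);
    	assert(file.uid == 1000);
    	assert(file.gid == 50);	/* group inherited from setgid dir */
    	printf("ok\n");
    	return 0;
    }
    ```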

    > FWIW, the main weakness here is the need of update_current_cred() splattered
    > all over the entry points.


    Yeah. I'm not keen on that, but I'm even less keen on sticking something in
    everywhere that the cred struct is consulted. I don't like the idea of making
    it implicit in the dereference of current->cred either, and neither is Linus.

    > Two problems:
    > a) it's a bug source (somebody adds a syscall and forgets to
    > add that call / somebody modifies syscall guts and doesn't notice that
    > it needs to be added).


    It's simpler to check for its existence at the beginning of a syscall.

    > b) it's almost always doing nothing, so being lazier would be
    > better (event numbers checked in the inlined part, perhaps?)


    Linus is against having an inlined part. :-/

    > The former would be more robust if it had been closer to the places where
    > we get to passing current->cred to functions.


    You can't do it there because there may be an override in effect. Or,
    rather, if you do it there, you have to skip it when an override is set.

    > The latter... When do we actually step into this kind of situation (somebody
    > changing keys on us)


    There are four cases:

    (1) The request_key() upcall forces us to create a thread keyring.

    (2) The request_key() upcall forces us to create a process keyring.

    (3) A sibling thread instantiates our common process keyring.

    (4) A sibling thread replaces our common session keyring.

    The first three could be trivially avoidable by creating the thread and process
    keyrings in advance, (1) and (2) at request_key() time, (3) at clone time. It
    eats extra resources, but it's easy.

    The fourth is more tricky. A sibling thread can replace our common session
    keyring on us at any time. I suppose we could decree that you can't replace
    your session keyring if you've got multiple threads. That ought to be simple
    enough, and I suspect it won't have much impact in practice.

    The alternatives to that restriction are (b) not to include the keyrings in
    the cred struct, though they are relevant; and (c) to make it possible for
    sibling threads to change each other's creds. I'm really not keen on (c) as
    that means you can't just dereference your own creds directly without taking
    locks and stuff.

    > and what's the right semantics here? E.g. if it happens
    > in the middle of long read(), do we want to keep using the original keys?


    If you're in the middle of a long read(), you should be using the cred struct
    attached to file->f_cred, not current->cred, and so that problem should not
    arise.

    As for long ops that aren't I/O operations on file descriptors, I think it's
    reasonable for you to do the entire op with the creds you started off doing it
    with.

    Don't forget that there's also the cap_effective stuff, which it appears can
    be changed by a process other than the target process.

    David

+ Reply to Thread