Re: i/o bandwidth controller infrastructure - Kernel



Thread: Re: i/o bandwidth controller infrastructure

  1. Re: i/o bandwidth controller infrastructure

    Divyesh Shah wrote:
    >> This is the core io-throttle kernel infrastructure. It creates the
    >> basic interfaces to cgroups and implements the I/O measurement and
    >> throttling functions.

    >
    > I am not sure if throttling an application's cpu usage by explicitly
    > putting it to sleep in order to restrain it from making more IO
    > requests is the way to go here (though I can't think of anything
    > better right now).
    > With this bandwidth controller, a cpu-intensive job which otherwise
    > does not care about its IO performance needs to be pin-point accurate
    > about the IO bandwidth required in order to not suffer from
    > cpu-throttling. IMHO, if a cgroup is exceeding its limit for a given
    > resource, the throttling should be done _only_ for that resource.
    >
    > -Divyesh


    Divyesh,

    I understand your point of view. It would be nice if we could just
    "disable" the i/o for a cgroup that exceeds its limit, instead of
    scheduling some sleep()s, so the tasks running in this cgroup would be
    able to continue their non-i/o operations as usual.

    However, what do we do if the tasks continue to perform i/o operations
    under this condition? We could cache the i/o in memory and at the same
    time reduce the i/o priority of those tasks' requests, but this would
    require a lot of memory and more space in the page cache, and could
    lead to potential OOM conditions. A safer approach, IMHO, is to force
    the tasks to wait synchronously on each operation that directly or
    indirectly generates i/o. The latter is the solution implemented by
    this bandwidth controller.

    We could collect additional statistics, or implement some heuristics to
    predict the tasks' i/o patterns in order to not penalize cpu-bound jobs
    too much, but the basic concept is the same.
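    The accounting behind those forced sleeps can be sketched roughly as
    follows. This is a hedged userspace illustration of the idea, not the
    actual patch code: throttle_sleep_ms and all of its parameters are
    names made up for the example.

```c
#include <stdint.h>

/* Hypothetical helper: given the number of bytes a cgroup has
 * transferred in the current accounting window, its configured limit in
 * bytes per second, and how long (in ms) the window has been open,
 * return how many milliseconds the issuing task should sleep so that
 * the average rate falls back under the limit. */
uint64_t throttle_sleep_ms(uint64_t bytes, uint64_t limit_bps,
                           uint64_t elapsed_ms)
{
    /* Time the transferred bytes are "worth" at the allowed rate. */
    uint64_t allowed_ms = bytes * 1000 / limit_bps;

    /* Already at or under the limit: no sleep needed. */
    if (allowed_ms <= elapsed_ms)
        return 0;

    /* Sleep off the excess. */
    return allowed_ms - elapsed_ms;
}
```

    A cgroup that moved 2 MB in half a second against a 1 MB/s limit
    would be put to sleep for the remaining 1.5 s its i/o "cost"; a
    cgroup under its limit sleeps for zero, which is why a mostly
    cpu-bound job only pays when it actually exceeds the budget.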

    Anyway, I agree there must be a better solution, but this is the best
    I've found right now... nice ideas are welcome.

    -Andrea
    --
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  2. Re: i/o bandwidth controller infrastructure

    On Tue, 17 Jun 2008, Andrea Righi wrote:
    > > With this bandwidth controller, a cpu-intensive job which otherwise
    > > does not care about its IO performance needs to be pin-point
    > > accurate about the IO bandwidth required in order to not suffer
    > > from cpu-throttling. IMHO, if a cgroup is exceeding its limit for a
    > > given resource, the throttling should be done _only_ for that
    > > resource.

    >
    > I understand your point of view. It would be nice if we could just
    > "disable" the i/o for a cgroup that exceeds its limit, instead of
    > scheduling some sleep()s, so the tasks running in this cgroup would be
    > able to continue their non-i/o operations as usual.
    >
    > However, what do we do if the tasks continue to perform i/o operations
    > under this condition? We could cache the i/o in memory and at the same
    > time reduce the i/o priority of those tasks' requests, but this would
    > require a lot of memory and more space in the page cache, and could
    > lead to potential OOM conditions. A safer approach, IMHO, is to force
    > the tasks to wait synchronously on each operation that directly or
    > indirectly generates i/o. The latter is the solution implemented by
    > this bandwidth controller.


    What about AIO? Is this approach going to make the task sleep as well?
    Would it be better to return EAGAIN from aio_write()/aio_read()?

    Thanks.

  3. Re: i/o bandwidth controller infrastructure

    Eric Rannaud wrote:
    > On Tue, 17 Jun 2008, Andrea Righi wrote:
    >>> With this bandwidth controller, a cpu-intensive job which otherwise
    >>> does not care about its IO performance needs to be pin-point
    >>> accurate about the IO bandwidth required in order to not suffer
    >>> from cpu-throttling. IMHO, if a cgroup is exceeding its limit for a
    >>> given resource, the throttling should be done _only_ for that
    >>> resource.

    >> I understand your point of view. It would be nice if we could just
    >> "disable" the i/o for a cgroup that exceeds its limit, instead of
    >> scheduling some sleep()s, so the tasks running in this cgroup would be
    >> able to continue their non-i/o operations as usual.
    >>
    >> However, what do we do if the tasks continue to perform i/o operations
    >> under this condition? We could cache the i/o in memory and at the same
    >> time reduce the i/o priority of those tasks' requests, but this would
    >> require a lot of memory and more space in the page cache, and could
    >> lead to potential OOM conditions. A safer approach, IMHO, is to force
    >> the tasks to wait synchronously on each operation that directly or
    >> indirectly generates i/o. The latter is the solution implemented by
    >> this bandwidth controller.

    >
    > What about AIO? Is this approach going to make the task sleep as well?
    > Would it be better to return EAGAIN from aio_write()/aio_read()?


    Good point. I should check, but it seems sleeps are incorrectly
    performed for AIO requests as well. I agree the correct behaviour
    would be to return EAGAIN instead, as you suggested. I'll look into it
    if nobody comes up with a solution first.
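    The split could look roughly like the following. This is only a hedged
    sketch of the dispatch decision, not the patch's actual code path:
    account_and_throttle and is_aio are hypothetical names, and the
    synchronous sleep is only indicated by a comment.

```c
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical dispatch sketch: when a request would exceed the
 * cgroup's budget (sleep_ms > 0), a synchronous caller is put to sleep,
 * while an AIO submission fails fast with -EAGAIN so the task can keep
 * doing other work and retry later. Returns 0 when the request may
 * proceed. */
int account_and_throttle(uint64_t sleep_ms, bool is_aio)
{
    if (sleep_ms == 0)
        return 0;          /* under the limit: proceed immediately */

    if (is_aio)
        return -EAGAIN;    /* let aio_write()/aio_read() return EAGAIN */

    /* synchronous path: in-kernel this would be something like a
     * schedule_timeout()-based sleep for sleep_ms, then proceed */
    return 0;
}
```

    The point of the split is that only the synchronous path pays with
    blocked time; an async submitter sees the error, stays runnable, and
    decides for itself when to resubmit.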

    Thanks,
    -Andrea
