Re: [9fans] 9p over high-latency
> On Fri, Sep 12, 2008 at 7:47 PM, erik quanstrom <firstname.lastname@example.org> wrote:
> > as an aside: i don't think 9p itself limits plan 9 performance
> > over high-latency links. the limitations have more to do with
> > the number of outstanding messages, which is 1 in the mnt
> > driver.
> Hm, but what's the alternative here? Readahead seems somewhat
> attractive, if difficult (I worry about blocking reads and timing
> sensitive file systems). But there's one problem I can't resolve - how
> do you know what offset to Tread without consulting the previous
> Rread's count?
> Actually, I understand there has been discussion about grouping tags
> to allow for things like Twalk/Topen batching without waiting for
> Rwalk (which sounds like a great idea), maybe that would work here.
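the offset question in the quoted text can be made concrete with a toy
sketch (Python; IOUNIT, the window size, and the full-count assumption
are all mine, not anything 9p promises):

```python
# Sketch of speculative read-ahead: issue several Treads without
# waiting for each Rread, guessing offsets by assuming every read of
# a regular file returns a full count until end of file. That
# assumption is exactly what 9p does not promise, which is the
# problem raised above.

IOUNIT = 8192  # assumed payload per read (hypothetical, from msize)

def speculative_offsets(start, window):
    # offsets for `window` pipelined Treads of IOUNIT bytes each
    return [start + i * IOUNIT for i in range(window)]

def reconcile(offsets, counts):
    # once the Rreads arrive, data is only trustworthy up to the
    # first short read; anything speculated past it must be thrown
    # away (or re-issued at the corrected offset)
    valid = []
    for off, n in zip(offsets, counts):
        valid.append((off, n))
        if n < IOUNIT:  # short read: EOF, or a server returning less
            break
    return valid
```

a short read in the middle forces the client to discard the rest of
the window, so this only pays off when full-count reads are the common
case -- which is why it is a poor fit for synthetic files.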
the fundamental problem is that it becomes very difficult to
implement fileservers which don't serve up regular files.
you might make permanent changes to something stored on
a disk with readahead.
since one of the main points of plan 9 is to get rid of special
files, ioctls and whatnot, read ahead seems unattractive.
i'll admit that i don't understand the point of batching walks.
i'm not sure why one would set up a case where you know you'll
have a long network and where you know you'll need to execute
a lot of walks. most applications that do most i/o in a particular
directory set . to that directory to avoid the walks.
i'm not sure that octopus wouldn't be better off optimizing
latency by running many more threads. but that's just an ignorant
opinion.

Re: [9fans] 9p over high-latency
On Thu, Sep 18, 2008 at 6:51 AM, erik quanstrom <email@example.com> wrote:
>> On Fri, Sep 12, 2008 at 7:47 PM, erik quanstrom <firstname.lastname@example.org> wrote:
> the fundamental problem is that it becomes very difficult to
> implement fileservers which don't serve up regular files.
> you might make perminant changes to something stored on
> a disk with readahead.
My experience is that there are a couple of different scenarios here
-- there's dealing with synthetic file systems, dealing with regular
files, and then there is dealing with both. Latency can affect all
three situations -- my understanding was that Op was actually
developed to deal with latency problems in dealing with the deep
hierarchies of the Octopus synthetic file systems.
There are likely a number of optimizations possible when dealing with
regular files -- but we currently don't give many/any hints in the
protocol as to what kind of optimizations are valid on a particular
fid -- and with things like batching walk/open it's even more difficult
as you may cross mount points which invalidate the type of
optimization you think you can do.
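The mount-point hazard can be shown with a toy model (Python;
`server_walk` is a stand-in for the Twalk/Rwalk exchange, not a real
API): a pipelined open is only usable if the whole walk succeeded.

```python
def apply_batch(server_walk, elems):
    # server_walk(elems) returns qids for the elements it actually
    # walked; 9p allows it to stop early (an error, or in this toy
    # model, a mount point the server cannot see across)
    qids = server_walk(elems)
    if len(qids) == len(elems):
        # full walk: the Topen pipelined behind the Twalk referred
        # to a fid that really exists
        return ("open-ok", qids)
    # partial walk: the speculated open targeted a fid that was
    # never established; fall back to sequential walk then open
    return ("fallback", qids)
```

The fallback arm is the "return Error and let the client retry with a
safer set of operations" approach discussed in the next paragraph.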
Of course, if these were dealt with in a single protocol one approach
would be to return Error when attempting an invalid optimization
allowing clients to fall-back to a safer set of operations. I do tend
to agree with Uriel that extensions, such as Op, may be better done in
a complementary op-code space to make this sort of negotiation
possible. Unfortunately this can add quite a bit of complexity to the
clients and servers, so it's not clear to me that it's a definite win.
If you know you are dealing exclusively with regular files, I would
suggest starting with something like cfs(4) and playing with different
potential optimizations there such as read-ahead, loose caches,
directory caches, temporal caches, etc. Most of these techniques are
things you'd never want to look at with a synthetic file service, but
should provide a route for most of the optimizations you might want in
a wide-area-file-system -- particularly if you have exclusive access
and aren't worried about coherency.
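A minimal sketch of the kind of interposer meant here (Python; the
block size and the fetch callback are assumptions on my part, and this
bears no resemblance to the actual cfs(4) implementation):

```python
class ReadAheadCache:
    # Toy cfs(4)-style interposer: on a miss, fetch the wanted block
    # plus the next one. Only reasonable for regular files with
    # exclusive access -- there is no coherency here, and a synthetic
    # file would see reads it was never asked for.
    def __init__(self, fetch, blocksize=8192):
        self.fetch = fetch          # fetch(offset, count) -> bytes
        self.blocksize = blocksize
        self.cache = {}             # block offset -> bytes

    def read(self, offset, count):
        # assumes the read stays inside one block; a real cache
        # would stitch adjacent blocks together
        block = offset - offset % self.blocksize
        if block not in self.cache:
            self.cache[block] = self.fetch(block, self.blocksize)
            nxt = block + self.blocksize
            if nxt not in self.cache:   # the read-ahead itself
                self.cache[nxt] = self.fetch(nxt, self.blocksize)
        lo = offset - block
        return self.cache[block][lo:lo + count]
```

Sequential readers get every second block for free; the cost is that
writes from any other client are invisible, which is the coherency
caveat above.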
If you are worried about coherency, you probably don't want to be
doing any of these optimizations. There have been some conversations
about how to approach coherent caching, and I think some folks have
started working on it, but nothing is available yet.