
Thread: [9fans] extending xen to allow driver development in Plan 9

  1. [9fans] extending xen to allow driver development in Plan 9

    This is mostly for Richard Miller but I don't have his email. But if
    you are interested in Xen, read along.

    We have an ok xen environment going. Why are we doing this? Per a
    certain person at xyz.com, we are looking at giving people a usable
    xen-based plan 9 environment, and at the same time letting them do
    driver work from Plan 9 by "poking holes" in Xen to let Plan 9 at the
    real hardware. Xen supports this, we think, although we have not got
    it going yet ...

    I already like the situation thus far, as Plan 9 under Xen is a ton
    faster than Plan 9 under qemu. You have to see it to believe it; if
    anything, the Xen advantage is better than it used to be. I was
    surprised.

    To get to the point of poking holes in Xen, it turned out I need
    pcifront. For pcifront I need xenbus. For xenbus I need xenstore.

    There is xenstore support in Plan 9 already, but ...

    The xenstore code sez: "incomplete". What would it take to complete
    it? Conservative use of locks in the short term, as a stopgap until
    it's really done right in the long term? The comment is this:

    * XXX This is incomplete - needs multiplexing of request/response protocol
    * and locking between driver and kernel-only xenstore_read/write interface.

    Should we set up queues for request/response? The locking seems simple
    enough, is there something I'm missing?

    ron

  2. Re: [9fans] extending xen to allow driver development in Plan 9

    > Should we set up queues for request/response?

    Not necessarily a queue because I don't think there's a guarantee that
    responses come in the same order as requests. Maybe a hash table of
    requests awaiting responses?

    > The locking seems simple
    > enough, is there something I'm missing?


    No, it should be simple. I just hadn't got around to it yet.
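
    A minimal sketch of that hash-table idea, in Plan 9 kernel style.
    This is not the existing Plan 9 xenstore code: XsReq, xsrpc, xsreply
    and xssend are made-up names, it assumes replies are drained from
    the ring by a kernel proc rather than at interrupt level, and the
    real thing would also have to handle watch events and transaction
    ids. One qlock covers both the driver path and the kernel-only
    read/write path, which is the locking the comment asks for.

        typedef struct XsReq XsReq;
        struct XsReq {
            ulong   id;         /* req_id put in the xenstore message header */
            char    *reply;     /* filled in when the response arrives */
            int     done;
            Rendez  r;
            XsReq   *next;      /* hash chain */
        };

        static QLock    xslock;     /* guards the ring and the request table */
        static XsReq    *xsreqs[64];
        static ulong    xsreqid;

        extern void xssend(ulong, char*);   /* hypothetical: puts a request on the ring */

        static int
        replied(void *a)
        {
            return ((XsReq*)a)->done;
        }

        /* send a request and block until its (possibly out-of-order) reply */
        char*
        xsrpc(char *req)
        {
            XsReq r;

            memset(&r, 0, sizeof r);
            qlock(&xslock);
            r.id = ++xsreqid;
            r.next = xsreqs[r.id % nelem(xsreqs)];
            xsreqs[r.id % nelem(xsreqs)] = &r;
            xssend(r.id, req);
            qunlock(&xslock);

            sleep(&r.r, replied, &r);
            return r.reply;
        }

        /* called by the kproc that drains the response ring */
        void
        xsreply(ulong id, char *reply)
        {
            XsReq *p, **l;

            qlock(&xslock);
            for(l = &xsreqs[id % nelem(xsreqs)]; (p = *l) != nil; l = &p->next)
                if(p->id == id){
                    *l = p->next;       /* unlink before waking the requester */
                    p->reply = reply;
                    p->done = 1;
                    wakeup(&p->r);
                    break;
                }
            qunlock(&xslock);
        }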


  3. Again: (self)hosted Plan9? Was: [9fans] extending xen to allow driver development in Plan 9

    "ron minnich" writes:
    ....
    > We have an ok xen environment going. Why are we doing this? Per a
    > certain person at xyz.com, we are looking at giving people a usable
    > xen-based plan 9 environment, and at the same time letting them do
    > driver work from Plan 9 by "poking holes" in Xen to let Plan 9 at the
    > real hardware. Xen supports this, we think, although we have not got
    > it going yet ...
    >
    > I already like the situation thus far, as Plan 9 under Xen is a ton
    > faster than Plan 9 under qemu. You have to see it to believe it; if
    > anything, the Xen advantage is better than it used to be. I was
    > surprised.

    ....

    I have a similar situation:

    - Xen helps me run several Plan9's on the same hardware

    - I can give my users a Plan9 environment without taking away the OS
    they are used to working with

    - Xen is much faster than Qemu, ok for production use

    - as Richard Miller said: ".. the whole point of xen is that physical
    devices become Somebody Else's Problem."

    However I think that the same goals could be achieved in a more
    natural, faster, more stable and more generally applicable way if
    Plan9 could be run (self)hosted.

    The Hurd can be run as a user-space process inside The Hurd. This is
    made feasible by its multi-server nature: the kernel does almost no
    I/O. Thus The Hurd can allegedly be debugged and developed more
    easily.

    I guess the Plan9 kernel could be separated into two layers: the
    upper one just doing "high-level" and 9P-protocol stuff, and a lower
    one providing the #-channel interfaces to the upper layer and doing
    the I/O.

    The lower layer could consist either of drivers for the real
    hardware, or of a hosting layer which mediates between the block
    device and memory management operations of a given hosting operating
    system and the #-channel interface to the upper layer.
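
    Just to make that concrete, here is a rough sketch of what the
    hosting flavour of the lower layer might look like, written against
    the kernel's existing Dev/devtab conventions. It is not real code:
    the device name, the 'H' device character and the
    hostpread/hostpwrite hooks are all made up; the hooks would end up
    as something like pread/pwrite on a disk-image file of the hosting
    OS. The point is only that the upper layer keeps seeing an ordinary
    #-device.

        #include "u.h"
        #include "../port/lib.h"
        #include "mem.h"
        #include "dat.h"
        #include "fns.h"
        #include "../port/error.h"

        enum { Qdir, Qdata };

        static Dirtab hostblktab[] = {
            ".",        {Qdir, 0, QTDIR},   0,  DMDIR|0555,
            "hostblk",  {Qdata},            0,  0666,
        };

        /* imaginary hooks into the hosting OS */
        extern long hostpread(void*, long, vlong);
        extern long hostpwrite(void*, long, vlong);

        static Chan*
        hostblkattach(char *spec)
        {
            return devattach('H', spec);
        }

        static Walkqid*
        hostblkwalk(Chan *c, Chan *nc, char **name, int nname)
        {
            return devwalk(c, nc, name, nname, hostblktab, nelem(hostblktab), devgen);
        }

        static int
        hostblkstat(Chan *c, uchar *dp, int n)
        {
            return devstat(c, dp, n, hostblktab, nelem(hostblktab), devgen);
        }

        static Chan*
        hostblkopen(Chan *c, int omode)
        {
            return devopen(c, omode, hostblktab, nelem(hostblktab), devgen);
        }

        static void
        hostblkclose(Chan*)
        {
        }

        static long
        hostblkread(Chan *c, void *a, long n, vlong off)
        {
            if(c->qid.path == Qdir)
                return devdirread(c, a, n, hostblktab, nelem(hostblktab), devgen);
            return hostpread(a, n, off);    /* hand the I/O to the host */
        }

        static long
        hostblkwrite(Chan *c, void *a, long n, vlong off)
        {
            if(c->qid.path == Qdir)
                error(Eperm);
            return hostpwrite(a, n, off);
        }

        Dev hostblkdevtab = {
            'H',
            "hostblk",

            devreset,
            devinit,
            devshutdown,
            hostblkattach,
            hostblkwalk,
            hostblkstat,
            hostblkopen,
            devcreate,
            hostblkclose,
            hostblkread,
            devbread,
            hostblkwrite,
            devbwrite,
            devremove,
            devwstat,
        };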

    Maybe this approach could also clean up the duplication of code
    between 9load and the kernel that I have read about in some Plan9
    document.

    Hardware driver development could also be eased by this approach,
    since it is probably easier to pass certain hardware through to a
    Linux process (the hosted Plan9 instance) than to go through the
    complexities of Xen hypervisor / dom0 Linux / domU Plan9 interaction.

    And yes, I know that this approach would probably increase
    complexity and reduce performance compared to the current Plan9
    kernel.

    I initially started to browse the Plan9 kernel source code, Linux
    kernel docs, x86 assembler manuals etc., but I quickly realized that
    my spare time will never be sufficient to work out everything needed
    to get anywhere with such a project. However, maybe there are some
    folks out there who like the idea and have the knowledge to do it.

    Best Regards,

    Jorge-León
