Re: how to get rid of XFree in the longterm (just a thought) - Xwindows

Thread: Re: how to get rid of XFree in the longterm (just a thought)

  1. Re: how to get rid of XFree in the longterm (just a thought)

    Oops! Ian Bell was seen spray-painting on a wall:
    > Its primary performance limiter is the creation of a GUI via the
    > x-protocol which is predicated upon the need to provide a network
    > transparent graphics system. Remove the need for network
    > transparency and significant performance improvements for desktop
    > apps are possible. Work is already going on to do just this.


    Have you got any actual benchmarks that demonstrate that the
    communications layer _actually_ is the bottleneck?

    Or is this simply the usual nonsense about X being bloated because of
    "network support?"

    Remember: Most of us are using X via UDS, which is NOT "network
    transparent," and which eliminates use of the network stack.

    If X is slow even when using UDS, then the problem _can't_ simply be
    that of the cost of network transparency.
    --
    output = reverse("gro.mca" "@" "enworbbc")
    http://www.ntlug.org/~cbbrowne/xbloat.html
    "A cynic is a man who knows the price of everything, and the value of
    nothing." -- Oscar Wilde

  2. Re: how to get rid of XFree in the longterm (just a thought)

    : Its primary performance limiter is the creation of a GUI via the
    : x-protocol which is predicated upon the need to provide a network
    : transparent graphics system. Remove the need for network transparency
    : and significant performance improvements for desktop apps are
    : possible. Work is already going on to do just this.

    The primary performance limiter of school faculty is the use of
    manual labor for student instruction, which is predicated upon the
    need for a low student/teacher ratio. Remove the need for teachers
    to interact with students and significant increases in paperwork
    volume are possible. Work is already going on to do just this.

    In short, if your X replacement ain't got network transparency,
    then it ain't got squat.


    Wayne Throop throopw@sheol.org http://sheol.org/throopw

  3. Re: how to get rid of XFree in the longterm (just a thought)

    Christopher Browne wrote:
    > Oops! Ian Bell was seen spray-painting on a wall:
    >
    >>Its primary performance limiter is the creation of a GUI via the
    >>x-protocol which is predicated upon the need to provide a network
    >>transparent graphics system. Remove the need for network
    >>transparency and significant performance improvements for desktop
    >>apps are possible. Work is already going on to do just this.

    >
    >
    > Have you got any actual benchmarks that demonstrate that the
    > communications layer _actually_ is the bottleneck?
    >
    > Or is this simply the usual nonsense about X being bloated because of
    > "network support?"
    >
    > Remember: Most of us are using X via UDS, which is NOT "network
    > transparent," and which eliminates use of the network stack.


    There is no difference between using Unix sockets and 'normal' network sockets...
    In both cases you need to use a protocol, so I don't agree with you that
    there is no 'network transparency' with UDS.

    >
    > If X is slow even when using UDS, then the problem _can't_ simply be
    > that of the cost of network transparency.



    --
    wereHamster a.k.a. Tom Carnecky Emmen, Switzerland

    (GC 3.1) GIT d+ s+: a--- C++ UL++ P L++ E- W++ N++ !o !K w ?O ?M
    ?V PS PE ?Y PGP t ?5 X R- tv b+ ?DI D+ G++ e-- h! !r !y+


  4. Re: how to get rid of XFree in the longterm (just a thought)

    tom wrote:
    > Christopher Browne wrote:
    >
    >> Oops! Ian Bell was seen spray-painting on a wall:
    >>
    >>> Its primary performance limiter is the creation of a GUI via the
    >>> x-protocol which is predicated upon the need to provide a network
    >>> transparent graphics system. Remove the need for network
    >>> transparency and significant performance improvements for desktop
    >>> apps are possible. Work is already going on to do just this.

    >>
    >>
    >>
    >> Have you got any actual benchmarks that demonstrate that the
    >> communications layer _actually_ is the bottleneck?
    >>
    >> Or is this simply the usual nonsense about X being bloated because of
    >> "network support?"
    >>
    >> Remember: Most of us are using X via UDS, which is NOT "network
    >> transparent," and which eliminates use of the network stack.

    >
    >
    > There is no difference between using Unix sockets and 'normal' network sockets...
    > In both cases you need to use a protocol, so I don't agree with you that
    > there is no 'network transparency' with UDS.


    Agreed that both Unix sockets and TCP/IP sockets share a number of 'layers' of
    API (specifically, some of the support, and most of the public API that goes
    into the "sockets" API).

    However, I don't agree that there is a "protocol" with Unix sockets. They
    typically are implemented as simple, local-memory buffers that are written by
    one socket call and read by another. OTOH, TCP/IP sockets /do/ have a
    multi-layered "protocol" that involves prepending control data, moving the data
    between various buffers, moving the data 'over the wire', and managing the data
    integrity (including fragments, acknowledgments, etc.).

    There is no 'protocol' per se in Unix sockets, and there is much less overhead
    with them as compared to TCP/IP sockets.



    --
    Lew Pitcher

    Master Codewright and JOAT-in-training
    Registered Linux User #112576 (http://counter.li.org/)
    Slackware - Because I know what I'm doing.


  5. Re: how to get rid of XFree in the longterm (just a thought)

    In an attempt to throw the authorities off his trail, Lew Pitcher transmitted:
    > There is no 'protocol' per se in Unix sockets, and there is much less
    > overhead with them as compared to TCP/IP sockets.


    No, but running X involves throwing protocol requests between clients
    and servers, irrespective of the existence of a network.

    And the fact that I pluralized "clients" points to its necessity.

    The magical "holy grail" of the X alternative has long involved
    ignoring the need for a multiplexing layer that will accept requests
    from potentially numerous client applications. The assumption seems
    to be that that layer is costless. Nobody really knows for certain
    since no comprehensive X alternative has been implemented...
    --
    output = ("cbbrowne" "@" "acm.org")
    http://www3.sympatico.ca/cbbrowne/oses.html
    thorfinn@netizen.com.au
    Millihelen, adj:
    The amount of beauty required to launch one ship.

  6. Re: how to get rid of XFree in the longterm (just a thought)

    : Christopher Browne
    : The magical "holy grail" of the X alternative has long involved
    : ignoring the need for a multiplexing layer that will accept requests
    : from potentially numerous client applications. The assumption seems
    : to be that that layer is costless.

    Wrong. The assumption is that the cost is worth it.
    And that assumption is correct.


    Wayne Throop throopw@sheol.org http://sheol.org/throopw

  7. Re: how to get rid of XFree in the longterm (just a thought)

    Lew Pitcher wrote:
    > tom wrote:
    >
    >> Christopher Browne wrote:
    >>
    >> There is no difference between using Unix sockets and 'normal' network
    >> sockets...
    >> In both cases you need to use a protocol, so I don't agree with you
    >> that there is no 'network transparency' with UDS.

    >
    >
    > Agreed that both Unix sockets and TCP/IP sockets share a number of
    > 'layers' of API (specifically, some of the support, and most of the
    > public API that goes into the "sockets" API).
    >
    > However, I don't agree that there is a "protocol" with Unix Sockets.
    > They typically are implemented as simple, local-memory buffers that are
    > written by one socket call, and read by another. OTOH, TCP/IP sockets
    > /do/ have a multi-layered "protocol" that involves prepending control
    > data, moving the data between various buffers, moving the data 'over the
    > wire', and managing the data integrity (including fragments,
    > acknowledgments, etc.).
    >
    > There is no 'protocol' per se in Unix sockets, and there is much less
    > overhead with them as compared to TCP/IP sockets.
    >
    >


    Misunderstanding... I thought about the X protocol and not TCP or IP.
    The applications using the sockets have to define some rules/packets for how
    they communicate with each other.. I call this a protocol.. the X
    protocol in this case... maybe I'm wrong here.
    If you use any IPC, you have to have a sort of a 'protocol', even with
    message queues or shared memory..

    --
    wereHamster a.k.a. Tom Carnecky Emmen, Switzerland

    (GC 3.1) GIT d+ s+: a--- C++ UL++ P L++ E- W++ N++ !o !K w ?O ?M
    ?V PS PE ?Y PGP t ?5 X R- tv b+ ?DI D+ G++ e-- h! !r !y+

  8. Re: how to get rid of XFree in the longterm (just a thought)

    On 2004-02-26, tom wrote:

    > Misunderstanding... I thought about the X protocol and not TCP
    > or IP. The applications using the sockets have to define some
    > rules/packets for how they communicate with each other.. I
    > call this a protocol.. the X protocol in this case... maybe
    > I'm wrong here. If you use any IPC, you have to have a sort of
    > a 'protocol', even with message queues or shared memory..


    And how do you propose eliminating such a protocol? The client
    can't just 'wish' something onto the display. The client has
    to prepare the request in a pre-defined format and submit it in
    a pre-defined manner. The latter is a protocol, isn't it?

    --
    Grant Edwards grante Yow! I'LL get it!! It's
    at probably a FEW of my
    visi.com ITALIAN GIRL-FRIENDS!!

  9. Re: how to get rid of XFree in the longterm (just a thought)

    : tom
    : Misunderstanding... I thought about the X protocol and not TCP or IP.
    : The applications using the sockets have to define some rules/packets for how
    : they communicate with each other.. I call this a protocol.. the X
    : protocol in this case... maybe I'm wrong here.
    : If you use any IPC, you have to have a sort of a 'protocol', even with
    : message queues or shared memory..

    Right. The thing is, X already uses shared memory when it can, so that
    lots of the information is not crammed into a stream and then unpacked
    to do everything. When client and server are on the same machine,
    lower-cost communications are used, and when they are not, network and
    packing/unpacking overheads are paid in order to get the ability to run
    apps heterogeneously across a net, AKA "network transparency".

    Arguably, the protocols used to get network transparency can be tuned
    to reduce the number of context swaps, or operations that wait for an
    acknowledgement, and so on. But X already does the simple accelerations,
    and performs better than most folks seem to give it credit for.

    But, if a tuned up keyboard/display/mouse services protocol or API lacks
    network transparency, it's a non-starter. I use network transparency
    *extensively*, and it would be a royal pain to do without it. The
    "computer" I'm using is actually a stack of 3 computers, 2 displays, and
    one keyboard'n mouse, networked together. If I couldn't integrate it
    all together, mix and match where apps run and where they display, and
    control the whole thing with one keyboard'n mouse, well, that would be a
    Bad Thing. That's not even including the fact that I require intermixed
    GUI access to processes running on multiple computers from work. And
    I'm not alone; people who have been exploiting network transparency all
    along are addicted to it, and more and more people are starting to do
    it, and so starting a project to intentionally take it away is
    ill-conceived from the git-go.


    Wayne Throop throopw@sheol.org http://sheol.org/throopw

  10. Re: how to get rid of XFree in the longterm (just a thought)

    Wayne Throop wrote:
    > : tom
    > : Misunderstanding... I thought about the X protocol and not TCP or IP.
    > : The applications using the sockets have to define some rules/packets for how
    > : they communicate with each other.. I call this a protocol.. the X
    > : protocol in this case... maybe I'm wrong here.
    > : If you use any IPC, you have to have a sort of a 'protocol', even with
    > : message queues or shared memory..
    >
    > Right. The thing is, X already uses shared memory when it can, so that
    > lots of the information is not crammed into a stream and then unpacked
    > to do everything. When client and server are on the same machine,
    > lower-cost communications are used, and when they are not, network and
    > packing/unpacking overheads are paid in order to get the ability to run
    > apps heterogeneously across a net, AKA "network transparency".


    Even more, if the app and the X server are on the same machine, the app
    can directly access the hardware using DRI/DRM.

    >
    > Arguably, the protocols used to get network transparency can be tuned
    > to reduce the number of context swaps, or operations that wait for an
    > acknowledgement, and so on. But X already does the simple accelerations,
    > and performs better than most folks seem to give it credit for.
    >
    > But, if a tuned up keyboard/display/mouse services protocol or API lacks
    > network transparency, it's a non-starter. I use network transparency
    > *extensively*, and it would be a royal pain to do without it. The
    > "computer" I'm using is actually a stack of 3 computers, 2 displays, and
    > one keyboard'n mouse, networked together.


    That surely is an exception.. not everyone has three computers and two
    screens at home...

    > If I couldn't integrate it
    > all together, mix and match where apps run and where they display, and
    > control the whole thing with one keyboard'n mouse, well, that would be a
    > Bad Thing. That's not even including the fact that I require intermixed
    > GUI access to processes running on multiple computers from work. And
    > I'm not alone; people who have been exploiting network transparency all
    > along are addicted to it, and more and more people are starting to do
    > it, and so starting a project to intentionally take it away is
    > ill-conceived from the git-go.
    >


    I use the network transparency, too, even from a Windows machine
    (XWin32), and it's really a cool thing...
    I've never said that I would remove the network transparency from an
    X server..
    I would just design it in another way.. hard to explain here.. I'm
    working on a website about how I think an X server should be designed.


    --
    wereHamster a.k.a. Tom Carnecky Emmen, Switzerland

    (GC 3.1) GIT d+ s+: a--- C++ UL++ P L++ E- W++ N++ !o !K w ?O ?M
    ?V PS PE ?Y PGP t ?5 X R- tv b+ ?DI D+ G++ e-- h! !r !y+

  11. Re: how to get rid of XFree in the longterm (just a thought)

    tom wrote:
    > I would just design it in another way.. hard to explain here.. I'm
    > working on a website about how I think an X server should be designed.
    >


    Here it is: http://www.dbservice.com/tom/system.html


    --
    wereHamster a.k.a. Tom Carnecky Emmen, Switzerland

    (GC 3.1) GIT d+ s+: a--- C++ UL++ P L++ E- W++ N++ !o !K w ?O ?M
    ?V PS PE ?Y PGP t ?5 X R- tv b+ ?DI D+ G++ e-- h! !r !y+

  12. Re: how to get rid of XFree in the longterm (just a thought)

    :: But, if a tuned up keyboard/display/mouse services protocol or API
    :: lacks network transparency, it's a non-starter. I use network
    :: transparency *extensively*, and it would be a royal pain to do
    :: without it. The "computer" I'm using is actually a stack of 3
    :: computers, 2 displays, and one keyboard'n mouse, networked together.

    : tom
    : That surely is an exception.. not everyone has three computers and
    : two screens at home...

    Over time, I think it will become the rule rather than the exception.
    As devices get smarter, and they all want to display information multiple
    places at once, it'll get more crucial. Consider a media server and
    multiple distributed consoles. Sure, you could make the server handle
    only the streaming protocols and run all apps locally on smarter
    terminals. It doesn't *have* to be the display protocol that goes over
    the net. But you'll still want to be able to talk to some app at one
    location, and then pick up where you left off even after you wander across
    the house to another console; or start doing something on a pocket
    console, and decide that you'd like to move what you are doing to a wall
    screen; or run an app which only runs on one box (requires special
    hardware, or extra memory resources, etc etc) but you don't happen to be
    at that box's main screen; or you have two people wanting to use such
    resources at once; or any of tens of other useful applications of
    consuming memory and cpu resources here, and displaying there.

    I think such uses would already be fairly commonplace in many homes
    with multiple computers (which are becoming more and more common)
    if it weren't for the fact that people have gotten used to it
    being impossible.

    IIRC, even Microsoft fielded a product to allow a second special-purpose
    display console to borrow resources from another machine; their solution
    was clunky and unnecessarily expensive, but the thing is, people are
    starting to need to do such things. I'm not that far ahead of
    the trend. Some, but not that far.

    : http://www.dbservice.com/tom/system.html

    Some promising concepts perhaps, but I think the issue of
    separation of concerns isn't made clear, and is a potential infelicity.
    For example, application launching and session control are part
    of the job of network transparent distribution of gui apps.
    But fusing them together into a single "system", a single API,
    or parts of a single API, is IMO a mistake. There are more reasons
    one might want to start remote apps than just GUI apps. So, a model
    where distribution is handled one way (eg ssh), sessions another
    way (eg, xdm), and actual graphics ops yet another (X protocol)
    seems to me to be a feature, not a bug. The point being, using
    one doesn't force you to use, and pay the overheads for, the others.

    Not that the way it's done now is unflawed or un-improvable. For
    one example, dealing with frame buffers (ie, pixels, ala VNC), and
    sound and keyboards and mice isn't really done well or modularly
    enough IMO. But one thing that isn't desirable is a monolithic API
    that tries to do everything there is to be done. A suite of
    possibly-cooperating but fundamentally independent protocols
    is, IMO, superior.




    Wayne Throop throopw@sheol.org http://sheol.org/throopw

  13. Re: how to get rid of XFree in the longterm (just a thought)

    On 2004-02-26, tom wrote:
    > tom wrote:
    >> I would just design it in another way.. hard to explain here.. I'm
    >> working on a website about how I think an X server should be designed.
    >>

    >
    > Here it is: http://www.dbservice.com/tom/system.html
    >



    You might take some ideas from NeWS, a dead Sun product that I
    (briefly) described at
    http://starynkevitch.net/Basile/NeWS..._oct_1993.html


    You might also look into http://fresco.org/

    And http://tunes.org might give you additional ideas.

    --
    Basile STARYNKEVITCH http://starynkevitch.net/Basile/
    email: basilestarynkevitchnet
    aliases: basiletunesorg = bstarynknerimnet
    8, rue de la Faïencerie, 92340 Bourg La Reine, France

  14. Re: how to get rid of XFree in the longterm (just a thought)

    Wayne Throop wrote:
    >
    > Over time, I think it will become the rule rather than the exception.
    > As devices get smarter, and they all want to display information multiple
    > places at once, it'll get more crucial. Consider a media server and
    > multiple distributed consoles. Sure, you could make the server handle
    > only the streaming protocols and run all apps locally on smarter
    > terminals. It doesn't *have* to be the display protocol that goes over
    > the net. But you'll still want to be able to talk to some app at one
    > location, and then pick up where you left off even after you wander across
    > the house to another console; or start doing something on a pocket
    > console, and decide that you'd like to move what you are doing to a wall
    > screen; or run an app which only runs on one box (requires special
    > hardware, or extra memory resources, etc etc) but you don't happen to be
    > at that box's main screen; or you have two people wanting to use such
    > resources at once; or any of tens of other useful applications of
    > consuming memory and cpu resources here, and displaying there.
    >
    > I think such uses would already be fairly commonplace in many homes
    > with multiple computers (which are becoming more and more common)
    > if it weren't for the fact that people have gotten used to it
    > being impossible.
    >
    > IIRC, even Microsoft fielded a product to allow a second special-purpose
    > display console to borrow resources from another machine; their solution
    > was clunky and unnecessarily expensive, but the thing is, people are
    > starting to need to do such things. I'm not that far ahead of
    > the trend. Some, but not that far.
    >
    > : http://www.dbservice.com/tom/system.html
    >
    > Some promising concepts perhaps, but I think the issue of
    > separation of concerns isn't made clear, and is a potential infelicity.
    > For example, application launching and session control are part
    > of the job of network transparent distribution of gui apps.
    > But fusing them together into a single "system", a single API,
    > or parts of a single API, is IMO a mistake. There are more reasons
    > one might want to start remote apps than just GUI apps. So, a model
    > where distribution is handled one way (eg ssh), sessions another
    > way (eg, xdm), and actual graphics ops yet another (X protocol)
    > seems to me to be a feature, not a bug. The point being, using
    > one doesn't force you to use, and pay the overheads for, the others.
    >
    > Not that the way it's done now is unflawed or un-improvable. For
    > one example, dealing with frame buffers (ie, pixels, ala VNC), and
    > sound and keyboards and mice isn't really done well or modularly
    > enough IMO. But one thing that isn't desirable is a monolithic API
    > that tries to do everything there is to be done. A suite of
    > possibly-cooperating but fundamentally independent protocols
    > is, IMO, superior.
    >


    I haven't written it on the page, but I thought of these four
    components as independent, stand-alone applications.
    And the protocols don't have to be the same. For example the app server
    could have a simple protocol which could be extended by modules, e.g. to
    add some sort of security to it (ssh or others). I didn't try to make a
    monolithic API, I didn't even define an API; this page only reflects how
    I think a system of GUI applications and display servers should look.
    I agree with you that making things more modular, or even splitting
    things off totally, could help. Making the four parts of the system
    independent could help to bring some competition to the market.


    --
    wereHamster a.k.a. Tom Carnecky Emmen, Switzerland

    (GC 3.1) GIT d+ s+: a--- C++ UL++ P L++ E- W++ N++ !o !K w ?O ?M
    ?V PS PE ?Y PGP t ?5 X R- tv b+ ?DI D+ G++ e-- h! !r !y+

  15. Re: how to get rid of XFree in the longterm (just a thought)

    Clinging to sanity, throopw@sheol.org (Wayne Throop) mumbled into her beard:
    > Arguably, the protocols used to get network transparency can be tuned
    > to reduce the number of context swaps, or operations that wait for an
    > acknowledgement, and so on. But X already does the simple accelerations,
    > and performs better than most folks seem to give it credit for.


    That's not merely "arguably;" I heard the design of X given as an
    express example of an attempt to minimize the number of context swaps.
    --
    http://cbbrowne.com/info/x.html
    "Windows has detected that a gnat has farted near your computer.
    Press any key to reboot." --- Simon Oke in the scary devil monastery

  16. Re: how to get rid of XFree in the longterm (just a thought)

    In comp.windows.x, Grant Edwards wrote
    on 26 Feb 2004 15:14:35 GMT
    <403e0d5b$0$41292$a1866201@newsreader.visi.com>:
    > On 2004-02-26, tom wrote:
    >
    >> Misunderstanding... I thought about the X protocol and not TCP
    >> or IP. The applications using the sockets have to define some
    >> rules/packets for how they communicate with each other.. I
    >> call this a protocol.. the X protocol in this case... maybe
    >> I'm wrong here. If you use any IPC, you have to have a sort of
    >> a 'protocol', even with message queues or shared memory..

    >
    > And how do you propose eliminating such a protocol? The client
    > can't just 'wish' something onto the display. The client has
    > to prepare the request in a pre-defined format and submit it in
    > a pre-defined manner. The latter is a protocol, isn't it?
    >


    I'll admit to wondering about this protocol myself. The protocol
    inherently serializes everything, which means that the X server
    can only draw to one window at a time. (It tends to buffer things
    internally, as does the X client, so as to optimize the drawing,
    though; it's more efficient, for example, to call XDrawLines() once
    than to call XDrawLine() many times. Of course that's not the best
    of examples, since it turns out one is converted into the other.)

    Win32, for its part, has an application procedure interface
    (so does X: Xlib) which, among other things, allows for the
    simultaneous drawing into two or more (locked) rectangles
    on the screen, if the system has more than one processor.
    At least, such is my understanding. Whether this increases
    performance at all is not clear to me.

    The vast majority of X developers code to the API, so
    it doesn't really matter whether X has a true protocol,
    or not, in a way. But X's protocol makes for some truly
    tasty hacks; X + ssh for example makes secure GUIs over
    high-speed Internet simple. (Even on low-speed it's
    not too difficult, although it depends on which tool
    the user invokes.) One can also sniff the protocol by
    routing everything through a proxy; some diagnostic tools
    are available that do exactly that.

    Conversion of Win32's API to a protocol is possible (VNC is
    probably the best known) but it's not quite as clean.

    --
    #191, ewill3@earthlink.net
    It's still legal to go .sigless.

  17. Re: how to get rid of XFree in the longterm (just a thought)

    In article , The Ghost In The Machine wrote:

    >> And how do you propose eliminating such a protocol? The client
    >> can't just 'wish' something onto the display. The client has
    >> to prepare the request in a pre-defined format and submit it in
    >> a pre-defined manner. The latter is a protocol, isn't it?

    >
    > I'll admit to wondering about this protocol myself. The protocol
    > inherently serializes everything,


    How so?

    I see no reason why the protocol would prevent you from writing
    a multi-threaded server.

    > which means that the X server can only draw to one window at a
    > time.


    Most of us only have one CPU, so no matter how you design
    things, the server can only write to one window at a time.

    > Win32, for its part, has an application procedure interface
    > (so does X: Xlib) which, among other things, allows for the
    > simultaneous drawing into two or more (locked) rectangles
    > on the screen, if the system has more than one processor.
    > At least, such is my understanding. Whether this increases
    > performance at all is not clear to me.


    I agree it would be easier to do a multithreading server with a
    procedural or system-call sort of interface.

    --
    Grant Edwards grante Yow! I'm wearing PAMPERS!!
    at
    visi.com

  18. Re: how to get rid of XFree in the longterm (just a thought)

    Wayne Throop wrote:

    > : http://www.dbservice.com/tom/system.html
    >
    > Some promising concepts perhaps, but I think the issue of
    > separation of concerns isn't made clear, and is a potential infelicity.
    > For example, application launching and session control are part
    > of the job of network transparent distribution of gui apps.
    > But fusing them together into a single "system", a single API,
    > or parts of a single API, is IMO a mistake. There are more reasons
    > one might want to start remote apps than just GUI apps.


    What other reasons?
    I use ssh to start remote applications, but this has nothing to do
    with this 'system'; it only handles GUI apps. You can still start xterm
    or any terminal, start ssh, log in to a remote machine and then
    start an application.

    One thing about sessions: There are no sessions in my 'system', there
    are only applications. The user shouldn't care about where the apps are
    running; he should only care about _which_ applications are running
    under his account (and where on the network). One could add session
    management: you just save the names and locations of all running
    applications and wake up (or start) them at the next login.

    > So, a model
    > where distribution is handled one way (eg ssh), sessions another
    > way (eg, xdm), and actual graphics ops yet another (X protocol)
    > seems to me to be a feature, not a bug. The point being, using
    > one doesn't force you to use, and pay the overheads for, the others.



    --
    wereHamster a.k.a. Tom Carnecky Emmen, Switzerland

    (GC 3.1) GIT d+ s+: a--- C++ UL++ P L++ E- W++ N++ !o !K w ?O ?M
    ?V PS PE ?Y PGP t ?5 X R- tv b+ ?DI D+ G++ e-- h! !r !y+

  19. Re: how to get rid of XFree in the longterm (just a thought)

    In comp.windows.x, Grant Edwards wrote
    on 27 Feb 2004 05:13:03 GMT
    <403ed1df$0$41283$a1866201@newsreader.visi.com>:
    > In article , The Ghost In The Machine wrote:
    >
    >>> And how do you propose eliminating such a protocol? The client
    >>> can't just 'wish' something onto the display. The client has
    >>> to prepare the request in a pre-defined format and submit it in
    >>> a pre-defined manner. The latter is a protocol, isn't it?

    >>
    >> I'll admit to wondering about this protocol myself. The protocol
    >> inherently serializes everything,

    >
    > How so?
    >
    > I see no reason why the protocol would prevent you from writing
    > a multi-threaded server.


    Hm...you may be right; I wasn't thinking of multiple clients for
    some bizarre reason.

    I'd have to look, but in light of your comment below making X
    multithreaded appears to merely complicate the design to no purpose.

    >
    >> which means that the X server can only draw to one window at a
    >> time.

    >
    > Most of us only have one CPU, so no matter how you design
    > things, the server can only write to one window at a time.


    True.

    >
    >> Win32, for its part, has an application procedure interface
    >> (so does X: Xlib) which, among other things, allows for the
    >> simultaneous drawing into two or more (locked) rectangles
    >> on the screen, if the system has more than one processor.
    >> At least, such is my understanding. Whether this increases
    >> performance at all is not clear to me.

    >
    > I agree it would be easier to do a multithreading server with a
    > procedural or system-call sort of interface.
    >


    If one spins off a thread to handle a DisplayConnection, one has a
    multithreaded server. How multithreaded may be of some debate.

    --
    #191, ewill3@earthlink.net
    It's still legal to go .sigless.

  20. Re: how to get rid of XFree in the longterm (just a thought)

    On 2004-02-27, The Ghost In The Machine wrote:

    > Hm...you may be right; I wasn't thinking of multiple clients
    > for some bizarre reason.
    >
    > I'd have to look, but in light of your comment below making X
    > multithreaded appears to merely complicate the design to no
    > purpose.


    And with multiple CPUs, Amdahl's Law can bite you pretty badly --
    even after you get all the synchronization bugs fixed.

    >> I agree it would be easier to do a multithreading server with a
    >> procedural or system-call sort of interface.

    >
    > If one spins off a thread to handle a DisplayConnection, one has a
    > multithreaded server. How multithreaded may be of some debate.


    IMHO, with today's processors (both central and video), X11 is
    so fast that worrying about speeding it up is a waste of effort.

    Unless you're doing gaming or 3D animation stuff, I suppose.

    For us grunts on the ground writing code, writing documents,
    and doing e-mail, worrying about X11 speedup is purely an
    academic exercise.

    --
    Grant Edwards grante Yow! I feel like I am
    at sharing a "CORN-DOG" with
    visi.com NIKITA KHRUSCHEV...
