According to Richard Dale:
> I'd like to see KDE integrated into an immersive 3D environment, now we have
> large flat screen displays with suitable 3D hardware acceleration. The only
> project I know, apart from multiplayer online games, trying to do something
> similar is Croquet:
> But this is a research project, and so not something that can be very easily
> implemented before about KDE 5.0.
> Your proposal sounds like a 'single user' version of Croquet, and so you
> wouldn't need the 'tea-time' and 'tea-party' protocols to meet and coordinate
> with other users. But you would need to be able to render windows onto 3D
> surfaces which I think near future versions of X-Window will be able to do.

I tried OpenCroquet before, and yes, what I proposed is a kind of 'single
user' version of Croquet. I really like the idea behind it, but, as you said,
it's a research project and could not easily be implemented in KDE for some
time. The social part of it is interesting, but I find it a little too
complex for a first version of an immersive OS.

I would like a kind of KDE-Croquet that is more oriented toward the user's
tasks, and where integrating software into it is as simple as possible. The
possibility of moving great distances inside a 3D world, as in Croquet, is
interesting, but I don't think I would implement it. The user will have
enough space around him to manage the basic tasks he performs. Maybe I could
add this kind of feature later, since it could be useful for some specialized
tasks (probably related to visualization, computer graphics, ...).

After re-reading my post, I realized I forgot to include some important
information about what I had in mind. A VR application based on VR Juggler
runs inside an OpenGL window. The application itself can be launched in
stereoscopic mode on the same display (and monitor) on which it was started,
or on another display (for example, projected on the screen of the
ImmersaDesk). What I was thinking at first is that I could modify all the
Qt/KDE GUI classes/functions so that the relevant calls (from applications,
the main KDE UI, ...) would be redirected to my VR library, and all the UIs
would be drawn in 3D inside the OpenGL window. That means every UI function
from the Qt API and KDE would be reimplemented to display the UI in a 3D
OpenGL form, and all the mouse/keyboard event handling would be reimplemented
to work in 3D with the gloves or the wand.

Re-reading it now, I realize that it's probably not feasible and that it
would be a really hard job, but I'm still interested in hearing what you
think.

I think a better idea would be to first develop an OpenGL GUI API that could
be used to integrate 3D UIs inside 3D worlds. Maybe I could then create an
interface to it modeled on the design of the Qt API, so it would be easier
for me and for other developers to integrate software that uses Qt into an
OpenGL application. After that, maybe I could create a program which runs on
KDE and can load some dedicated software.

Another option is to wait for future versions of X-Window that will be able
to render windows onto 3D surfaces. I haven't read about it yet, but I guess
the drawback is that it will be less powerful than using OpenGL directly? It
could still be enough for the intended tasks.

Hmmm... that was a long post with a lot of questions/doubts. Sorry for its
non-proactive form: I'm in reflection mode and asking myself a lot of
questions...

Do some of you have ideas/suggestions about this?
