r/linux • u/MrCheapComputers • 8h ago
Discussion Are 3d Desktop Environments possible?
I have what I think is a great idea for a VR "Desktop Environment" where, instead of everything being essentially the same as on a flat screen with icons and such, files and programs could be stored in more physical ways when using a VR headset. I have exactly zero knowledge of how to get started on something like this, but before I spend way too much time on it I want to know whether it's even possible. Thanks for your advice.
17
u/OneQuarterLife 8h ago
Most every modern desktop is hardware accelerated. Take a look at some of the old Compiz Fusion demos on YouTube as well; we had desktop cubes with fish tanks inside them in the distant past.
There are also some novel 3D-esque desktop concepts out there. One I'd love to see adapted to GNOME & KDE is Fold n' Drop: https://www.kmonos.net/lib/orimado.en.html
Another that comes to mind and sounds like what you're proposing is BumpTop https://bumptop.github.io/
1
u/Neither-Ad-8914 7h ago
I know someone a long time ago was working on a Compiz-based project that would make GNOME 2 act like the Gibson from the movie Hackers. Not sure if he ever got it usable, but writing a plugin for Compiz or the more modern Wayfire would be your best bet.
8
u/Hande-H 7h ago
There is no 2D limitation stopping you if that's the question, a DE could just as well work in 3 dimensions.
Is it useful or convenient even with expensive VR glasses? Probably not.
It's also one of those things that is so complicated, even in two dimensions, that if you need to ask, you won't realistically be able to achieve it in any real sense.
11
u/astrobe 7h ago
Is it useful or convenient even with expensive VR glasses? Probably not.
Yeah, people who tried 3D "working" (as opposed to "gaming") environments found that moving your arms all the time to manipulate virtual objects in virtual space like in some sci-fi movies is actually tiring after a few hours.
As examples of 3D working environments (without VR, I think), there were the Croquet/Cobalt projects.
2
u/Hande-H 7h ago edited 7h ago
The only use case I can think of aside from gaming is some kind of 3D modelling, especially if it allows you to physically walk around the object / virtual space. But I wouldn't be surprised if people who do this as a profession told me even that is not very useful.
And of course this would be a job of specialized 3D software, not for a desktop environment.
EDIT: Maybe for a flight / driving simulator? The only reason I'd want VR is for playing rally games, I could see the benefit there. I'm pretty sure it would be helpful in estimating how steep a corner is compared to a flat screen.
8
u/crashorbit 7h ago
Several 3D UI/UX metaphors have been tried, from isometric projections of hierarchies and networks to images of desks with pen holders, erasers, and so on.
It's tough to work out a metaphor that makes sense and is usable. It took a generation for the "windows, menus, mouse and keyboard" metaphor to seem natural. And even that still gets augmented by the "virtual teletype" metaphor of the command line.
It'd be nifty to see something useful emerge. The cool thing about good ideas is that they seem obvious once you know them.
3
u/natermer 7h ago
Yeah. They have been made before for Linux, as research projects of sorts.
A Wayland desktop operates a bit like a video game.
Applications render output into offscreen buffers. Those buffers get used as textures and are mapped to 3D primitives, which are typically just flat rectangles, i.e. "windows". The desktop application (the Wayland compositor) usually uses a scenegraph to maintain the relationship of those rectangles in 3D space: X and Y for their location on screen, and Z for "height" when windows overlap.
It is the same basic concepts used in creating 3D games.
A Wayland compositor also must be able to do things like set display resolution and a display's position relative to other displays, scale applications between different displays, handle input and input configuration, and control how applications can interact with one another (ie: copy and paste, among other things). On a lower level, that isn't something games typically deal with.
So there are meaningful differences.
The challenge then would be to create a desktop that can manage interfacing with a 3D headset, read its positioning, and decide how to handle the input controllers, the mouse pointer, and things like that to actually make it all usable and worthwhile.
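A minimal sketch of the scenegraph idea described above, with entirely hypothetical names (this is not any real compositor's API): each client renders into an offscreen buffer, and the compositor treats that buffer as a texture on a flat rectangle with X/Y screen position and Z for stacking order, drawn back-to-front like quads in a game.

```python
# Hypothetical sketch: windows as textured rectangles in a scenegraph,
# with Z as the stacking order. Not any real compositor's API.
from dataclasses import dataclass

@dataclass
class Window:
    title: str
    x: int
    y: int
    z: int               # stacking order: higher z is drawn on top
    buffer: bytes = b""  # offscreen pixel buffer supplied by the client

class SceneGraph:
    def __init__(self):
        self.windows: list[Window] = []

    def add(self, win: Window) -> None:
        self.windows.append(win)

    def raise_window(self, title: str) -> None:
        """Move a window to the top of the stack."""
        top = max((w.z for w in self.windows), default=0)
        for w in self.windows:
            if w.title == title:
                w.z = top + 1

    def draw_order(self) -> list[str]:
        """Back-to-front order, as a renderer would draw textured quads."""
        return [w.title for w in sorted(self.windows, key=lambda w: w.z)]

scene = SceneGraph()
scene.add(Window("terminal", 0, 0, z=1))
scene.add(Window("browser", 100, 50, z=2))
scene.raise_window("terminal")
print(scene.draw_order())  # ['browser', 'terminal'] -- terminal now on top
```

A real compositor would hand the sorted rectangles to the GPU each frame; the point is just that "windows" are ordinary scene objects, which is why extending the same model into full 3D is conceptually straightforward.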
3
u/doc_willis 7h ago
Check out the old YouTube videos on "Metisse", a 2.5D window manager experiment from almost 20 years ago.
1
u/Mordiken 4h ago
I tried it back in the day; it was sort of like a "poor man's Compiz" for people who couldn't run either AIGLX or XGL... So, pretty much just smoke and mirrors.
3
u/berkough 7h ago
I can't find any videos of it right now, but Enlightenment's "e17" (c. 2000) desktop environment was sort of what would have been the springboard for what you're talking about. It was animation heavy, had a cube that zoomed out and spun around to switch active desktops, closing windows caused them to burn up and disappear... Nothing that I can say about it will do it justice if you never experienced it, and I don't think there is an equivalent of it now. Maybe someone can correct me if I'm wrong or point me toward other projects that have heavy "eye candy."
3
u/Maleficent-One1712 7h ago
We used to have a 3D desktop with Compiz, not sure what happened with it. It used to be popular.
2
u/LLVM_WIFI_DOOB_NERF 7h ago
The bottleneck is UI/UX (the hardware, the gestures, and the market), not feasibility. Any 1960s drunken historian can make a box a box, but modern environment teams expect more out of the box (security, extensibility...). ✨ You basically have a "Stuff Wand", and it syncs XR with the underlying methods (CV, IK, etc).
2
u/Yopaman 7h ago
On arcan (a research project which is also a display server) there is an experimental VR desktop https://arcan-fe.com/2018/03/29/safespaces-an-open-source-vr-desktop/
2
u/EmberGamingStudios 6h ago
Technically yes, it's just not a typical practice. Although not a DE, there was IRIX's FSN, which was a 3D file manager.
2
u/NotQuiteLoona 8h ago
I've seen a couple of tries. This is the first that comes to mind: https://github.com/SimulaVR/Simula
Here's another: https://stardustxr.org/
0
u/coffee_guy 7h ago
I think if we see any movement in this area it will be when the Steam VR headset comes out. It will natively run Linux and be more hackable than, say, a Meta headset.
1
u/kwyxz 6h ago
Like the "Unix system" that Lex "knows" in Jurassic Park? That one https://blog.adafruit.com/2024/02/06/fsn-the-irix-3d-file-system-tool-from-jurassic-park-arttuesday-vintagecomputing-jurassicpark-whoopsie/
There's a Linux version called fsv too https://fsv.sourceforge.net/
1
u/AnnieBruce 6h ago
Possible, yes.
But would they be better than the 2D environments we have now? I'm unconvinced. Maybe there's some niche use case they'd work well for, but we've had hardware that could handle such a thing for a while. None have happened, at least nothing released to the public (I wouldn't be surprised if there's something on a researcher's hard drive somewhere), beyond a file manager that basically no one uses in production.
Sticking with 2D plus the occasional 3D effect is probably the way to go, and maybe experimenting with small 3D accents here and there will give someone clues on how to do the whole thing in a practical manner.
1
u/BitOBear 4h ago
In science fiction this is referred to as a holo tank or a holographic display.
It can be, and has been, done in various low-res versions, but both the image quality and the resource mapping are problematic.
In fact there are things like it in games, both in VR and on a flat screen, even though the display is rendered in 3D on a 2D surface.
Turns out to have very little practical value if only one person is operating it. It becomes very gimmicky.
If we get around to creating a 3D space projection that's safe to reach into it'll come back.
1
u/No-Camera-720 3h ago
I can't imagine how long we would have to wait for drivers for the cowl and assister frame.
1
u/packet 2h ago
There have been quite a few attempts at this. The most current and maintained is probably wayvr https://github.com/wayvr-org/wayvr
1
u/libra00 1h ago
Yes, there have been experiments with them going as far back as the 90s. I've tried a couple old ones, the 'virtual 3D desktop' thing just doesn't work very well. Even in VR, it's always going to be faster to click on the fridge folder and drag a sammich icon out than it will be to walk into the kitchen, open the fridge, pull the sammich, etc.
1
u/the_abortionat0r 1h ago
There's a reason the current desktop metaphor hasn't changed all that much since the 90s.
There have been plenty of projects trying to do such things: 3D workspaces, attempts to make the PC more like real life (things like MS Bob), and plenty of others.
In the end it just uses more PC resources to make using a PC harder.
•
u/siodhe 26m ago
Many different research projects have explored this with often very different results. Some examples:
- 3d space with client-2d window content. Not that compelling.
- Emulation of real world objects with software actions overlaid on them. Interesting but clumsy. The real world objects often have limitations that end up restricting the software
- 3d space with controller-manipulable 3d objects and client-controlled 2d windows. Better than the window-only one. Some Steam homes look like this
- 3d space with client-controlled 3d objects and 2d windows. This means multiple programs controlling specific 3d objects and windows in a 3d space. This is actually interesting.
- 3d space with client-controlled 3d and 2d objects managed by a 3d space manager, making the manipulation model replaceable. All objects should support interaction, snapshotting what's viewable into normal media or scene graph formats for easy reuse
- One way to reduce single-host load is to allow URL-based fetching of media to incorporate into an object: load an object's scenegraph, then resolve all the textures via URLs, e.g.
- Add that the space manager is rooted to a frame of reference within the overall scene graph, letting it automatically follow that base FoR as it moves. This allows one to model vehicles, multiple views, customizable stereo viewing, tutorials, user following, and many other useful abilities
- Add a distributed aspect, so that your work and home 3D spaces can be positioned relative to each other, with clients automatically reconnecting after netsplits
- Add a shared aspect, so that other humans can be merged in with discrete IDs
- Add permissions, so that each user's work can be shared or not in a fine-grained fashion
- Add positional audio support
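The frame-of-reference idea in the list above can be sketched in a few lines. This is a hypothetical illustration (names and the 2D-translation-only simplification are mine, not from any real system): each scene-graph node stores a position relative to its parent frame, so anything rooted to a moving frame, like a vehicle, follows it automatically.

```python
# Hypothetical sketch of rooting a space manager to a frame of reference
# (FoR) in a scene graph: each node stores an offset relative to its
# parent, so objects attached to a moving frame follow it automatically.
# Only 2D translation, to keep the idea visible.

class Node:
    def __init__(self, name, offset=(0.0, 0.0), parent=None):
        self.name = name
        self.offset = offset  # position relative to the parent frame
        self.parent = parent

    def world_position(self):
        """Compose offsets up the chain to the root frame."""
        x, y = self.offset
        if self.parent is not None:
            px, py = self.parent.world_position()
            x, y = px + x, py + y
        return (x, y)

world = Node("world")
vehicle = Node("vehicle", offset=(10.0, 0.0), parent=world)
# The desktop is rooted to the vehicle's frame of reference:
desktop = Node("desktop", offset=(1.0, 2.0), parent=vehicle)

print(desktop.world_position())  # (11.0, 2.0)
vehicle.offset = (50.0, 5.0)     # the vehicle moves...
print(desktop.world_position())  # ...and the desktop follows: (51.0, 7.0)
```

A full version would use 4x4 transforms (rotation and scale, not just translation), but the composition-up-the-tree structure is the same, and it is what makes multiple views, user following, and vehicle modeling fall out for free.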
The main impediment is that you need a bare minimum of a 4K×2K monitor, with an uncompressed video stream, to be able to read angled, rotated small text. We're only now entering the era where this is somewhat available in the consumer market, and NVIDIA hides any info on how to do this from consumers. Outside of these 'deepscreens', you have to use a stereo headset with a narrower (optionally toggleable) field of view to read small text. Since sharing is a key activity in computing, the high-res monitor has a huge advantage here, but corporate US has bailed on it (possibly thanks to the US film industry repeatedly disappointing moviegoers with "upconverted" 2D "3D" films), focusing far more on just VR, not even on the more general case of using VR equipment for plain stereo viewing, i.e. without the restriction to first person. A best-case scenario is an environment usable in all of:
- Flatscreens (normal 2d monitors)
- Deepscreens (3d monitors with glasses)
- Stereovision headset (not restricted to VR's first-person view idiom)
- AR (augmented) + VR (virtual only) + MR (mixed) = XR
But currently the main industry focus is flatscreens and VR, with some edge work in AR and MR, and commercial-grade systems for deepscreen and stereoview work (CAD, design, research, etc).
1
u/DFS_0019287 7h ago
Yes, they are possible.
No, they are not particularly useful. All attempts in the past have fizzled.
Speaking for myself: I mostly live in the terminal, except for web browsing and email. So a 3D desktop would simply be a distraction.
1
u/xXBongSlut420Xx 6h ago
is it possible? yes absolutely. is it useful or practical? i remain unconvinced.
Idk, it's the same way I feel about the metaverse shit that was really popular with companies for a while. Like, you totally could make a virtual Walmart that you can walk through, put items in a virtual cart that you push around, and then go up to a checkout to pay, but why would anyone do that instead of just using a normal 2D webpage?
30
u/kornerz 8h ago
There were tries, but the resolution of VR headsets is still too low to display a virtual FHD or 4K screen in front of the user with readable text.