Wayland is really slow. I don't know if it's the compositing or what but it's unusable on lighter hardware that X ran fine on.
This is crazy when you think about it. I remember running an X server (Hummingbird, I think it was called) on 386 and 486 machines connecting to Suns, and it was fine; this was a perfectly acceptable way to work. A couple of xterms, an Emacs, xbiff for email, maybe some xeyes just for fun. Developing with Tcl/Tk and running those applications. Now we have several orders of magnitude more CPU, memory, and network, and Wayland doesn't even perform as well as that! My mind is truly boggled.
Yeah - I have no clue why you're being downvoted. I have the exact same professional experience where Wayland is slow out of the box vs. X. I also share the experience of getting X to work on ancient hardware without much difficulty.
Let's be real here - if you're needing something to "just work" you're going to install X. Sorry Wayland, you're just not there yet.
> Yeah - I have no clue why you're being downvoted.
I didn't downvote, but it's because the parent's comments are anecdotal and don't provide further data one might engage with, confirm, or refute ... and therefore I learned nothing from reading them.
I'm also running a dual i3/sway setup, and the only reason I still keep i3 around is screen sharing in Jitsi and the like. My experience is that Wayland has lower CPU/memory use than X (i3), but that's not why I prefer sway. (I'm using sway with "xwayland disable", so maybe that's where a lot of resources are saved.) But the whole discussion is pointless without verifiable benchmarks.
Well, the anecdotes are kind of real, and vast in number. I suspect many readers here never used X on a 90s PC or workstation, so they could be forgiven for not realizing it was up to the task on hardware that now looks pitiful.
Another example I like is the Nokia N900, which ran X on a phone no less, phone hardware from 2009, and it was pretty good there.
Part of the problem is surely software bloat over time in higher parts of the stack, rather than X itself. You couldn't get the 486 in the comment above to run recent GNOME or a recent browser. But you could run software of the era well.
Maybe someone could try to install Wayland on an old device, where latest Xorg works fast, and see if Wayland also does. Comparison video would be nice.
For what it's worth, whenever I tunneled X over a non-local SSH connection, it was slow as molasses. That's because almost all contemporary GUI applications render to a bitmap anyway. Those that actually use the outdated X vector graphics operations look like utter garbage compared to anything post-1995. Frankly, I'd rather my applications are at least somewhat aesthetically pleasing.
Of course it does, just like video-based remote desktop systems work great these days.
That's the point, X's claimed advantages here long since stopped existing, and nobody noticed because the "inferior" approach is perfectly fine with modern internet connectivity.
Honestly, X forwarding doesn't work that well in my experience, unless you have a very stable connection, with quite a bit of bandwidth (~1Mbps at the very least). I've had more success using xpra for forwarding, as I'm often connecting over Wi-Fi (hostel, campus rooms...).
It's also rather complicated to set up on the server side (xauth, magic cookie, etc.).
waypipe, on the other hand, was a breeze to use, even though it's very young. I tried it with Firefox and 500Mbps of upload capacity; it worked fine as long as the window wasn't too large.
No, that is the client side. I went back and looked at the documentation; I was wrong and had conflated two ways of doing it:
- X forwarding over SSH: this only requires changing X11Forwarding in OpenSSH sshd's config
- Plain X over the network, which is "secured" with `xhost`, is insecure, and needs the magic cookie or other authentication information transferred
So, not nearly as complex to set up as I recalled, though it's much simpler to run a nested Wayland compositor (which waypipe does) than an X11 server (which xpra does). The difference between X11 and Wayland remote access thins when xpra is involved.
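For reference, a sketch of the two setups described above (hostnames, usernames, and the application are placeholders, not from the thread):

```shell
# 1) X forwarding over SSH: enable it on the server in /etc/ssh/sshd_config:
#      X11Forwarding yes
#    then connect from the client:
ssh -X user@remote xterm   # untrusted forwarding
ssh -Y user@remote xterm   # trusted forwarding (skips X security extension restrictions)

# 2) Plain X over the network (the insecure variant): on the machine running
#    the X server, allow the remote host, then point remote clients at your display:
xhost +remote
ssh user@remote 'DISPLAY=myworkstation:0 xterm'
```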
Outside of major metros, in the US a lot of towns only offer up to 5Mbps down, and only then if you pay out the nose. Not sure if it matters for X forwarding, but upload caps are also ridiculously low even on otherwise reasonable connections.
> Those that actually use the outdated X vector graphics operations look like utter garbage compared to anything post-1995. Frankly, I'd rather my applications are at least somewhat aesthetically pleasing.
That is entirely subjective, no? Personally I think Motif is one of the pinnacles of GUI design.
I also agree, and even if I did think the modern stuff, with its impossible-to-see borders and buttons where you can't tell up from down, looked better (I don't), I'd still prefer smooth remote operation.
Alas, that isn't the way the world has gone, and it's extremely expensive to be weird.
> Those that actually use the outdated X vector graphics operations look like utter garbage compared to anything post-1995
Modern UIs could do with being a bit more like 1995. Most of my work is performance sensitive so the first thing that goes are all the desktop effects that try to barf pointless rainbows and glitter in my direction.
For what it's worth, Exceed was never fine in my experience, and you were better off with the Cygwin server. At one time, the first thing to ask about certain sorts of Emacs problems that were reported was "Are you running Exceed?", with high probability the answer would be "Yes". I never understood why "we" paid for it. But, yes, X did always run on relatively low-resource machines. (I can't comment on how Wayland compares.)
Part of it is toolkits throwing around bitmaps that are sometimes multiple megabytes, which starts to cause problems the moment you don't have a zero-copy, GPU-accelerated way to draw them.
The old core protocol approach made extensive use of optimized graphics operations on the server side, with clients sending things like "draw me a rectangle / fill a rectangle / draw a bunch of lines", etc. Today you're going to get a pretty big bitmap instead (especially at high DPI).
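To put rough numbers on the difference (a back-of-the-envelope sketch, not measured protocol traces): a core X11 PolyFillRectangle request is a 12-byte header plus 8 bytes per rectangle, while shipping the same filled area as a rendered bitmap scales with pixel count.

```python
# Back-of-the-envelope: bytes on the wire for "fill a rectangle".
# Core X11 PolyFillRectangle: 12-byte request header + 8 bytes per rectangle.
request_bytes = 12 + 8

# Same rectangle sent as a rendered bitmap, 500x300 px at 32 bits per pixel:
width, height, bytes_per_pixel = 500, 300, 4
bitmap_bytes = width * height * bytes_per_pixel

print(request_bytes)                   # 20
print(bitmap_bytes)                    # 600000
print(bitmap_bytes // request_bytes)   # 30000x more data
```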
It's the same problem that mobile devices faced, and it's related to a lot of the reasons Android devices were "janky" (and to the various hacks Apple did to make sure your application couldn't overstress the early iPhones, because just displaying the basic UI came close to doing that).
I personally don’t know, but most likely, when you aren’t forced to think about memory, efficiency, etc., most people just don’t. So if you aren’t actively developing on those lightweight systems, your code won’t run efficiently on them.
Technically, what Wayland is doing (using the 3D GPU for everything) is the best way forward. Windows has done it since Vista. When done right, the gradients mentioned in other comments are free: GPUs have hardware to interpolate values (such as colors) across the vertices of a triangle at no cost. Many other effects are either free or very cheap.
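For intuition, that per-vertex interpolation is just barycentric weighting; here's an illustrative sketch in plain Python standing in for the fixed-function rasterizer hardware (the function and values are made up for illustration):

```python
def interpolate_color(p, tri, colors):
    """Barycentric interpolation of per-vertex colors at point p,
    i.e. what a GPU rasterizer does for every covered pixel."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    w3 = 1.0 - w1 - w2
    return tuple(w1 * a + w2 * b + w3 * c for a, b, c in zip(*colors))

# A gradient across one triangle: red, green, and blue corners.
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
colors = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(interpolate_color((0.0, 0.0), tri, colors))  # (1.0, 0.0, 0.0) at the red corner
```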
Engineering-wise it's really hard.
Microsoft reworked the GPU driver model by introducing WDDM. They invented a new user-facing API for it, Direct3D 10. They did that in close collaboration with all three GPU vendors. They made user-mode components like the desktop compositor itself, dwm.exe, and higher-level libraries to benefit from all that stuff. Initially these were optional things like WPF, Direct2D, and DirectWrite; then with Win8 they introduced WinRT, later rebranded to UWP. That one is no longer optional and is the only practical way to render "hello world, GUI edition" on modern Windows (it's possible with DirectWrite or legacy GDI, but neither is practical).
The problem of "render nice high-resolution graphics, fast" affects everything, the entire stack. Modern Linux has decent kernel infrastructure (DRM/KMS), but even so, the remaining challenges are hard. Linux has had less luck with user-facing GPU APIs (Vulkan is not yet universally available; neither is GLES 3+ or OpenGL 4.3+). For some GPUs, driver quality is less than ideal. OS maintainers oppose stabilizing the kernel ABI for drivers. There are no high-level GPU-centric graphics libraries; I tried once with moderate success, https://github.com/Const-me/Vrmac but that only supports one specific Debian Linux on one specific computer which happens to support GLES 3.1, and some important features are missing, e.g. no gradient brushes or stroked pens.
I don't see any large party interested in making that happen, at least not for desktop Linux. Valve started doing relevant things when they thought Windows 10 was going to kill their Steam business model; then it became apparent Microsoft wouldn't turn Win10 into an iOS-style walled garden, and they no longer have much motivation.
> Technically what Wayland is doing, using 3D GPU for everything, is the best way forward.
It's great for the common desktop case.
It's not as great for some other cases in which Linux is the preferred platform (headless servers, repurposed old hardware, etc).
The problem isn't, of course, that there exists a solution for this on Linux. It's that that solution is being pushed as the only one that should be maintained—and thus, exist—going forward.
Also phones and tablets. Also embedded devices that have a GPU + LCD; I have personally shipped Linux firmware where I used NanoVG on top of DRM/KMS to render a touch-screen GUI. Also kiosks, cars, smart watches, and many other applications.
It’s great everywhere you have a high-resolution screen. And it’s mission-critical for ARM devices that don’t have the CPU power to render that screen in software, at least not at 60Hz.
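A rough back-of-the-envelope (my numbers, not from the thread) of why software rendering at 60Hz is hard on weak CPUs:

```python
# Pixel throughput needed just to repaint a 1080p panel at 60 Hz in software:
width, height, fps, bytes_per_pixel = 1920, 1080, 60, 4
bytes_per_second = width * height * fps * bytes_per_pixel
print(round(bytes_per_second / 1e6, 1))  # ~497.7 MB/s of pixel writes, before any blending
```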
> headless servers
Why would you want a GUI there? Even Microsoft has had console-only “core” editions of Windows Server since 2008. They made them because of competition from Linux, which had that from the very beginning and was thus far more suitable for cloud use cases. It still is, due to other reasons, but that’s another story.
> It's that that solution is being pushed as the only one that should be maintained—and thus, exist—going forward.
I get why some people would want X to be maintained, but the thing is, it’s very expensive, and not fun.
Developing game console emulators is expensive but fun, and people do that in their free time with great results. Moving GPU-targeted Linux GUI forward is expensive and not too fun, but there are commercial applications with healthy profit margins (automotive, embedded, etc.), so people from those areas are working on that tech. Patching x.org for repurposed old hardware, on the other hand…
Wayland’s mistake is assuming every application has a local GPU. In reality the user has just one GPU attached to his monitor, which is fine for email and gaming, but serious tools run miles away in datacenters. We wouldn’t be rewriting everything in javascript if only we hadn’t forgotten how cool remote X was.
I work on a serious tool, specifically CAM/CAE stuff. Despite Google, Amazon, and MS sales people applying pressure to upper management (they want us to move to their clouds and are offering gazillions of free compute credits), our software runs on desktops and workstations, and I have reasons to believe it’s going to stay this way. With the recent progress of HPC-targeted CPUs and the steady downward trend of RAM prices, I believe our customers are happier running our software on their own computers, as opposed to someone else’s computers.
> We wouldn’t be rewriting everything in javascript if only we hadn’t forgotten how cool remote X was.
It was cool in the epoch of OpenGL 2. By the time Metal, Direct3D 12, and finally Vulkan arrived, it had stopped being cool. Essentially, these APIs were designed to let apps saturate PCIe. You can’t transfer that bandwidth over any reasonable network.
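To put rough numbers on that claim (nominal figures I'm supplying for illustration, not benchmarks):

```python
# Order-of-magnitude comparison of local GPU bandwidth vs. a fast network link:
pcie3_x16_gb_per_s = 16            # ~16 GB/s nominal for PCIe 3.0 x16
gigabit_ethernet_gb_per_s = 0.125  # 1 Gbps == 0.125 GB/s
print(pcie3_x16_gb_per_s / gigabit_ethernet_gb_per_s)  # 128x gap
```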
What could be the reason behind this? Asking as a noob
Abstractions piled on top of abstractions. Something that might have been 5 function calls deep on either side with a carefully crafted packet in the middle is now 100s on each side.
We have a culture that prizes programmer happiness above all, and this means everyone thinks "this is a mess, I'll put my own layer on top to make it nice, then work above that layer". Repeat 100 times and now you have processors literally 2000 times faster that struggle to even keep up with keypresses. But what no one wants to admit is that it's messy because the problem domain is messy, and sometimes you just have to live with the mess and get some real work done. The programmers of old understood this.
Anecdotally, I ran X11 on several Sun workstations (Motorola 680x0), a DEC Alpha (a RISC machine), HP X terminals, etc. All of them were reasonably fast, or at least not any slower than the Windows and Mac boxes of the time (1990-95).
We played video games on them. Does anybody remember Netrek, Xtanks, and an F-16 vs. MiG flight simulator whose name I can't remember?
I think much of the problem is that today's systems have vast amounts of eye candy that was all but nonexistent back in the 80s and 90s. X terminals and 486s don't have the resources to throw fancy visual effects on the screen, and sometimes they weren't even driving color displays.