Recently, users on more bleeding-edge Linux distributions noticed that after updating, Easy Anti-Cheat no longer worked on Linux. The culprit was glibc, and now a Valve developer has spoken out about it.

Writing in a small thread on Twitter, Valve developer Pierre-Loup Griffais said:

Unfortunate that upstream glibc discussion on DT_HASH isn't coming out strongly in favor of prioritizing compatibility with pre-existing applications. Every such instance contributes to damaging the idea of desktop Linux as a viable target for third-party developers.

Our thoughts on the topic from this prior compatibility issue in BlueZ apply more than ever: https://github.com/bluez/bluez/commit/35a2c50437cca4d26ac6537ce3a964bb509c9b62#commitcomment-56028543
It is unfortunately yet another entry in a growing list over the years.

We understand that working with a focus on compatibility requires more resources and more engineering trade-offs, but strongly believe it is nonetheless the way to go. We are very interested in helping with any underlying resource constraints.

This prompted CodeWeavers (who work on Wine and with Valve on Proton) developer Arek Hiler to write a blog post titled "Win32 Is The Only Stable ABI on Linux", and its closing statement is something people should think on:

I think this whole situation shows why creating native games for Linux is challenging. It’s hard to blame developers for targeting Windows and relying on Wine + friends. It’s just much more stable and much less likely to break and stay broken.

Hiler certainly isn't the only one to think like that, with another CodeWeavers developer, Andrew Eikum, mentioning on Hacker News some time ago:

As a long-time Linux dev (see my profile), I have also found this to be true. Linux userland APIs are unstable and change all the time. Some transitions that come to mind that have affected me personally: ALSA->pulse; libudev->libudev2->systemd; gstreamer 0.10->1.0. All of those changes required modifications to my software, and the backwards-compat tools that are provided are buggy and insufficient. Meanwhile, you can still write and run winmm[1] applications on Windows 10, and they will work in almost all cases. It's simply the case that the win32 API is more stable than Linux userland APIs, so it's entirely plausible that games will run better in Wine, which shares that stable ABI, than they will on Linux, especially as time goes on and Linux userland shifts yet again.

[1] winmm dates to the Windows 3.x days!

Situations like this can be pretty messy, and this is not a case of open source versus secret closed-source anti-cheat stuff either, since the glibc issue affected a native Linux game (Shovel Knight) and the Linux software libstrangle. No doubt there are other things yet to be discovered that were broken by the change.

It is of course also the case that Linux distributions need to ensure they do quality assurance testing, especially for gaming, which can surface issues quite easily. Bleeding-edge distributions can, and clearly do, end up breaking things by pulling in new software so quickly.

Article taken from GamingOnLinux.com.
About the author -
I am the owner of GamingOnLinux. After discovering Linux back in the days of Mandrake in 2003, I constantly came back to check on the progress of Linux until Ubuntu appeared on the scene and it helped me to really love it. You can reach me easily by emailing GamingOnLinux directly.

Klaas 17 Aug
I don't understand why people think that it is a good idea to break existing binary-only applications while at the same time expecting that games get native releases. Those two things are completely incompatible. If you want the Linux desktop to gain a larger market share and get official releases from commercial developers, such breakages are not an option. It's the same discussion as dropping multilib support since it “only takes space”.

Note: I do not care about EAC breakage (except on principle) since I refuse to play games that use it, but I'm sure that Shovel Knight is not the only native release that is affected by this change.

Apple mostly gets away with complete breakage every few releases since they have a considerable cult following that is happy to pay for anything they do, even if they moan in the GOG and Steam forums later when things don't work anymore. And there is an ever-increasing number of (indie) developers that have stopped supporting macOS since it is a constant hassle.

TL;DR: IMO doing such changes just to “make things more elegant” will destroy the Linux desktop irreparably. And there will only be an ever-decreasing number of users left, who use open source software and Wine/Proton stuff.
ShabbyX 17 Aug
Quoting: pgr
Quoting: ShabbyXThis is absolutely not true. 16KB is 4 pages of memory, saving that on every .so is huge! It's not just that you have the memory lying around, there are other costs too. There's the cost of loading the objects from disk, maintaining the struct page entries in the kernel, etc.
Everything has worked fine so far even with this extra cost, so I doubt the (real world) effect is huge. What kind of improvement does this change make for the desktop use case?

Quoting: ShabbyXThere is a reason Linux is _fast_. With your approach, Linux would have been bloatware like the rest of them.
How come Linux is the fastest kernel there is when it absolutely follows that "bloatware" practice?

1. As others mentioned, little things add up. Linux is fast because every little performance improvement is applied. After all, large company X saves a lot of money by improving things by 0.01% simply because the multiplier is so large for them. You enjoy a fast kernel on desktop thanks to that.

2. Two reasons. One is that a good chunk of the ABI people use is POSIX, which is standardized. Linux is not free to change it, no matter how many complaints they may have about it.

But more importantly, it's because Linux actually doesn't follow the bloatware practice. Linux's ABI most definitely changes in backwards incompatible ways. It just happens to change mostly in actively developed areas where users are also developers of the feature and they adapt to new changes.

Linux's motto is not *never change the ABI*, but *never break userspace*. The difference is that if a change breaks ABI but not userspace (like, no active users of it, or userspace happens to not break), then the change goes through perfectly fine.

---

To be clear, I'm not defending glibc. They were wrong to make a backwards incompatible change without incrementing the major version. I'm only saying that "win32 is stable, so it must be good" is a terrible argument.
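(As a hedged back-of-envelope sketch of the "16KB is 4 pages" point above: assuming a 4 KiB page size and an illustrative, made-up count of 200 loaded shared objects in a process, the per-object saving multiplies out as follows.)

```python
PAGE_SIZE = 4096  # common x86-64 page size (assumption)

def pages_saved(bytes_per_object: int, loaded_objects: int) -> int:
    """Whole pages freed if every loaded shared object sheds
    `bytes_per_object` bytes of page-aligned data."""
    return (bytes_per_object // PAGE_SIZE) * loaded_objects

# Using the 16 KiB-per-.so figure from the comment, with a
# hypothetical 200 shared objects loaded in one process:
print(pages_saved(16 * 1024, 200))  # 800 pages, roughly 3 MiB
```

The 200-object count is purely illustrative; the point is only that a small per-object saving scales with the number of mapped objects.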


Last edited by ShabbyX on 17 August 2022 at 1:28 pm UTC
TheSHEEEP 17 Aug
Quoting: ShabbyXI'm only saying that "win32 is stable, so it must be good" is a terrible argument.
I don't think anyone is saying this.
Especially not in a place like this.

But its stability over the years is most definitely a good aspect of it.
dibz 17 Aug
Quote...on more bleeding-edge Linux distributions...

(well there's your problem)
EagleDelta 17 Aug
Quoting: dibz
Quote...on more bleeding-edge Linux distributions...

(well there's your problem)

Rarely do I run into issues on "bleeding edge" distributions. It's also important to note that "bleeding edge" distributions aren't generally using bleeding-edge versions of software. Latest upstream stable versions? Yes.

Granted, when I think bleeding edge, I think software in beta stage (complete but still testing). There is a HUGE HUGE HUGE problem with certain distributions still using versions of upstream software that the upstream project has made end of life (Python 2.x anyone?)..... which is not a good practice in today's security environment.
EagleDelta 17 Aug
Quoting: minidouNothing got broken. A function deprecated for two decades got removed, but nothing broke. Or do we just expect everything to be forever maintained?

As was mentioned by Arek Hiler in his blog post, that "Two Decade deprecation" was very poorly documented and warnings were not easily seen/found/displayed.

QuoteFor those 16 years, it was Glibc who provided the compatibility and overrode the defaults for everyone and there never were any easy-to-spot deprecation warnings. It’s also unrealistic to expect every ELF consumer to keep up with undocumented developments of the format.

QuoteI don't expect anyone to check, I expect a CI or a quality gate to stop them from shipping.

You can't have a CI check if the change is poorly documented and not made glaringly obvious to the developer. Working as a software dev myself, I generally do not go to project pages and discussions to find out more about a library I'm using. I rely on the documentation and any output the library gives me. I don't care if it was deprecated for almost 20 years: if it wasn't well documented where developers could see it AND warnings were obviously placed, then I'm probably not going to see it.

Even then, glibc made a breaking change without bumping the major version. That's a big break in protocol. In most versioning schemes, the major version is used to denote breaking changes; if a change removes any existing functionality, it's a breaking change regardless of whether it happens to break anyone in practice.
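(To illustrate the convention being invoked here: glibc does not claim to follow semantic versioning, so this is only a sketch of the commenter's point, using a hypothetical helper.)

```python
def is_breaking_upgrade(old: str, new: str) -> bool:
    """Under semantic versioning, removing existing functionality is
    only permitted alongside a major-version bump (e.g. 2.36 -> 3.0)."""
    return int(new.split(".")[0]) > int(old.split(".")[0])

# glibc moved 2.35 -> 2.36 while dropping DT_HASH, so by this
# convention the removal shipped in a release that signals "no breakage":
print(is_breaking_upgrade("2.35", "2.36"))  # False: same major version
print(is_breaking_upgrade("2.36", "3.0"))   # True: major version bumped
```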

QuoteI'll call it bad practice, or just not being up to 2022 standards.
I'm sorry, but my terminal generally doesn't have enough scrollback to see all deprecation warnings in the compilation, let alone an easy way to highlight them well. Not to mention, there are a LOT of deprecation warnings in compiled software, to the point where developers get so overloaded by warnings that amount to nothing that they just start ignoring them.

A better solution is to update the documentation and put out a big notification ahead of releasing a breaking change.


Last edited by EagleDelta on 17 August 2022 at 2:32 pm UTC
property 17 Aug
This is getting a little out of hand. You're all right, from different points of view. So please calm down. :)

The discussion about whether this is appropriate should and will happen in glibc's channels, dependent only on their goals as API developers. It is very much fine to break your ABI _if_ that is the kind of API you are providing. The kernel's is not, but glibc have to decide whether their "backwards compatible, portable, and high performance ISO C library" is and should be.

Depending on the outcome of that the distro or runtime maintainers (e.g. Valve) will discuss if or how glibc is appropriate to use for their goals.


I personally hope that something changes in one of these points.


To add a personal note from a non-game-developer: I'm also annoyed by glibc but don't know if we'd fare better with others like musl (any experiences?). There have been breaking changes in the past even for applications that don't use it directly. We keep recompiling a lot after every dependency change, against multiple versions and checking their compatibility with others, in order to ensure everything still works. For example, I currently cannot release any binaries compiled on my machine because of breaking changes when using OpenSSL with glibc (whoops!). As a maintainer I've had lots of integration failures because of glibc. This costs a lot of time, especially when testing it (and waiting for all the tests to release a security fix that needs to be pushed NOW). So I can relate to any game developer who just wants to fire and forget their finished fun little puzzle game.
shorberg 17 Aug
With my sysadmin/support tech hat on, all I can say is: breaking userspace is an absolute NO-NO. If it ain't broke, don't break it.

Case A:
At one point Windows Server had an update with a breaking change which crashed our test machine; we reported it to Microsoft, and two days later we got a new update reverting the previous one. Business as usual. We were just one company out of the many, many companies that run on Microsoft's platforms, yet they immediately gave us a quick line to their experts and made a revert, because breaking applications is not OK.

Case B:
We had another issue with a small application that relied on an external device to access a proprietary system. The driver for said device only supported Windows 2000, so that machine had to be completely quarantined and we had to keep using an old Win2k system just to keep the application running. We could not get the hardware devs to ship updated drivers, or the application devs to ship an update with support for newer hardware, because both companies had gone bankrupt and disappeared a decade earlier. Now, a decade later, I expect that Win2k machine is still crucial even though all the device was used for is one small function of the application.

This is reality. This is how the situation is for professionals, whether you are talking about IT systems or heck even infrastructure.

PS. Yeah, my customers were not your average Ma & Pa's Baking Corner but a global enterprise, so yes, we had a bit of weight to throw around in case A, and they absolutely had the finances to make replacement software in case B. But that would take years of development and testing to make sure it still worked the same, and since it was a proprietary system they might not even have been legally allowed to do it.

PPS. We have seen an analogue of this recently with trains to/from Ukraine and the different rail track gauges. I found a good map on jakubmarian.com. Now replace that with glibc versions.
dibz 17 Aug
Quoting: EagleDelta
Quoting: dibz
Quote...on more bleeding-edge Linux distributions...

(well there's your problem)

Rarely do I run into issues on "bleeding edge" distributions. It's also important to note that "Bleeding Edge" distributions aren't generally using "Bleeding edge" versions of software. Latest upstream stable versions? yes

Granted, when I think bleeding edge, I think software in beta stage (complete but still testing). There is a HUGE HUGE HUGE problem with certain distributions still using versions of upstream software that the upstream project has made end of life (Python 2.x anyone?)..... which is not a good practice in today's security environment.

Honestly, when I saw the headline with Valve and glibc, my very first thought before reading the article was "that's what you get for choosing Arch for newer packages", which was their stated reason for the switch to Arch in the past. Once I read the article I found out that wasn't the case, and they were just commenting on the situation.

I'll admit my idea of bleeding edge may also be an outdated view of what the difference is, which would be a shame if that's the case. Why on earth would people blur that line on purpose? The term becomes meaningless if people put it on everything. Bleeding edge used to mean, essentially, nightly: not only was compatibility not guaranteed between updates, things could completely break.

Personally I still prefer how Debian labels things with stable / testing / unstable. It's clear what it is and isn't, I would consider unstable to be their bleeding edge. I'm not personally advocating using Debian or anything, it's just a good example.
EagleDelta 17 Aug
Quoting: dibzPersonally I still prefer how Debian labels things with stable / testing / unstable. It's clear what it is and isn't, I would consider unstable to be their bleeding edge. I'm not personally advocating using Debian or anything, it's just a good example.

The problem with that is that, as great as Debian is, "stable" runs a LOT of software that the upstream developers no longer support at all and haven't for years.

Which, IMO, is very different from what's happening here. glibc accidentally broke some software and games, but refuses to revert the change. Most projects don't take that kind of stance; they will revert breaking changes and work on a future resolution to the problem instead of pointing blame at someone else.

As for comments on Valve using an Arch base: that's needed for what they do, as fixes for gaming specifically are only found in newer software, drivers, and Linux kernels. Running something like Debian stable would be a nightmare for gaming, as things become out of date and unusable for a lot of games really fast, especially newer games, and especially when those games (or WINE/DXVK/etc) rely on newer driver features that may require newer kernel features to function.

Which is where part of the issue lies with gaming and Linux. A lot of "vocal" Linux users and devs want game devs to develop for Linux, but ALSO to conform to what those "vocal" users/devs think is the "right way", and people don't work like that. Linux has to go to THEM, make their lives easier when working with Linux, and do so in a way that THEY are familiar with, or they will just nope out and not care. And that will happen, because the perception is that we, the Linux community, don't care about their perspective... so why should they care about us?