
Linux Kernel patch sent in for comments to help gaming

By - | Views: 28,545

Collabora have sent in a fresh patch for discussion to the Linux Kernel list to help Linux gaming, acting as a follow-up to their previous attempt.

Their patches, developed in collaboration with Valve, are primarily focused on Wine, and so on Proton for Steam Play, due to differences in how Windows handles synchronization compared with Linux that Wine needs to support for good performance. As the original patch explained:

The use case lies in the Wine implementation of the Windows NT interface WaitMultipleObjects. This Windows API function allows a thread to sleep waiting on the first of a set of event sources (mutexes, timers, signal, console input, etc) to signal.  Considering this is a primitive synchronization operation for Windows applications, being able to quickly signal events on the producer side, and quickly go to sleep on the consumer side is essential for good performance of those running over Wine.

They went on to explain that current Linux Kernel interfaces fell short on performance. With their code in use, they saw reduced CPU utilization in multiple titles running with the Steam Play Proton compatibility layer, compared with current methods. Additionally, it doesn't rely on file descriptors, so it also solves issues with running out of those resources.

The new patch under discussion takes a different approach from before. Instead of extending the current interface in the Linux Kernel, they're building a new system call, 'futex2'. It's still early days: this patch set adds the new interface, which they can then expand upon.

In short: it would make Linux gaming with Wine / Proton better in future Linux Kernel versions, though it would likely have other uses too. You can see the patch set here, which is currently under discussion.

Article taken from GamingOnLinux.com.
About the author -
I am the owner of GamingOnLinux. After discovering Linux back in the days of Mandrake in 2003, I constantly came back to check on the progress of Linux until Ubuntu appeared on the scene and it helped me to really love it. You can reach me easily by emailing GamingOnLinux directly. Find me on Mastodon.
The comments on this article are closed.

x_wing Jun 14, 2020
Quoting: toojaysThere is no way to use fds for synchronization without a syscall. That's no good for performance-critical paths. Pthreads primitives like mutex, condition, semaphore are designed to avoid syscalls where possible. Ideally (e.g. uncontended mutex lock) they use only atomic operations, but they call futex when they need to block, or to wake other threads.

Being able to wait on multiple futexes at once seems generally useful to me.

Do you know of a common concurrency problem where it makes sense? To me it sounds like a developer trying to outsmart the scheduler due to a bad software design, and that will always end badly.

Not long ago a Google engineer published some code where he implemented spinlocks in order to "improve" performance (basically busy-waiting code to avoid doing the syscall), but in his own numbers std::mutex proved to be a good solution on Linux and a "bad one" on Windows. So the issue there was Windows programmers deciding to port a Windows workaround to Linux where it's not necessary... IMO this is proof of why you want to keep thread priority tuning on the programmer's side to a bare minimum.
Nanobang Jun 15, 2020
My non-coder, non-developer takeaway is "Game go faster, this good. Make good thing. Make game go faster." :)
gpderetta Jun 15, 2020
Quoting: toojaysThere is no way to use fds for synchronization without a syscall. That's no good for performance-critical paths.

It is possible to build a fast-pathed mutex (and condition variable) that is userspace-only on the fast path and falls back on an eventfd for the slow path, exactly like futex-based mutexes [1]. In fact, because they do not have to fiddle with VM stuff, I have seen claims that eventfds can be slightly faster on the slow path (but nobody really cares about that, so keep using futexes unless you need to interoperate with poll and friends).

The advantage of futexes is that they are ephemeral: the kernel-side support data structures are allocated implicitly on a futex_wait call (when a futex is used to wait for an event) and destroyed as soon as there are no waiters, while eventfds are allocated and destroyed explicitly. So you can have millions of inactive futexes without any issues, while with eventfd you can hit the fd limit very easily. Apparently there are a lot of broken programs that allocate and leak Windows mutex handles (which are pretty much the equivalent of a Unix fd), but probably because Windows has code to work around this brokenness, or because handles are lighter weight, it is not much of an issue there. Note that Windows today has keyed events (which are exactly like futexes) and those can't be used with WaitForMultipleObjects either.

[1] I know because I have done it.
gpderetta Jun 15, 2020
As a dev doing low-level multithreaded stuff from time to time: this change also enables 64-bit futexes, which is quite useful. Linus pushed back on them in the past for questionable reasons, but Windows has them, so any fast emulation in Wine will probably need them as well.