Felix Palmen :freebsd: :c64:
@zirias@mastodon.bsd.cafe · 4 months ago

Solved! 🥳

This was a pretty "interesting" bug. Remember when I invented a way to implement #async / #await in #C, for jobs running on a thread pool? Back then I said it only works when completion of the task resumes execution on the same pool thread.

Trying to improve overall performance, I found that the complex logic needed to route a job back to one specific pool thread was a real deal-breaker. Having one single MPMC queue, with a single semaphore all pool threads wait on, is a lot more efficient. But then, a job continued after an awaited task will resume on a "random" thread.
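A minimal sketch of that scheme, with hypothetical names (not poser's actual API): one semaphore counts queued jobs, and whichever pool thread wakes up first takes the next one.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stddef.h>

    typedef struct Job
    {
        void (*run)(void *);
        void *arg;
        struct Job *next;
    } Job;

    static sem_t jobsAvailable;   /* counts jobs sitting in the queue */
    static pthread_mutex_t queueLock = PTHREAD_MUTEX_INITIALIZER;
    static Job *qhead, *qtail;    /* one queue shared by all pool threads */

    static void *poolThread(void *arg)
    {
        (void)arg;
        for (;;)
        {
            sem_wait(&jobsAvailable);        /* sleep until a job exists */
            pthread_mutex_lock(&queueLock);
            Job *job = qhead;                /* ANY pool thread may take it */
            qhead = job->next;
            if (!qhead) qtail = NULL;
            pthread_mutex_unlock(&queueLock);
            job->run(job->arg);              /* so a continued job resumes
                                                on a "random" thread */
        }
        return NULL;
    }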

In theory this still works, as long as the CORRECT context (the pool thread's original one) is restored every time after executing a job, whether it ran partially (up to the next await) or to completion.

Only it didn't, at least here on #FreeBSD, and I finally understood the reason: I was using #TLS (thread-local storage) to find the context to restore.

Well, most architectures store a pointer to the current thread's metadata in a register. #POSIX user #context #switching saves and restores registers. I found a source claiming that the #Linux ( #glibc) implementation explicitly does NOT include the register holding the thread pointer. #FreeBSD's implementation evidently DOES include it. POSIX itself has nothing to say about that.

In short, avoiding TLS accesses while running on a custom context solved the crash. 🤯
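Roughly what such a fix could look like, sketched with made-up names (and assuming 64-bit pointers): resolve the TLS slot while still on the pool thread's native context, and hand plain pointers into the custom context so it never touches TLS itself.

    #include <stdint.h>
    #include <ucontext.h>

    typedef struct AsyncJob
    {
        ucontext_t ctx;         /* the job's own context + private stack */
        ucontext_t *resumeTo;   /* plain pointer, usable without TLS */
        void (*run)(void *);
        void *arg;
    } AsyncJob;

    static _Thread_local ucontext_t poolCtx;   /* one slot per pool thread */

    /* Entry point on the job's private stack. makecontext() passes only
     * int arguments, so the job pointer arrives split in two halves;
     * crucially, NO TLS is read while we're on the custom context. */
    static void trampoline(unsigned hi, unsigned lo)
    {
        AsyncJob *job = (AsyncJob *)(((uintptr_t)hi << 32) | lo);
        job->run(job->arg);
        setcontext(job->resumeTo);   /* plain pointer, not TLS: FreeBSD's
                                        swapcontext() restores the thread
                                        pointer register, so a TLS access
                                        here could hit the WRONG thread */
    }

    /* Setup (elsewhere):
     *   makecontext(&job->ctx, (void (*)(void))trampoline, 2,
     *               (unsigned)((uintptr_t)job >> 32),
     *               (unsigned)(uintptr_t)job);
     */

    static void runJob(AsyncJob *job)
    {
        /* Resolve the TLS slot while still on the thread's native
         * context, then switch; the custom context never touches TLS. */
        job->resumeTo = &poolCtx;
        swapcontext(&poolCtx, &job->ctx);
    }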

Felix Palmen :freebsd: :c64:
@zirias@mastodon.bsd.cafe replied · 4 months ago

I now added a #lockfree version of that MPMC job queue, which is picked when the system headers claim that pointers are lock-free. It doesn't give any measurable performance gain 😥. Of course the #semaphore needs to stay; the pool threads need something to wait on. But I think the reason I can't get more than 3000 requests per second with my #jmeter stress test for #swad is that the machine's CPU is now completely busy 🙈.
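That selection can be made with a plain C11 feature-test macro; a sketch of the idea, reusing the semaphore from the queue sketch above (the enqueue functions are hypothetical):

    #include <semaphore.h>
    #include <stdatomic.h>

    /* ATOMIC_POINTER_LOCK_FREE == 2 guarantees atomic pointers are
     * always lock-free, never silently emulated with a lock. */
    #if defined(ATOMIC_POINTER_LOCK_FREE) && ATOMIC_POINTER_LOCK_FREE == 2
    #  define USE_LOCKFREE_QUEUE 1
    #else
    #  define USE_LOCKFREE_QUEUE 0
    #endif

    struct Job;
    void lockfreeEnqueue(struct Job *job);   /* hypothetical CAS-based path */
    void lockedEnqueue(struct Job *job);     /* mutex-protected fallback */
    extern sem_t jobsAvailable;

    static void enqueueJob(struct Job *job)
    {
    #if USE_LOCKFREE_QUEUE
        lockfreeEnqueue(job);
    #else
        lockedEnqueue(job);
    #endif
        /* The semaphore stays either way: idle pool threads need
         * something to sleep on until work arrives. */
        sem_post(&jobsAvailable);
    }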

Need to look into actually saving CPU cycles for further optimizations, I guess...

Felix Palmen :freebsd: :c64:
@zirias@mastodon.bsd.cafe · 5 months ago

I finally eliminated the need for a dedicated #thread controlling the PAM helper #process in #swad. 🥳

The building block still missing from #poser was a way for a worker thread to await some async I/O task performed on the main thread. So I added a class that allows exactly that. The naive implementation just signals the main thread to carry out the requested task and then waits on a #semaphore for completion, which of course blocks the worker thread.
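The naive version could look roughly like this (a sketch with hypothetical names, not poser's actual API):

    #include <semaphore.h>

    typedef struct MainThreadCall
    {
        void (*perform)(void *);   /* the async I/O task, runs on main */
        void *arg;
        sem_t done;
    } MainThreadCall;

    void wakeMainThread(MainThreadCall *call);   /* hypothetical: e.g. write
                                                    to a pipe the main event
                                                    loop watches */

    /* Called from a worker thread; blocks it until main is done. */
    static void callOnMainThread(MainThreadCall *call)
    {
        sem_init(&call->done, 0, 0);
        wakeMainThread(call);
        sem_wait(&call->done);     /* the worker sits idle here */
        sem_destroy(&call->done);
    }

    /* The main thread, once the requested task has completed, runs:
     *     call->perform(call->arg);
     *     sem_post(&call->done);
     */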

Turns out we can actually do better, reaching functionality similar to e.g. #async / #await in C#: release the worker thread to do other jobs while waiting. The key to this is user context switching as offered by #POSIX.1-2001's #getcontext and friends. Unfortunately it was removed in POSIX.1-2008 without an obvious replacement (the docs basically say "use threads", which doesn't work for my scenario), but lots of systems still provide it, e.g. #FreeBSD, #NetBSD, #Linux (with #glibc) ...

The posercore lib now offers both implementations, preferring user context switching when available. It comes at a price: every thread job now needs its own private stack (I allocate 64 KiB there for now), and of course the switching takes some time as well, but that's very likely better than leaving a whole pool thread blocked and idle. And there's a restriction: resuming must still happen on the same thread that called the "await", so if that thread is currently busy, we have to wait a little longer. I still think it's a very nice solution. 😎
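A rough sketch of the context-switching variant (hypothetical names and layout, not posercore's real ones): each job gets a private 64 KiB stack, and await() parks the job instead of blocking the thread.

    #include <stdlib.h>
    #include <ucontext.h>

    #define JOB_STACKSZ (64 * 1024)   /* private stack per thread job */

    typedef struct ThreadJob
    {
        ucontext_t ctx;        /* the job's own execution context */
        ucontext_t *caller;    /* pool thread context to return to */
        void (*run)(void *);
        void *arg;
    } ThreadJob;

    static int setupJob(ThreadJob *job, void (*entry)(void))
    {
        if (getcontext(&job->ctx) < 0) return -1;
        job->ctx.uc_stack.ss_sp = malloc(JOB_STACKSZ);
        if (!job->ctx.uc_stack.ss_sp) return -1;
        job->ctx.uc_stack.ss_size = JOB_STACKSZ;
        job->ctx.uc_link = 0;   /* we switch back explicitly instead */
        makecontext(&job->ctx, entry, 0);
        return 0;
    }

    /* await: park this job and hand its pool thread back to the queue
     * loop. On completion, a swapcontext() back into job->ctx resumes
     * execution right here, after this swapcontext() call. */
    static void await(ThreadJob *job)
    {
        swapcontext(&job->ctx, job->caller);
    }

The completion path then has to make sure that the swapcontext() back into job->ctx happens on the same pool thread that parked the job, which is exactly the restriction mentioned above.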

In any case, the code for the PAM credential checker module looks much cleaner now (the await "magic" happens on line 174):
https://github.com/Zirias/swad/blob/57eefe93cdad0df55ebede4bd877d22e7be1a7f8/src/bin/swad/cred/pamchecker.c

#C #coding

Felix Palmen :freebsd: :c64:
@zirias@mastodon.bsd.cafe · 5 months ago

Nice, #threadpool overhaul done. Removed two locks ( #mutex) and two condition variables, replaced by a single lock and a single #semaphore. 😎 This simplifies the overall structure a lot, and it's probably safe to assume slightly better performance under contention as well. And so far, #valgrind's Helgrind tool doesn't find anything to complain about. 🙃

Looking at the screenshot, I should probably make #swad default to two threads per CPU and expose the setting in the configuration file. When some thread jobs are expected to block, having more threads than CPUs is probably better.
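Such a default is easy to compute at startup; a minimal sketch (the factor of two is a guess, and the config option itself is hypothetical):

    #include <unistd.h>

    /* Default worker count: two per online CPU, since blocking jobs
     * leave threads idle. _SC_NPROCESSORS_ONLN isn't strictly POSIX,
     * but both FreeBSD and Linux provide it. */
    static int defaultPoolThreads(void)
    {
        long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
        if (ncpu < 1) ncpu = 1;   /* the query can fail, stay safe */
        return (int)(2 * ncpu);
    }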

https://github.com/Zirias/poser/commit/995c27352615a65723fbd1833b2d36781cbeff4d

Screenshot: process/thread details shown by top for a running swad instance, showing:

- One main thread waiting for events on kqueue
- 9 threads waiting on a semaphore, 8 of them worker threads of a thread pool, one controlling the PAM child process
- One child process waiting on a pipe