Discussion
Felix Palmen :freebsd: :c64:
@zirias@mastodon.bsd.cafe · 3 months ago

I just stress-tested the current dev state of #swad on #Linux. The first attempt failed miserably: I got a lot of errors accepting connections. Well, this led to another little improvement: I added another static method to my logging interface that mimics #perror, i.e. it also prints the description of the system errno. With that in place, I could see the issue was "too many open files". Checking #ulimit -n gave me 1024. Seriously? 🤯 On my #FreeBSD machine, as a regular user, it's 226755. Ok, I bumped that up to 8192 and then the stress test ran through without issues.
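For illustration, a minimal sketch of such a perror-style log helper, assuming a plain message logger already exists; the names here are made up for the example, not swad's actual API:

```c
/* Sketch of a perror-style logging helper (illustrative names only). */
#include <errno.h>
#include <stdio.h>
#include <string.h>

static void Log_err(const char *msg)
{
    fprintf(stderr, "[error] %s\n", msg);
}

/* Like perror(): append the description of the current errno value. */
static void Log_errWithErrno(const char *msg)
{
    int err = errno;   /* save it before any other call can clobber it */
    char buf[256];
    snprintf(buf, sizeof buf, "%s: %s", msg, strerror(err));
    Log_err(buf);
}
```

On the limit itself: `ulimit -n 8192` only raises the soft limit for the current shell session; a daemon could also raise its own soft limit up to the hard limit at startup via `setrlimit(RLIMIT_NOFILE, ...)`.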

On a side note, this also made creating new timers (using #timerfd on Linux) fail, which ultimately made swad crash. I have to redesign my timer interface so that creating a timer may explicitly fail, so I can react to that and abort whatever would need the timer.
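A sketch of what such a fallible timer constructor could look like with a Linux timerfd backend; `Timer` and `Timer_create` are illustrative names, not the actual interface:

```c
/* Timer creation that can explicitly fail, so callers can abort
 * whatever needed the timer instead of crashing later. */
#include <stdlib.h>
#include <sys/timerfd.h>
#include <unistd.h>

typedef struct Timer { int fd; } Timer;

/* Returns NULL on failure, e.g. EMFILE when the fd limit is exhausted. */
Timer *Timer_create(void)
{
    int fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK | TFD_CLOEXEC);
    if (fd < 0) return NULL;

    Timer *self = malloc(sizeof *self);
    if (!self)
    {
        close(fd);
        return NULL;
    }
    self->fd = fd;
    return self;
}
```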

Anyways, the same test gave somewhat acceptable results: throughput of roughly 3000 req/s, response times around 500 ms. Not great, but okayish, and not directly comparable because this test ran in a #bhyve VM and the requests had to pass through the virtual networking.

One major issue is still the #RAM consumption. The test left swad with a resident set of > 540 MiB. I have no idea what to do about that. 😞 The code makes heavy use of "allocated objects" (every connection object with metadata and buffers, every registered event handler, every timer, and so on), so it uses the #heap a lot, but according to #valgrind it correctly frees everything. Still, the resident set just keeps growing. I guess it's the classic #fragmentation issue...
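One glibc-specific way to tell fragmentation or retained arenas apart from a real leak is to ask the allocator to give free pages back to the kernel after a load burst and watch whether the resident set drops. A sketch, not a fix:

```c
/* Diagnostic only: if RSS shrinks after malloc_trim(), the memory was
 * free but held by the allocator, not leaked. glibc-specific. */
#include <malloc.h>

void report_after_burst(void)
{
    malloc_stats();   /* prints arena / in-use numbers to stderr */
    malloc_trim(0);   /* return whatever free heap pages can be released */
    malloc_stats();
}
```

If that makes the resident set shrink noticeably, setting `MALLOC_ARENA_MAX` in the environment is another common knob to limit per-thread arena growth with glibc.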

Felix Palmen :freebsd: :c64:
@zirias@mastodon.bsd.cafe · 5 months ago

Nice, #threadpool overhaul done. Removed two locks (#mutex) and two condition variables, replaced by a single lock and a single #semaphore. 😎 This simplifies the overall structure a lot, and it's probably safe to assume slightly better performance in contended situations as well. And so far, #valgrind's helgrind tool doesn't find anything to complain about. 🙂
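For readers who want the general shape of that pattern, here is an illustrative sketch, not poser's actual code (see the commit linked below): the semaphore counts pending jobs, and the single mutex only protects the queue itself.

```c
/* Job queue with one mutex + one counting semaphore. */
#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

typedef struct Job { struct Job *next; void (*run)(void *); void *arg; } Job;

static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static sem_t jobs;              /* initialized once: sem_init(&jobs, 0, 0) */
static Job *head, *tail;

void enqueue(Job *job)
{
    pthread_mutex_lock(&qlock);
    job->next = NULL;
    if (tail) tail->next = job; else head = job;
    tail = job;
    pthread_mutex_unlock(&qlock);
    sem_post(&jobs);            /* wake exactly one waiting worker */
}

void *worker(void *arg)
{
    (void)arg;
    for (;;)
    {
        sem_wait(&jobs);        /* sleep until at least one job is queued */
        pthread_mutex_lock(&qlock);
        Job *job = head;
        head = job->next;
        if (!head) tail = NULL;
        pthread_mutex_unlock(&qlock);
        job->run(job->arg);
    }
    return NULL;
}
```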

Looking at the screenshot, I should probably make #swad default to two threads per CPU and expose the setting in the configuration file. When some thread jobs are expected to block, having more threads than CPUs is probably better.
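A possible way to derive such a "two threads per CPU" default, assuming `sysconf` is available; the configuration plumbing around it is omitted:

```c
/* Default worker thread count: two per online CPU. */
#include <unistd.h>

long default_thread_count(void)
{
    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncpu < 1) ncpu = 1;     /* fall back if the value is unavailable */
    return 2 * ncpu;
}
```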

https://github.com/Zirias/poser/commit/995c27352615a65723fbd1833b2d36781cbeff4d

Screenshot: process/thread details shown by top for a running swad instance:

- One main thread waiting for events on kqueue
- Nine threads waiting on a semaphore: eight worker threads of the thread pool, plus one thread controlling the PAM child process
- One child process waiting on a pipe