Exploring the Fragmentation of Wayland, an xdotool adventure
https://www.semicomplete.com/blog/xdotool-and-exploring-wayland-fragmentation/
#HackerNews #Wayland #Fragmentation #xdotool #Linux #Desktop #Environment #Open #Source #Adventure
Trying to further improve #swad, as I'm still unhappy with the amount of memory needed ...
Well, this small, special-purpose custom #allocator (dealing only with equally sized objects on a single thread, and actively preventing #fragmentation) reduces the resident set in my tests by 5 to 10 MiB compared to "pooling" with linked lists on top of whatever #FreeBSD's #jemalloc serves me. At least something. 🙈
https://github.com/Zirias/poser/blob/master/src/lib/core/objectpool.c
The resident set now stabilizes at 79 MiB after many runs of my somewhat heavy jmeter test simulating 1000 distinct clients.
I just stress-tested the current dev state of #swad on #Linux. The first attempt failed miserably with lots of errors accepting connections. Well, this led to another little improvement: I added another static method to my logging interface that mimics #perror, also printing the description of the system errno. With that in place, I could see the issue was "too many open files". Checking #ulimit -n gave me 1024. Seriously? 🤯 On my #FreeBSD machine, as a regular user, it's 226755. Ok, I bumped that up to 8192 and then the stress test ran through without issues.
On a side note, this also made creating new timers (using #timerfd on Linux) fail, which ultimately made swad crash. I have to redesign my timer interface so that creating a timer may explicitly fail and I can react to that, aborting whatever would need the timer.
Anyway, the same test gave somewhat acceptable results: a throughput of roughly 3000 req/s and response times around 500 ms. Not great, but okay-ish, and not directly comparable because this test ran in a #bhyve VM and the requests had to pass through the virtual networking.
One major issue is still the #RAM consumption: the test left swad with a resident set of more than 540 MiB. I have no idea what to do about that. 😞 The code makes heavy use of allocated objects (every connection object with metadata and buffers, every registered event handler, every timer, and so on), so it uses the #heap a lot, but according to #valgrind it correctly frees everything. Still, the resident set just keeps growing. I guess it's the classic #fragmentation issue ...