Pre-LLM-assisted debugging tools.
A crippling and subtle timing bug in the onboard software of the Jupiter-bound JUICE spacecraft. It takes the right stuff to troubleshoot such an issue on a running system 200 million km away.
https://www.esa.int/Enabling_Support/Operations/Juice_team_resolves_anomaly_on_approach_to_Venus
🥳 New Kitten Release
Housekeeping:
• Updated the runtime to Node.js 22.18.0 (latest LTS).
• Removed --experimental-global-customevent from the node launch command (CustomEvent is no longer behind a CLI flag since Node.js 19.0.0).
• Renamed the --experimental-loader flag to --loader, as the experimental prefix is no longer required.
Enjoy!
💕
#Kitten #KittenRelease #SmallWeb #SmallTech #web #dev #HTML #JavaScript #CSS #NodeJS
Ah, and also, forgot to mention this change:
Improved:
• Debugging your Kitten app is now easier when you run it using `INSPECT=true kitten …`, as the Node runtime is launched using the `--inspect-brk` flag instead of the `--inspect` flag. This means that execution will wait for your debugger (e.g., Chromium’s DevTools at `chrome://inspect`, etc.) to connect before starting the server. This makes it possible to hit breakpoints that might previously have been impossible to reach because they occurred before you had a chance to attach the debugger.
Full change log:
https://codeberg.org/kitten/app/src/branch/main/CHANGELOG.md#2025-08-12
#Kitten #SmallWeb #SmallTech #NodeJS #debugger #debugging #web #dev #JavaScript
🐍 Decode Any Python Code With This 5-Step Method
「 It’s rare to write all the original code in an application yourself, and even rarer for an application to be completely rewritten from scratch. More likely, your workflow will involve working with code that was written by someone else (who may no longer be available to explain it) and iterated on by others (who also might be out of reach) long before it ever appears on your screen 」
https://thenewstack.io/decode-any-python-code-with-this-5-step-method/
🥳 Xdebug 3.4.5 Released!
🐛 This is a bug fix release.
🧬 One fix addresses crashes when using Xdebug's debugger with (nested) Fibers.
↪️ A second fix addresses an issue where, while debugging, Xdebug sometimes calls get property hooks, which can then change the object's state.
📄 The full list of changes can be found on the updates page: https://xdebug.org/announcements/2025-07-14
I need help. First the question: On #FreeBSD, with all ports built with #LibreSSL, can I somehow use the #clang #thread #sanitizer on a binary actually using LibreSSL and get sane output?
What I now observe debugging #swad:
- A version built with #OpenSSL (from base) doesn't crash. At least I tried very hard, really stressing it with #jmeter, to no avail. Built with LibreSSL, it does crash.
- Less relevant: the OpenSSL version also performs slightly better, but needs almost twice the RAM
- The thread sanitizer finds nothing to complain about when built with OpenSSL
- It complains a lot with LibreSSL, but the reports look "fishy", e.g. it seems to intercept some OpenSSL API functions (like SHA384_Final)
- It even complains when running with a single-threaded event loop.
- I use a single SSL_CTX per listening socket, creating SSL objects from it per connection (roughly the pattern sketched below) ... also with multithreading; according to a few sources, this should be supported and safe.
- I can't imagine that doing this on a *single* thread could break with LibreSSL; I mean, that would make SSL_CTX pretty much pointless
- I *could* imagine that sharing the SSL_CTX across multiple threads to create their SSL objects from it *might* not be safe with LibreSSL, but I have no idea how to verify that as long as the thread sanitizer gives me "delusional" output 😳
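For reference, here's a minimal sketch of the pattern described above: one SSL_CTX per listening socket, one SSL object per accepted connection, possibly across worker threads. This is not swad's actual code; the function names are hypothetical and the build line is just the usual TSan incantation:

/* Build (sketch): cc -g -fsanitize=thread tls_sketch.c -lssl -lcrypto -lpthread */
#include <openssl/ssl.h>

/* Created once per listening socket, before any worker threads start. */
SSL_CTX *make_listener_ctx(const char *certfile, const char *keyfile)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_server_method());
    if (!ctx) return NULL;
    if (SSL_CTX_use_certificate_chain_file(ctx, certfile) != 1
            || SSL_CTX_use_PrivateKey_file(ctx, keyfile, SSL_FILETYPE_PEM) != 1)
    {
        SSL_CTX_free(ctx);
        return NULL;
    }
    return ctx;
}

/* Called once per accepted connection, potentially from different threads.
 * The shared ctx is only read here; each SSL object stays on exactly one
 * thread/connection. This is the usage the sources above call safe. */
void handle_connection(SSL_CTX *ctx, int connfd)
{
    SSL *ssl = SSL_new(ctx);
    if (!ssl) return;
    SSL_set_fd(ssl, connfd);
    if (SSL_accept(ssl) == 1)
    {
        /* ... read/write on this connection ... */
        SSL_shutdown(ssl);
    }
    SSL_free(ssl);
}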
For two days straight, I just can't reproduce #swad #crashing with *anything* in place (#clang #sanitizer instrumentation, an attached #debugger like #lldb) that could give me the slightest hint of what's going wrong. 😡
But it *does* crash when "unobserved". And it looks like this is happening a lot sooner (or, more often?) when using #LibreSSL ... but I also suspect this could be a red herring in the end.
The situation reminds me of my physics teacher back at school, who used to say something in German I just can't ever forget:
"Wer misst, misst Mist."
A feeble attempt in English would be "the one who measures, measures crap"; it was his humorous way of bringing one consequence of #Heisenberg's indeterminacy principle to the point. And indeed, #debugging computer programs always suffers from similar problems...
Visualize and debug Rust programs with a new lens
Panic most recently used by lkpikmalloc ...
Well, that was fast... didn't even get a mouse cursor or a full MATE Desktop menu load. Was yet to connect kgdb to COM1 (need to swap from minicom to do so)... makes me want a PCIe RS232 card (for "comconsole_pcidev") so that I have a few more COMs to play with on redirects. Gotta love these iGPU trash-bins, eh? "It's better than not having a GPU, right?" ... not really.
Closed bug report from drm-515-kmod, discussing an amdgpu memory leak. So maybe there's a new one in drm-61-kmod; I would not be surprised.
- https://github.com/freebsd/drm-kmod/issues/258
Short term revision of approach:
----
1. Arriving today via post: an AMD Radeon Pro W7500 (single-slot, 8GB, Navi-whatever gen)
2. I'll block off the iGPU during the loader.conf sequence, using a "pptdev" blackhole (not for VM passthrough, but maybe an experiment for a 14.1 VM with the known-good amdgpu version); see the loader.conf sketch after this list.
3. Known as: throw money at the problem?
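Blackhole sketch (hedged; the bus/slot/function selector is a placeholder, take the real one from pciconf -lv):
----
# vmm's ppt driver claims the device so the host driver (amdgpu) never attaches
vmm_load="YES"
# placeholder selector for the Raphael iGPU; use the values pciconf -lv reports
pptdevs="1/0/0"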
Some hardware notes:
----
1. This is not an Nvidia GPU situation; several generations of cards in the room have been cycled through the workstation during "hardware isolation" and "process of elimination" sequences. I know those are stable, and which generation of card requires which nvidia driver version for stability.
2. This is not a FreeBSD kernel issue, nor a Xorg "Plain Jane FrameBuffer" situation. The kernel (14.0, 14.1, 14.2) is stable and fine, and the basic vt driver for non-4K display-port functionality works fine. I can work all day in a series of tmux windows with some fifty or so panes, but that's not quite the optimal experience.
3. The AMD iGPU (Raphael) defaults to a maximum of 512MB of GART VRAM, and it can handle 240Hz @ 4K all day with no issues as long as that 512M doesn't get used up... that is, until the latest amdgpu kmod drm, which crashes whenever it feels like it.
Michael... yes yes, I do have a lot of hardware, but this issue has surpassed the Sunk Cost Fallacy and has become a consummate knowledge-requirement process. I must know where this is failing so horrendously; otherwise, the operating rule of "if it doesn't fulfill its hardware destiny, it gets the hammer and flames" applies... and the hardware is too nice for that. Plus, I could involve Supermicro support since it's still in warranty, but a replacement motherboard or CPU for the iGPU isn't going to solve a kernel module issue.
In the interim, laptop life and tablet meetings are getting me by, mostly decently.
Debug items of interest:
----
intsmb0: at device 20.0 on pci0
intsmb0: Could not allocate I/O space
device_attach: intsmb0 attach returned 6
drmn0: Fetched VBIOS from VFCT
amdgpu: ATOM BIOS: 102-RAPHAEL-008
drmn0: Trusted Memory Zone (TMZ) feature not supported
drmn0: PCIE atomic ops is not supported
drmn0: VRAM: 512M 0x000000F400000000 - 0x000000F41FFFFFFF (512M used)
[drm ERROR :amdgpu_bo_init] Unable to set WC memtype for the aperture base
Loader items of usage:
----
# Multi-Console Output
# boot output primary: TTY, standard monitor via UEFI
# boot output secondary: COM1 RS232 Redirect (physical)
# boot output tertiary: COM2 RS232 Redirect (BMC SoL)
ipmi_load="YES"
boot_mute="NO"
boot_verbose="YES"
verbose_loading="YES"
boot_multicons="YES"
boot_serial="YES"
console="efi,comconsole,comconsole"
comconsole_port1="0x3F8"
comconsole_speed1="115200"
comconsole_port2="0x2F8"
comconsole_speed2="115200"
hw.uart.console="io:0x3f8,br:115200 io:0x2f8,br:115200"
#amd #gpu #drm616kmod #freebsd #debugging #engineering #4amlife #5amlife #debuggingForever #tiredOfDebugging #goddamnMemoryLeaks #linuxDriversByAssociation #moreTerminals #rs232 #tiredNowGoNap #friday
💾 4am AMD Xorg/Kernel Debugging 💾
Ongoing fun ongoes, so much so. This iteration receives triple output: tty video (DisplayPort), COM1 redirected via DB9 to a laptop running minicom, and COM2 to the usual BMC SoL terminal watched from ipmitool on an adjacent laptop.
Coming up: rebooting into the GENERIC-DEBUG kernel, rebuilt with remote GDB (kgdb) access via the COM1 link to the laptop.
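Debug kernel sketch (hedged; the object tree path and the laptop's serial device below are placeholders):
----
# Debug kernel knobs (stock options; the point is keeping full symbols around)
makeoptions     DEBUG=-g
options         KDB
options         DDB
options         GDB
# Laptop side, once the target drops into its gdb backend:
#   kgdb /usr/obj/.../sys/GENERIC-DEBUG/kernel.debug
#   (kgdb) set serial baud 115200
#   (kgdb) target remote /dev/cuaU0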