This sounds horrifying: https://www.hpcwire.com/2025/11/19/new-world-order-coming-with-new-doe-supercomputers/
Apparently Oracle will be operating the next big DOE systems. I don't have words...
Made my day when @geerlingguy stopped by the Texas Tech #SC25 booth to take a look at our x86 mini-clusters that run Warewulf, Ansible, Spack, and complete scientific #HPC applications with the same software stack as on our main clusters.
@guix may be a good alternative to, or even a companion for, the European Environment for Scientific Software Installations (EESSI).
Quote: "What if there was a way to avoid having to install a broad range of scientific software from scratch on every HPC cluster or cloud instance you use or maintain, without compromising on performance?"
I came across it while preparing #guixastroupdate 2025/11
Deadline for submissions for the 11th #HPC, #BigData, and #DataScience devroom at #FOSDEM26 (Brussels, Sat-Sun 31 Jan + 1 Feb 2026) is Mon 1 Dec 2025. Please see details at the link below. Looking forward to another dynamic, exciting, packed session! https://hpc-bigdata-fosdem26.github.io/
This took a while. After the new version of the Snakemake paper (a rolling paper on F1000) came out, the DOI is now "working" 🥳:
https://doi.org/10.12688/f1000research.29032.3
From my point of view, it particularly describes working with various #HPC batch systems. And development has not ceased. If you want to follow our announcement bot for updates: @snakemake
#Snakemake #ReproducibleComputing #DataAnalysis #OpenScience #WorkflowManagement
Sustainable data analysis with Snakemake
For those who want a quick look, in a few words and pictures, at the subject Jean worked on during his PhD with me: I just saw that the little blurb we were asked to write for the GENCI 2024 activity report has now been published. #physics #astrophysics #fluids #hpc #VisMaVieDeScientifique #ChercheursQuiTrouventUnPeu
P. 43 (formatting typos: 108 K -> 10^8 K and 10243 -> 1024^3)
https://www.genci.fr/sites/default/files/brique/fichier/10-2025/RA2024%20Fr.pdf
Just submitted a talk for FOSDEM (I was invited). They asked me to attach an icon image for the talk, so I drew one. The compute racks are difficult to identify as such, but this is as far as my aquarelle skills go.
arXiv says it will no longer accept review articles and position papers in the computer science (CS) category unless they have been accepted at a journal or a conference and have completed successful peer review.
But then in the details, it turns out that the work doesn't just have to be accepted, it also has to be published with a DOI, and it can't be published by a workshop at a conference, as "the review conducted at conference workshops generally does not meet the same standard of rigor of traditional peer review".
Unless my published position paper is behind a paywall, what value would there be in putting it on arXiv after it has already been published?
And as a workshop organizer who takes peer review seriously, I feel a bit insulted. This workshop/conference distinction seems particularly harsh when many CS conferences have more workshop papers than conference papers, and the workshops, as highly focused venues, can have higher quality peer-review than the parent conference.
And finally, conferences publish non-peer-reviewed work (such as invited papers), which suggests that this new policy isn't really going to work the way the moderators think it will.
As an AEiC of @joss, I understand that dealing with AI-generated papers is a challenge, but I feel like this change is going to harm the community more than it will help it.
When I started working in #HPC, the people I had the misfortune to work with published only with and for conferences. All of that work was studentware or, eventually, abandonware. Classical proof-of-concept stuff.
I cannot comment on the quality of review in this field. But from the group I am thinking of, every output was lousy - and they claimed to go to prestigious conferences. Still today I hardly ever draw relevant information from such CS conference papers. My point of view might be a bit special, I have to admit.
So I don't know whether those arXiv arguments are valid. Yet I feel my - certainly biased - view confirmed.
Edit: PS It is rather unfortunate that the linked blog post does not offer any numbers.
Just getting back from a two-day course teaching #Snakemake programming on #HPC clusters. And I have several observations:
- The education level everywhere else seems way better than on my home cluster. It is still a good idea to assume the worst.
- It is never a good idea to accept bring-your-own-device: the Windows users with their tiny laptops will always have connection issues to an HPC system and will end up working in a single terminal (usually with tiny fonts).
- Some kind of IDE, common to all participants and on decent screens, is a minimum. There are several options.
- Clusters with QOS (quality of service) settings are confusing for some; naturally, one more obscure flag is confusing when you are already digesting a dozen other learning items (see the sketch after this post).
- I need to fix a few things 😊
And most of all: I really need to start conceptualizing a smaller workshop, “bring your workflow, and we will port it to this system”. A new PR to the #SLURM plugin might help to ease the partition selection ...
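On the QOS point: here is a minimal Snakefile sketch of how the SLURM executor plugin can carry the partition and QOS per rule, so participants never have to type the flags themselves. The partition and QOS names below are made-up placeholders (check what your site actually defines with sinfo and `sacctmgr show qos`), and `slurm_extra` simply forwards raw sbatch arguments:

```
# Minimal sketch: pin SLURM partition and QOS per rule so learners
# don't have to remember the obscure flags on the command line.
# "compute" and "normal" are placeholder names - adjust to your site.

rule simulate:
    input:
        "data/params.yaml"
    output:
        "results/simulation.csv"
    resources:
        runtime=60,                     # walltime in minutes
        mem_mb=4000,
        cpus_per_task=4,
        slurm_partition="compute",      # placeholder partition name
        slurm_extra="'--qos=normal'"    # raw sbatch args; QOS placeholder
    shell:
        "python scripts/simulate.py {input} {output}"
```

With the resources pinned like this, a plain `snakemake --executor slurm --jobs 10` is all learners need to remember.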
Beep, Beep - I am your friendly #Snakemake release announcement bot.
There is a new release of the Snakemake executor for #SLURM on #HPC systems: version 1.9.2!
Give us some time, and you will automatically find the plugin on #Bioconda and #Pypi.
If you want to discuss the release, you will find the maintainers here on Mastodon!
@rupdecat and @johanneskoester
If you discover any issues, please report them on https://github.com/snakemake/snakemake-executor-plugin-slurm/issues.
See https://github.com/snakemake/snakemake-executor-plugin-slurm/releases/tag/v1.9.2 for details. Here is the header of the changelog:
Release Notes (possibly abridged):
Bug Fixes
* logo: https://github.com/snakemake/snakemake-executor-plugin-slurm/issues/367
Nvidia DGX Spark: When Benchmark Numbers Meet Production Reality
https://publish.obsidian.md/aixplore/Practical+Applications/dgx-lab-benchmarks-vs-reality-day-4
#HackerNews #Nvidia #DGX #Spark #Benchmark #Production #Reality #AI #Technology #HPC
New season of ☕ Café Guix, the online meet-ups for discussing #Guix for #RechercheReproductible (reproducible research) and #HPC, whether you are just getting started or already all in. 👇
https://hpc.guix.info/events/2025-2026/café-guix/
First episode on 14 October!
Thought for the day:
We used to measure data centres in how many rack units of capacity they had - e.g. this data centre has 5000 rack unit equivalent capacity. This is a measure of physical capacity.
Then we measured data centres in TFLOPS - how many trillions of floating-point operations they could perform per second. This is a measure of computational capacity.
Now we measure data centres in Gigawatts - how much power they consume. This is a measure of power consumption, not physical space or output.
What *should* we measure data centres in? Gigalitres of water consumed? Households disrupted by their construction?
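The units do convert into each other, though: given a power-efficiency figure, gigawatts map straight back onto FLOPS. A back-of-the-envelope sketch in Python, assuming an illustrative 50 GFLOPS/W (roughly the ballpark of today's most efficient Green500 machines; any real facility will differ):

```python
# Back-of-the-envelope: what does "a 1 GW data centre" mean as compute?
# The efficiency figure is an illustrative assumption, not a measurement.

GFLOPS_PER_WATT = 50.0   # assumed efficiency (Green500-leader ballpark)
POWER_GW = 1.0           # hypothetical facility power draw

watts = POWER_GW * 1e9
flops = watts * GFLOPS_PER_WATT * 1e9  # GFLOPS/W -> FLOPS/W

print(f"{POWER_GW:.0f} GW at {GFLOPS_PER_WATT:.0f} GFLOPS/W "
      f"≈ {flops:.1e} FLOPS ≈ {flops / 1e18:.0f} exaFLOPS")
```

Which rather makes the point: a gigawatt only describes the input, and the same gigawatt buys wildly different output depending on hardware generation and workload.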