Discussion
#HPC

Christian Meesters
@rupdecat@fediscience.org · 2 days ago

Just getting back from a two-day course teaching #Snakemake programming on #HPC clusters, and I have several observations:

- The education level everywhere else seems way better than on my home cluster. Still, it is a good idea to assume the worst.
- It is never a good idea to accept bring-your-own-device: the Windows users with their tiny laptops will always have connection issues with an HPC system and end up working in a single terminal (usually with tiny fonts).
- Some kind of IDE, common to all participants and on decent screens, is a minimum. There are several options.
- Clusters with QOS (quality of service) settings are confusing for some: one more obscure flag is a lot to take in while digesting a dozen other learning items (see the profile sketch below).
- I need to fix a few things 😊

And most of all: I really need to start conceptualizing a smaller workshop, “bring your workflow, and we will port it to this system”. A new PR to the #SLURM plugin might help to ease the partition selection ...
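
One way to spare course participants that extra flag is a workflow profile that pins the partition and QOS once. A minimal sketch, assuming the snakemake-executor-plugin-slurm executor; the partition and QOS names are placeholders, and resource keys can differ between plugin versions, so check the plugin documentation for your release:

    # config.yaml of a Snakemake workflow profile (illustrative values only)
    executor: slurm
    jobs: 100
    default-resources:
      slurm_partition: "teaching"    # placeholder partition name
      slurm_extra: "'--qos=normal'"  # forwards an arbitrary sbatch flag, here the QOS
      runtime: 60                    # minutes per job unless a rule overrides it
      mem_mb: 2000

Participants then run "snakemake --profile <profile-dir>" and never have to remember the QOS flag themselves.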

Snakemake Release Robot
@snakemake@fediscience.org · 4 days ago

Beep, Beep - I am your friendly #Snakemake release announcement bot.

There is a new release of the Snakemake executor plugin for #SLURM on #HPC systems. It is now at version 1.9.2!

Give us some time, and you will automatically find the plugin on #Bioconda and #Pypi.

If you want to discuss the release, you will find the maintainers here on Mastodon!
@rupdecat and @johanneskoester

If you discover any issues, please report them on https://github.com/snakemake/snakemake-executor-plugin-slurm/issues.

See https://github.com/snakemake/snakemake-executor-plugin-slurm/releases/tag/v1.9.2 for details. Here is the header of the changelog:

Release Notes (possibly abridged):
Bug Fixes

* logo: https://github.com/snakemake/snakemake-executor-plugin-slurm/issues/367
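
Once the packages have propagated, upgrading should look roughly like this (the package name matches the repository above; pinning the exact version is optional):

    # from PyPI, into an existing Snakemake >= 8 environment
    pip install --upgrade snakemake-executor-plugin-slurm

    # or from Bioconda
    conda install -c conda-forge -c bioconda snakemake-executor-plugin-slurm=1.9.2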

Hacker News
@h4ckernews@mastodon.social · 5 days ago

Nvidia DGX Spark: When Benchmark Numbers Meet Production Reality

https://publish.obsidian.md/aixplore/Practical+Applications/dgx-lab-benchmarks-vs-reality-day-4

#HackerNews #Nvidia #DGX #Spark #Benchmark #Production #Reality #AI #Technology #HPC

Your weary 'net denizen boosted
Ludovic Courtès
@civodul@toot.aquilenet.fr · 4 weeks ago

A new season of ☕ Café Guix, the online meet-ups to discuss #Guix for #RechercheReproductible (reproducible research) and #HPC, whether you are just getting started or already all in. 👇

https://hpc.guix.info/events/2025-2026/café-guix/

First episode on 14 October!

Ja-ah-ah-n Lehnardt :couchdb: and 1 other boosted
Kathy Reid
@KathyReid@aus.social · last month

Thought for the day:

We used to measure data centres by how many rack units of capacity they had - e.g. this data centre has the equivalent of 5,000 rack units. This is a measure of physical capacity.

Then we measured data centres in TFLOPS - how many trillions of floating-point operations per second they could perform. This is a measure of computational capacity.

Now we measure data centres in Gigawatts - how much power they consume. This is a measure of power consumption, not physical space or output.

What *should* we measure data centres in? Gigalitres of water consumed? Households disrupted by their construction?

#data #dataCentres #HPC

Stefano Marinelli boosted
Eva Winterschön
@winterschon@mastodon.bsd.cafe · last month

I 💝 OpenZFS

I'm working on research for an HPC storage cluster, and one of my architecture doc sections quotes this information from the wonderful group at Klara:

> OpenZFS In the Wild
>
> .. Lawrence Livermore National Laboratory (LLNL) undertook porting ZFS to Linux, to form the backbone of their Lustre distributed filesystem. They noted that OpenZFS facilitated building a storage system that could support 1 terabyte per second of data transfer at less than half the cost of any alternative filesystem.
>
> Based on the success seen at LLNL, Los Alamos National Laboratory (LANL) started using ZFS as well.
>
> In the latest example, just a few months ago the U.S. Department of Energy’s Oak Ridge National Laboratory announced it had built Frontier, the world’s first exascale supercomputing system and currently the fastest computer in the world, backed by Orion, the massive 700 Petabyte ZFS based file system that supports it. This impressive system contains nearly 48,000 hard drives and 5,400 NVMe devices for primary storage, and another 480 NVMe just for metadata.

https://klarasystems.com/articles/openzfs-openzfs-for-hpc-clusters/#:~:text=built%C2%A0Frontier%2C%20the%20world%E2%80%99s%20first%20exascale,480%20NVMe%20just%20for%20metadata

#openzfs #zfs #freebsd #linux #engineering #supercomputing #hpc

d@nny disc@ mc² boosted
spack
@spack@mast.hpc.social · 3 months ago

Spack v1.0.1 is out!

https://github.com/spack/spack/releases/tag/v1.0.1

This is a bug fix release -- check out the release notes for details! #hpc #spack
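
For a git-based Spack install, picking up the release is roughly a tag checkout (a sketch; adjust the path to wherever your clone lives):

    cd ~/spack                      # hypothetical install location
    git fetch --tags
    git checkout v1.0.1
    . share/spack/setup-env.sh      # re-source the shell integration
    spack --version                 # should now report 1.0.1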

Your weary 'net denizen boosted
ICM
@icm@mastodon.sdf.org · 3 months ago

We're expanding our gallery space in August to set up a Thinking Machines Connection Machine, with an event to celebrate the 40th anniversary. Join as a BOOTSTRAP member to attend the preview and get a behind-the-scenes peek at the CM-2's HOST, a Symbolics 3670 LISP machine, as we restore and recover its *LISP (StarLISP) programming system.

https://icm.museum

https://toobnix.org/w/57vV3XjbcdEjNVqYGTzZJ4

#ai #HPC #retrocomputing #vintagecomputing

der.hans and 1 other boosted
Ludovic Courtès
@civodul@toot.aquilenet.fr · 4 months ago

Want to join a ✨ dream team ✨ to work with #Guix in #HPC? Let’s talk!
https://recrutement.inria.fr/public/classic/fr/offres/2025-09146

bioinformatician_next_door boosted
Ludovic Courtès
@civodul@toot.aquilenet.fr · 4 months ago

Today colleagues and I present not one but two #Guix tutorials at Compas, the French #HPC conference!

① #Guix + #Emacs #Org for #ReproducibleResearch
https://guix-org-tutorial-compas-2025.gitlab.io/tutorial/

② Deploying #HPC code on supercomputers with #Guix
https://guix-hpc.gitlabpages.inria.fr/compas-tutorial-2025/

jbz
@jbz@indieweb.social · 5 months ago

Energy Department Unveils New Supercomputer That Merges With A.I. - The New York Times

「 Lawrence Berkeley National Laboratory expects the new machine — to be named for Jennifer Doudna, a Berkeley biochemist who shared the 2020 Nobel Prize for chemistry — to offer more than a tenfold speed boost over the lab’s most powerful current system 」

https://archive.ph/2025.05.31-144912/https://www.nytimes.com/2025/05/29/technology/energy-department-supercomputer-ai.html

#hpc #ai #science
