I was checking, for a faculty candidate, what the #HPC is offering on the newest iteration of its cluster (which charges a fee per CPU-second of use). The high-mem instance? 64 GB of RAM. Which is what my <checks notes> decade-plus-old server has.
Within the last two weeks I was running a relatively simple random forest that needed 80+ GB of RAM on version 1 of the cluster. Well, I guess I'm not doing that after it's decommissioned. Which sent me down the rabbit hole: what would it take for me to build a 256 GB RAM server?