If I could go back in time with this manuscript, I would have either designed the workflow to process the ~6000 possibly useful datasets in batches of 100 or so, or split the workflow in two: one part to filter the ~6000 datasets down to the ~700 actually useful ones, and a second part to run the analysis on just those.
I didn't imagine that fifteen-odd processing and analysis steps applied to the ~6000 files would balloon into ~91,000 targets in the workflow. 🤦‍♂️
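The post doesn't say which workflow engine was involved, but for concreteness, here's a minimal sketch of that second option as a Snakemake checkpoint. Everything in it is hypothetical (the rule names, the `filter-datasets`/`analyze-dataset` scripts, the file layout); the point is just the shape: the filter runs once over all candidates and writes out a list, and the expensive rules are only ever instantiated for the datasets on that list.

```python
# Snakefile -- illustrative sketch only; all names and paths are made up.

DATASETS = [line.strip() for line in open("all_datasets.txt")]  # ~6000 ids

# Stage 2's targets are computed *after* stage 1 has run, from the
# list of useful datasets that the checkpoint wrote out.
def useful_targets(wildcards):
    ckpt = checkpoints.filter_datasets.get()
    useful = [line.strip() for line in open(ckpt.output[0])]    # ~700 ids
    return expand("analysis/{d}.done", d=useful)

rule all:
    input:
        useful_targets

# Stage 1: look at all ~6000 candidates once and emit the useful ones.
# Declaring it a checkpoint makes Snakemake re-evaluate the DAG after
# it completes, so rule all's inputs can depend on its output.
checkpoint filter_datasets:
    input:
        expand("raw/{d}.data", d=DATASETS)
    output:
        "useful_datasets.txt"
    shell:
        "filter-datasets {input} > {output}"

# Stage 2: the fifteen-odd expensive steps (collapsed to one rule here
# for brevity) only ever run on the ~700 datasets that passed stage 1.
rule analyze:
    input:
        "raw/{d}.data"
    output:
        "analysis/{d}.done"
    shell:
        "analyze-dataset {input} > {output}"
```

With that split, the expensive part of the DAG holds roughly 700 × 15 ≈ 10,500 targets instead of 6000 × 15 ≈ 91,000. And if the engine does happen to be Snakemake, the first option (batching) is partly built in: something like `snakemake --batch all=1/60` considers only a 1/60 slice of the final targets per run, i.e. batches of ~100 datasets.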