Does your institution have a high performance computer that you have access to?
I'm currently architecting a cluster for the institution that I work for. Thanks for the video.
I work at the University of Oregon with our HPC called Talapas! Great video!
The range of topics covered on this channel is truly amazing (and always very practical). Cannot thank you enough!
My pleasure - thanks for watching!
This is great, I'm an HPC admin and we just switched over from Torque/PBS to Slurm, so this helps me understand Slurm better.
Me too. I have to know both: the benchmarking group wants PBS, the devs want Slurm. Some work on the same machine and I need to make sure they don't pick up the same resources. The context switching is exhausting.
Perfect timing - I started using SLURM 2 weeks ago and this filled in lots of gaps for me.
Wonderful- I’m so glad to hear this was helpful! 😊
Thank you so much for making this video. It makes bioinformatics/metabarcoding analysis way more approachable for me.
Fantastic - glad it helped!
This is an amazing channel - thanks for your tireless work Pat!!
My pleasure! Thanks for watching 🤓
Great, please make more videos on this important topic! You are a genius!
Thanks Vir!
You use ls -lth and then scroll up to see the most recent stuff. If you instead do ls -lrth, it will sort in reverse time order, so the thing just above your prompt after the ls command will be the most recent. Saves lots of scrolling up, especially in a directory with hundreds of files from previous runs.
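A quick illustration (-l is long format, -r reverses the sort, -t sorts by modification time, -h gives human-readable sizes):

ls -lth     # newest files at the top, so you scroll up to find them
ls -lrth    # newest files at the bottom, right above your next prompt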
awesome tip!
I am pretty familiar with SLURM, but I have no experience with AWS. I would love a primer for AWS!
Great video and great channel! I've just subscribed and will definitely share with my peers.
I'm getting started with HPC, so I have a basic question. Since HPCs are mainly based on terminals, how do you follow up after running your jobs? Do you download the data to your own computer? If so, I assume it makes sense to "develop" a script on your computer and then use the HPC when it is mature enough, right?
Thank you for posting this. Very nice clear brief intro on Slurm! I wonder how easy it is to install. We use htcondor+docker to access gpu servers at work, and am considering giving htcondor AND slurm a whirl for side projects at home.
Great!
For checking your own jobs in the slurm queue try "squeue --me"
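A couple of variations on that (a sketch; $USER expands to your own login):

squeue --me               # only your own jobs
squeue -u $USER           # same idea; also works on older Slurm releases
squeue --me -t RUNNING    # only your jobs that are currently running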
Awesome - thanks!
Ours also uses slurm
I built my own 5-node HPC with Lustre and SLURM/Munge.
PartitionName=lustrefs
AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL
AllocNodes=ALL Default=YES QoS=N/A
DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO
MaxNodes=UNLIMITED MaxTime=UNLIMITED MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED MaxCPUsPerSocket=UNLIMITED
NodeSets=ALL
Nodes=oss[1-5]
PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO
OverTimeLimit=NONE PreemptMode=OFF
State=UP TotalCPUs=40 TotalNodes=5 SelectTypeParameters=NONE
JobDefaults=(null)
DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED
TRES=cpu=40,mem=39365M,node=5,billing=12
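For anyone wanting to compare against their own cluster, output in that shape comes from scontrol (partition name here matches the one above):

scontrol show partition lustrefs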
04:27 AWS
21:48 slurm arrays
23:00
#! /bin/bash
...
#SBATCH --array 1-10
SEED=$(( SLURM_ARRAY_TASK_ID ))
echo $SEED
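To fill that out, here is a fuller sketch of an array script along those lines (the job name, output pattern, and walltime are my own assumptions, not taken from the video):

#!/bin/bash
#SBATCH --job-name=seed_array     # hypothetical job name
#SBATCH --output=seed_%A_%a.out   # %A = array job ID, %a = array task ID
#SBATCH --time=00:10:00           # assumed walltime; adjust for your cluster
#SBATCH --array=1-10              # submits 10 tasks with task IDs 1..10

SEED=$(( SLURM_ARRAY_TASK_ID ))   # each task uses its own task ID as the seed
echo "Running with seed $SEED"

Submit it once with sbatch and Slurm launches all ten tasks, each seeing a different SLURM_ARRAY_TASK_ID.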
Thank you for the amazing video. Quick question: why did you have to re-run the single slurm script again at min 25:50?
I think I was trying to show how to run an array job that would fire off multiple jobs rather than a job that only fired off one seed
Hi, can you help me figure out how I can define custom resources in Slurm? (The resource should have a count and be associated with multiple nodes.)
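In case it helps, countable per-node resources are usually defined through Slurm's generic resources (GRES). A minimal sketch, assuming a made-up resource called "widget" on made-up nodes:

# slurm.conf (excerpt): declare the GRES type and attach counts to nodes
GresTypes=widget
NodeName=node[1-4] Gres=widget:2

# gres.conf on each of those nodes
Name=widget Count=2

# jobs then request the resource like this (myjob.sh is hypothetical)
sbatch --gres=widget:1 myjob.sh

Your cluster's admins would need to make these config changes; the gres.conf man page has the details.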
Hi, I need help with how to use the g16 package on the server through Slurm.
Hi - I'd encourage you to reach out to the system administrators for your HPC for help with this question.
Wanna do Google Cloud Platform sometime?
That would be awesome to try. I’ve worked on AWS but should check out google cloud too!
All the best. But come visit me too!
What is the meaning of the "make" command in your script?
Make is a program that can be used to automate workflows while keeping track of dependencies. I made a video about it awhile back … ruclips.net/video/eWHE2RIGrWo/видео.html
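To get a feel for it, a tiny hypothetical Makefile (file names invented; note the recipe line must start with a tab):

# 'make results.txt' reruns the recipe only when data.txt or process.sh changed
results.txt: data.txt process.sh
	./process.sh data.txt > results.txt

That dependency tracking is what makes it handy for gluing analysis steps together.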
I need help installing Slurm.
Sorry I’m not much help with this. Our HPC administrators maintain slurm for us
What the eps for AWS ever done?
It's not High Performance Computer, it's High Performance Computing. Academics run this type of equipment like grandmothers drive cars.
Never Ever use AWS for HPC.