Three ways to use slurm on a high performance computer (HPC) (CC130)

  • Published: 21 Sep 2024

Comments • 41

  • @Riffomonas
    @Riffomonas  3 years ago +9

    Does your institution have a high performance computer that you have access to?

    • @felipe96150
      @felipe96150 2 years ago

      I'm currently architecting a cluster for the institution that I work for. Thanks for the video.

    • @flscapes
      @flscapes 6 months ago +1

      I work at the University of Oregon with our HPC called Talapas! Great video!

  • @cyg7655
    @cyg7655 3 years ago +13

    The range of topics covered on this channel is truly amazing (and always very practical). Cannot thank you enough!

    • @Riffomonas
      @Riffomonas  3 years ago

      My pleasure - thanks for watching!

  • @Frankie_Freedom
    @Frankie_Freedom 11 months ago +3

    This is great. I'm an HPC admin, and we just switched over from Torque/PBS to Slurm, so this helps me understand Slurm better.

    • @mrrooster7976
      @mrrooster7976 24 days ago

      Me too. I have to know both: the benchmarking group wants PBS and the devs want Slurm. Some work on the same machine, and I need to make sure they don't pick up the same resources. The context switching is exhausting.
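
For anyone juggling both schedulers, the everyday commands map roughly one to one (a quick reference, not exhaustive):

```
PBS/Torque          Slurm
qsub job.sh         sbatch job.sh
qstat -u $USER      squeue -u $USER
qdel <jobid>        scancel <jobid>
pbsnodes            sinfo -N
```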

  • @aleonflux1138
    @aleonflux1138 3 years ago +3

    Perfect timing - I started using SLURM 2 weeks ago and this filled in lots of gaps for me.

    • @Riffomonas
      @Riffomonas  3 years ago

      Wonderful- I’m so glad to hear this was helpful! 😊

  • @taylorprice5297
    @taylorprice5297 3 years ago +3

    Thank you so much for making this video. It makes bioinformatics/metabarcoding analysis way more approachable for me.

    • @Riffomonas
      @Riffomonas  3 years ago

      Fantastic - glad it helped!

  • @borisn.1346
    @borisn.1346 2 years ago +2

    This is an amazing channel - thanks for your tireless work Pat!!

    • @Riffomonas
      @Riffomonas  2 years ago

      My pleasure! Thanks for watching 🤓

  • @1973vgc
    @1973vgc 2 years ago +3

    Great! Please make more videos on this important topic. You are a genius!

  • @666ejames
    @666ejames 5 months ago +1

    You use ls -lth and then scroll up to see the most recent files. If you instead do ls -lrth, it sorts in reverse time order, so the entry just above your prompt after the command is the most recent. That saves a lot of scrolling up, especially in a directory with hundreds of files from previous runs.
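
To see the tip in action, here's a small sketch (directory and file names invented) showing that with -r the newest file lands at the bottom of the listing:

```shell
# Make two files with different modification times (GNU touch -d).
mkdir -p demo_ls
touch -d '2 hours ago' demo_ls/old_run.log
touch demo_ls/new_run.log

# Long listing, reverse time order, human-readable sizes:
# the newest file prints last, right above your prompt.
ls -lrth demo_ls
```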

  • @alaricwdsouza
    @alaricwdsouza 3 years ago +10

    I am pretty familiar with SLURM, but I have no experience with AWS. I would love a primer for AWS!

  • @LuizGNA
    @LuizGNA 1 year ago +2

    Great video and great channel! I've just subscribed and will definitely share with my peers.
    I'm getting started with HPC, so I have a basic question. Since HPCs are mainly based on terminals, how do you follow up after running your jobs? Do you download the data to your own computer? If so, I assume it makes sense to "develop" a script on your computer and then use the HPC when it is mature enough, right?
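
On the workflow question above: a common pattern is to develop and test a script on a small input locally, sync it to the cluster, and pull the results back down when jobs finish. A local sketch (paths invented; on a real HPC the source would be something like user@login-node:/scratch/project/, copied with rsync or scp):

```shell
# remote_scratch stands in for a cluster scratch directory.
mkdir -p remote_scratch local_analysis
printf 'seed,score\n1,0.93\n' > remote_scratch/results.csv

# Locally we just copy; on a real cluster this line would be e.g.
#   rsync -av user@login-node:/scratch/project/ local_analysis/
cp -r remote_scratch/. local_analysis/

head -n 1 local_analysis/results.csv
```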

  • @aigonewrong.
    @aigonewrong. 1 year ago +1

    Thank you for posting this. Very nice clear brief intro on Slurm! I wonder how easy it is to install. We use htcondor+docker to access gpu servers at work, and am considering giving htcondor AND slurm a whirl for side projects at home.

  • @GL-Kageyama
    @GL-Kageyama 7 months ago +1

    Great!

  • @RasmusKirkegaard
    @RasmusKirkegaard 3 years ago +3

    For checking your own jobs in the slurm queue, try "squeue --me"
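
squeue only runs on a cluster, so here's an off-cluster sketch (job IDs invented) of filtering a captured squeue listing for your running jobs with awk, the kind of one-liner that pairs well with squeue:

```shell
# A captured squeue listing; column 5 (ST) is the job state:
# R = running, PD = pending.
squeue_output='JOBID PARTITION NAME  USER ST TIME NODES
12345 standard  align pat  R  1:02 1
12346 standard  align pat  PD 0:00 1'

# Print the job IDs of running jobs only.
echo "$squeue_output" | awk '$5 == "R" {print $1}'
```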

  • @FareedaKalsoom
    @FareedaKalsoom 1 month ago +1

    Ours also uses slurm

  • @jefflucas_life
    @jefflucas_life 6 months ago +1

    I built my own 5-node HPC with Lustre and SLURM/Munge.
    PartitionName=lustrefs
    AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL
    AllocNodes=ALL Default=YES QoS=N/A
    DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO
    MaxNodes=UNLIMITED MaxTime=UNLIMITED MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED MaxCPUsPerSocket=UNLIMITED
    NodeSets=ALL
    Nodes=oss[1-5]
    PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO
    OverTimeLimit=NONE PreemptMode=OFF
    State=UP TotalCPUs=40 TotalNodes=5 SelectTypeParameters=NONE
    JobDefaults=(null)
    DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED
    TRES=cpu=40,mem=39365M,node=5,billing=12
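
That dump looks like the output of "scontrol show partition lustrefs"; the slurm.conf line defining such a partition might look like this (a sketch reconstructed from the values above):

```
# slurm.conf (partition definition only)
PartitionName=lustrefs Nodes=oss[1-5] Default=YES MaxTime=UNLIMITED State=UP
```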

  • @liutrvcyrsui
    @liutrvcyrsui 1 year ago +1

    04:27 AWS
    21:48 slurm arrays
    23:00
    #!/bin/bash
    ...
    #SBATCH --array=1-10
    SEED=$((SLURM_ARRAY_TASK_ID))
    echo $SEED
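
Off the cluster you can mimic what --array=1-10 does by looping over the task IDs Slurm would hand each job, one value of SLURM_ARRAY_TASK_ID per job:

```shell
# Each array task gets its own SLURM_ARRAY_TASK_ID; here we loop to
# simulate three of the ten tasks and derive a seed from each.
for SLURM_ARRAY_TASK_ID in $(seq 1 3); do
    SEED=$((SLURM_ARRAY_TASK_ID))
    echo "task $SLURM_ARRAY_TASK_ID -> seed $SEED"
done
```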

  • @nairuzelazzabi2172
    @nairuzelazzabi2172 1 year ago +2

    Thank you for the amazing video. Quick question: why did you have to re-run the single slurm script again at 25:50?

    • @Riffomonas
      @Riffomonas  1 year ago

      I think I was trying to show how to run an array job that would fire off multiple jobs rather than a job that only fired off one seed

  • @dikshantrajwal9987
    @dikshantrajwal9987 1 year ago +1

    Hi, can you help me figure out how to define custom resources in slurm? (The resource should have a count and be associated with multiple nodes.)
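
Countable per-node resources in Slurm are usually defined as generic resources (GRES). A minimal slurm.conf sketch (resource and node names hypothetical):

```
# slurm.conf: declare the GRES type, then attach a count to the nodes.
GresTypes=license
NodeName=node[01-04] Gres=license:10
```

Jobs would then request the resource with, e.g., sbatch --gres=license:2.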

  • @Learning432
    @Learning432 1 year ago +1

    Hi, I need help with how to run the g16 package on the server through slurm.

    • @Riffomonas
      @Riffomonas  1 year ago

      Hi - I'd encourage you to reach out to the system administrators for your HPC for help with this question.

  • @JOHNSMITH-ve3rq
    @JOHNSMITH-ve3rq 3 years ago +3

    Wanna do Google Cloud Platform sometime?

    • @Riffomonas
      @Riffomonas  3 years ago

      That would be awesome to try. I’ve worked on AWS but should check out google cloud too!

  • @AlexMiller-Wuppertal
    @AlexMiller-Wuppertal 3 years ago +2

    All the best! But also come visit me!

  • @xiaoli0510
    @xiaoli0510 2 years ago +2

    What is the meaning of the "make" command in your script?

    • @Riffomonas
      @Riffomonas  2 years ago +1

      Make is a program that can be used to automate workflows while keeping track of dependencies. I made a video about it a while back … ruclips.net/video/eWHE2RIGrWo/видео.html

  • @Joshthegoated
    @Joshthegoated 6 months ago +1

    I need help installing slurm

    • @Riffomonas
      @Riffomonas  6 months ago

      Sorry I’m not much help with this. Our HPC administrators maintain slurm for us

  • @omarelbliety3949
    @omarelbliety3949 1 year ago

    What the eps for AWS ever done ?

  • @canadianrepublican1185
    @canadianrepublican1185 1 year ago

    It's not High Performance Computer, it's High Performance Computing. Academics run this type of equipment like grandmothers drive cars.

  • @canadianrepublican1185
    @canadianrepublican1185 1 year ago

    Never Ever use AWS for HPC.