It would be great if you could number the other videos in the OpenHPC series too (like Part 2) so it's a little easier for newbies to follow. Thank you!
Thank you for the video. Unfortunately, though, some of the recording (e.g., listing out the components in Excel) is really low quality, even at 1080p, making it impossible to tell what you are doing.
Interesting. Thanks for the info.
@@sysadminsean Yeah, no idea what was going on. I was on three different computers watching this at 1080p. Tested my network and was getting 300+ Mbps and it did not look good. Now I'm watching again (because your videos keep coming up when searching "OpenHPC" and "Proxmox") and now it looks fine. No idea. In any case, love the series!
This is such a timely video. My shop is trying to get Slurm working to run containers on an HPC system running RHEL and Star-CCM+. It's not going very well. I never knew you could run it using Proxmox. I have to watch this and your next video when I can concentrate. Thank you so much!
Sure thing. This will be the basic-building-blocks part of the cluster. Are you trying to use Slurm to spin up container nodes, or to run containers on nodes? I've done the latter, but I'm still looking into the former.
@@sysadminsean I'm not sure. I think we are trying to use Slurm to spin up container nodes and then possibly run several containers at a time. I'm one step removed from the process and a consultant is doing the work, so I'm a little fuzzy on all of it. I found out I may have to support all of it once we start so I'm trying to get up to speed as fast as I can.
@@SyberPrepper That sounds like it might be Singularity. Hopefully you'll get to find out more soon.
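For reference, running a containerized application under Slurm with Singularity/Apptainer usually comes down to a batch script like the one below. This is a minimal sketch: the image name (`starccm.sif`), the command inside it, and the resource counts are all placeholders you'd replace with your site's values.

```shell
#!/bin/bash
#SBATCH --job-name=container-test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=01:00:00

# Launch one container process per allocated task.
# "starccm.sif" and "run_simulation.sh" are placeholder names.
srun singularity exec starccm.sif ./run_simulation.sh
```

Submit it with `sbatch jobscript.sh`; Slurm handles placing the tasks on nodes, and `singularity exec` runs the given command inside the image on each task.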
This is so cool! We are about to buy a 4-node HPC for chemistry calculations with NWChem, and I will have to set everything up with zero knowledge XD Can't wait for the 3rd episode. I am planning to run it on InfiniBand eventually, but first just simple Ethernet. The nodes will be equipped with FDR cards, and we are also buying an FDR switch; we'll get the cables later.
By the way, could you make a final test in the final video with running an NWChem job on the nodes?
I could try. Do you know of any test jobs I could run, and expected result times? I promise my 'cluster' will be very subpar, but I could absolutely show it 'working'.
@@sysadminsean There must be many test jobs in the NWChem folders. Job time isn't important; what matters is how I can get to the point where I can run jobs across nodes XD The machines come next week. Initially I'm thinking of setting up a simple LAN with the four machines as individual units. Three will run the best installation we can do currently, and one will be left for installation and HPC-creation trials. It would be really appreciated if you could alter your plans at my request ;)
@@szbalogh I'll do what I can but can't make any promises.
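A cross-node NWChem run under Slurm typically looks like the sketch below. This is an assumption-laden example: the module name, input file (NWChem ships sample inputs in its `QA/tests` tree), and node/task counts are placeholders for whatever the actual cluster provides.

```shell
#!/bin/bash
#SBATCH --job-name=nwchem-test
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=8
#SBATCH --time=00:30:00

# Load the site's NWChem build ("nwchem" module name is a placeholder).
module load nwchem

# Run a sample input across all allocated MPI ranks;
# "h2o.nw" stands in for one of the inputs from NWChem's QA/tests directory.
srun nwchem h2o.nw > h2o.out
```

If the output file shows the ranks distributed across all four hostnames, the job really did run across nodes rather than on a single machine.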
Could I make an older Dell R540(?) run Proxmox and have that controlling 2 physical HPC servers (running AMD EPYC processors and 2 GPUs each)?
Well, you wouldn't need Proxmox for that. If you're running 2 servers as compute nodes, you could use the Dell R540 as the head node itself. The only reason to run Proxmox would be if you want snapshotting for modifications to your head node. You'd also need to make sure that when you're deploying your Warewulf images, you pull them from a preinstalled version on the AMD EPYC servers, as they'll have some different code instructions from the Dell R540 (because I assume that's an Intel box).
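The Warewulf side of that usually reduces to importing an image, assigning it to the compute nodes, and rebuilding. A rough sketch with Warewulf v4's `wwctl` follows; the image source URI, image name (`epyc-rocky8`), and node names (`c1`, `c2`) are placeholders, not values from the video.

```shell
# Import a base node image (the registry URI here is just an example source).
wwctl container import docker://ghcr.io/hpcng/warewulf-rockylinux:8 epyc-rocky8

# Point the EPYC compute nodes at that image ("c1,c2" are placeholder node names).
wwctl node set --container epyc-rocky8 c1,c2

# Rebuild the provisioning image and overlays so the nodes boot the new bits.
wwctl container build epyc-rocky8
wwctl overlay build
```

To match the CPU-instruction caveat above, you'd instead build or capture the image on one of the EPYC machines so any architecture-tuned binaries match the hardware they'll boot on.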
Is it possible to configure OpenHPC clustering on normal PCs? (I have three machines with 10th-gen Intel i7 processors, each with 2 TB of disk space and 32 GB of RAM.)
Sure! Just remember that one machine needs to be the provisioning host/head node and the others are compute nodes.