EPYC TrueNAS Scale Build and VM Install

  • Published: 31 Jan 2025
  • Between the power draw and cooling requirements of running my own server rack, it's time for some much-needed consolidation, starting with virtualizing my TrueNAS and Proxmox servers into one massively overkill box.
    Oh, and I'll be installing TrueNAS Scale, based on Debian Linux!
    But first... What am I drinking???
    A two-fer of disappointments, unfortunately. From the Plug and Play IPA from Matchless to the KCBC Kung Fu Karaoke, there's just not a lot of good I can say.
    Links to items below may be affiliate links for which I may be compensated
    Check out parts from today's build:
    Supermicro MBD-H11SSL-I-O Socket SP3: amzn.to/3f8H79P
    AMD Epyc 7601 32-Core CPU: ebay.to/3ngX3xc
    I've got merch, and you can get it too!
    craftcomputing...
    Follow me on Twitter @CraftComputing
    Support me on Patreon or Floatplane and get access to my exclusive Discord server. Chat with me and the other hosts on Talking Heads all week long.
    / craftcomputing
    www.floatplane...
    Music:
    Acid Trumpet by Kevin MacLeod
    Link: incompetech.fi...
    License: filmmusic.io/s...

Comments • 229

  • @hardbrocklife
    @hardbrocklife 1 year ago +27

    I love tech YouTubers that actually leave their mistakes in their videos. Mistakes are not a loss; they are just data conducive to performing better in the future.

  • @adamw9764
    @adamw9764 3 years ago +48

    Every time I watch one of your videos, I want more stuff... my wife and I are already talking about setting up a rack in the garage and priced out some AC options... thanks a lot, you glorious glorious bastard, you

  • @stetsonwhitehouse7495
    @stetsonwhitehouse7495 3 years ago +4

    I appreciate your humor and personality! I've been watching you for a while. Always enjoy it!

  • @jessiedehart45
    @jessiedehart45 5 months ago

    2 years later I finally find this video, right when I plan to do a very similar thing on Xeon E5 2687w V4. Love all the information I get from your videos and watch all that I can, but, this one was serendipitous to say the least! Keep up the awesome work!

  • @GreySkullification
    @GreySkullification 1 year ago

    Gotta say, you probably didn't intend it, but you have one of the only and best videos I could find on passing an HBA and associated disks directly to a TrueNAS VM. You are the best, Jeff! I have SMART data!!!

  • @Burnman83
    @Burnman83 3 years ago +25

    Nice build!
    If that is possible, it'd be really interesting to see a direct comparison of the wattage needed in a typical idle situation between all the servers you had before and the one virtual host you have afterwards =)

  • @mianderson86
    @mianderson86 3 years ago +82

    One thing I learned when scaling our data center up: anything using DDR3 these days, while still great hardware, is far less efficient than modern server hardware, and the same work can be done with fewer chassis all around. It's wild you can scale 7 servers down to 1 and the efficiency will be awesome!

    • @KaimasterXD
      @KaimasterXD 3 years ago +10

      And modern hardware also tends to have a way lower idle power draw. And it is somewhat likely that your home lab will spend a lot of its lifetime at low or even no load.

    • @wheisenberg559
      @wheisenberg559 3 years ago +5

      Agreed, my Ryzen 5800X runs circles around my Lenovo D30 (2x E5-1680v2). Still, for a cheap entry, these offer great performance.

    • @nadtz
      @nadtz 3 years ago +5

      I have an old client we condensed from 38U down to 10 last year, after we did a rack audit and realized how old some of their stuff was. Keep in mind, 8U of that was replacing all the servers with two 2U machines, another 2x2U for failover/load-bearing machines, and we were able to throw in two 1U offline backup machines. Was a fun project for me, as I got to do all the testing before we rolled out, and a happy client is always a good thing. That said, my home machines are still old v2/v3 Xeon machines because my needs are minor: one test machine and a FreeNAS box.

  • @Dean_Smith
    @Dean_Smith 3 years ago +61

    If only there were some remote KVM based on a Pi... and a video about how to do it... with beer... (drinkable beer, at that)

    • @vaidkun
      @vaidkun 3 years ago

      I think that Supermicro board has a BMC that includes KVM with a dedicated network port, so there's no need to use a separate KVM on a Pi. I don't know why Craft Computing doesn't use it, because that's the point of using a server motherboard.

    • @brunekxxx91
      @brunekxxx91 3 years ago

      Well, I guess there is PiKVM (but that takes a lot of resources, so a Raspberry Pi might not run it well, even if it's named PiKVM, so yeah)

    • @terancemoore5861
      @terancemoore5861 1 year ago

      ​@@vaidkun😊😊😊😊😊😊

  • @TechyGuy17
    @TechyGuy17 3 years ago +6

    What would I do in life without a new Craft Computing video? Got to feed my server need without spending more money ;)

  • @demonrz2655
    @demonrz2655 3 years ago +2

    I do gotta say, I'm glad the home lab era is here. I myself run quite a few servers in my own home lab, including two DL360 Gen8s (one with 352 GB DDR3, the other unused until I set up the HVM), a Synology RS2421+ NAS, a Dell PowerEdge R330, and an old HP DL380 Gen5, running off a Tripp Lite SMART1500LCD 1500VA UPS and an Alcatel 3750G 52-port switch. If you have time, I'd like to get some additional tips for setup. Been playing around with enterprise servers for a while and found more efficient methods of hosting cloud services and VMs that I've been utilizing. Gotta love corporates who won't listen, lol.

  • @stephenp4440
    @stephenp4440 3 years ago +2

    This is a great evolution. I have a very similar setup to your old TrueNAS server, and I want to virtualize it, but I came to the same conclusion that the Ivy Bridge wasn't enough horsepower.

  • @Maxw3llTheGreat
    @Maxw3llTheGreat 3 years ago +12

    The thing I'm most excited for is the Linux drivers for that Fusion ioDrive. I bought a 1.2TB one a couple years ago to use as a cache, only to find out it was basically impossible at that time. This will probably give me a reason to upgrade from the FreeNAS OS I'm still running on my server.

  • @rcdenis1
    @rcdenis1 3 years ago +2

    I knew this video was coming! Thank you Jeff!!

  • @JasonLeaman
    @JasonLeaman 3 years ago +2

    I used one of those ioDrives in my Lenovo ST550 as a storage volume; they worked well and ESXi loves them. No issues at all.

  • @NightHawkATL
    @NightHawkATL 3 years ago +8

    I second all the others that say to create a bridge. I did that for a time to dedicate ports from my 4-port GbE card to TrueNAS, a Windows VM, and Plex, and then I decided to do a LAGG group of all 4 ports. I don't have 10Gb or a real need for it yet, so I am sticking with what I have. I bought a second 4-port card for future expansion and will do that as a LAGG as well, once I get a 16-port switch that supports it.

    • @kienanvella
      @kienanvella 3 years ago +1

      More specifically, definitely prefer Open vSwitch bridges; the performance is much better than the default Linux bridging.
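      An illustrative sketch of that suggestion (not from the video; the bridge name vmbr1 and port name eno1 are assumptions, and the openvswitch-switch package must be installed on the Proxmox host):

```
# Create an OVS bridge and attach one physical NIC to it.
ovs-vsctl add-br vmbr1
ovs-vsctl add-port vmbr1 eno1
ovs-vsctl show    # verify the bridge/port layout
```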

    • @peny1981
      @peny1981 3 years ago

      Did you manage to make file sharing to a single client use all 4 ports, i.e. did you manage to achieve a transfer of, e.g., 400MB/s to the client?

    • @NightHawkATL
      @NightHawkATL 3 years ago

      @@peny1981 I haven't done a full test, but the LAGG config doesn't really combine the ports to use as one. The way it works allows for multiple lanes, so if one is busy with a large transfer of data then traffic moves to the next. It's like the difference between one lane on an interstate or highway and multiple lanes.
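      What that comment describes matches how LACP bonds behave; a hypothetical /etc/network/interfaces fragment for such a bond (interface names are assumptions, and the switch ports must be configured for LACP too):

```
auto bond0
iface bond0 inet manual
    # One TCP stream still rides a single link; multiple streams are
    # hashed across the four ports, like extra lanes on a highway.
    bond-slaves enp5s0f0 enp5s0f1 enp5s0f2 enp5s0f3
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100
```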

  • @jacobnoori
    @jacobnoori 3 years ago

    Thanks for leaving in the learning process - It helps a ton when I'm doing research.

  • @jonmayer
    @jonmayer 3 years ago

    FreeNAS, er, TrueNAS is so set-it-(up)-and-forget-it that I often don't keep up with its new offerings. TrueNAS Scale sounds great, thanks for the info. I'm going to try this out for my new storage/Plex server.

  • @herrminni
    @herrminni 3 years ago +5

    Hi Jeff,
    just wanted to add my findings on virtualized TrueNAS Scale with ZFS encryption:
    For the CPU type you have to set "host", as otherwise no AES / AVX extensions work inside the TrueNAS VM.
    With the default CPU type "kvm64" that Proxmox uses, AVX is not supported,
    so you get a bad transfer rate and high CPU usage without this setting.
    One can check whether the CPU instructions are available with the Linux command:
    "cpuid | egrep -i "(avx|aes)" | sort | uniq | grep true"
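    The same flags can also be checked without the cpuid tool by reading /proc/cpuinfo, which any Linux guest exposes (a minimal sketch, not from the video):

```shell
# Report whether the guest's CPU model exposes the aes and avx flags.
# With Proxmox's default kvm64 CPU type, both typically come up missing.
flags=$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)
for f in aes avx; do
  case " $flags " in
    *" $f "*) echo "$f: available" ;;
    *)        echo "$f: missing" ;;
  esac
done
```

    If either flag shows "missing" inside the VM, setting the VM's CPU type to "host" and rebooting the guest is the usual fix.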

  • @Alex-sc2rc
    @Alex-sc2rc 2 years ago +1

    Came here to install TrueNAS inside a virtual machine. Stayed for a random dude running to his garage multiple times. Cheers

  • @christoffer4017
    @christoffer4017 3 years ago +1

    I appreciate that you left in the part where passing through the NICs didn't work

  • @shaniqualatoya5012
    @shaniqualatoya5012 3 years ago +1

    Loving the server series!

  • @Gunzy83
    @Gunzy83 2 years ago

    Literally just did this and imported my pool, which was originally on Arch, then Proxmox, and then TrueNAS Scale on bare metal. I had some trouble with VM features I needed that were missing in Scale, but I really wanted the appliance experience to manage, monitor, and share my pools (via NFS, iSCSI, and democratic-csi). Proxmox to the rescue.

  • @bonemealmc
    @bonemealmc 3 years ago +9

    This is gonna be Epyc!

  • @Zphor4jc
    @Zphor4jc 2 years ago

    I appreciate the beer grading at the end. The computer talk was good too, but the beer made the video.

  • @stucorbishley
    @stucorbishley 3 years ago

    Got strong Ocean's 11 vibes from that build montage.. 👌

  • @jefffontenot1782
    @jefffontenot1782 1 year ago

    From one Jeff to another... That was a good intro! 😂

  • @AmnesiaPhotography
    @AmnesiaPhotography 3 years ago

    This is one of the reasons I bought a new EPYC-based rackmount server. 24 SATA bays, 8 of which can also accept U.2 NVMe, all in a single box, and I can expand RAM to 1TB as needed. Sure, it wasn't cheap hardware-wise, but I don't have to worry about lack of VMware processor support and can support a bunch of workloads.

  • @syedmwma
    @syedmwma 3 years ago

    Nice! As always, looking forward to this.

  • @chromerims
    @chromerims 1 year ago

    Ballooning RAM off. Useful, thank you 👍
    So was switching from TN-Core to TN-Scale more than anything else driven by wanting to accommodate those nice VSL3/4 expansion cards? Maybe I missed the justification in the video.
    Kindest regards, friends and neighbours.

  • @jaymax97
    @jaymax97 3 years ago +2

    Thank you for this tutorial! Been virtualizing TrueNAS Core in Proxmox, but wanting to reinstall it bare metal due to Proxmox cluster issues, and I want TrueNAS on 24/7. Gonna try Scale for the Debian driver support!

  • @CheapSushi
    @CheapSushi 3 years ago

    These are my favorite kind of videos.

  • @MatthewHill
    @MatthewHill 3 years ago +1

    You can open a *second* beer? Dude, I'd be at least a 12-pack in by that point. :-)

  • @ArifKamaruzaman
    @ArifKamaruzaman 3 years ago

    Your videos are always good. I didn't even have a NAS, but I'm building one from scrap hardware I have.

  • @LampJustin
    @LampJustin 3 years ago +1

    I love the Level1Techs vibes ❤️❤️

  • @ToXXeRR
    @ToXXeRR 3 years ago +1

    Hey Jeff, in terms of TrueNAS Scale and reverting back to TrueNAS Core: the new beta of Scale is using a newer version of ZFS, with an upgrade that can't be undone. I am sure someone mentioned this already, but just in case, here you go.

    • @bikerchrisukk
      @bikerchrisukk 3 years ago

      That's true as far as I know too, there's no undo button for the file system.

  • @blkspade23
    @blkspade23 3 years ago +2

    You missed the fact that the network devices share the same IOMMU group, which is the blocker for splitting them. The solution would probably be to create a separate bridge to map the VM NIC to.
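    A sketch of that separate-bridge approach in Proxmox's /etc/network/interfaces (the port name enp5s0f1 is an assumption, not the video's actual config):

```
# vmbr1 carries exactly one physical port; attaching a VM's virtio NIC
# to vmbr1 dedicates that port to the VM without PCI passthrough,
# sidestepping the shared IOMMU group entirely.
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp5s0f1
    bridge-stp off
    bridge-fd 0
```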

  • @markmcelroy1872
    @markmcelroy1872 3 years ago +1

    In VirtualBox I was able to get networking working by using a bridged adapter instead of NAT in the network settings. After that I was able to reach the web server at the IP address shown in the VM after startup.

  • @Kage-Yami
    @Kage-Yami 3 years ago +1

    Ahh, yes, I was waiting for TrueNAS Scale to show up here. It certainly looks interesting based on what I saw on their site. That being said, I kinda expected TrueNAS Scale to be the bare-metal hypervisor in this situation... Certainly makes sense why you wouldn't do it _yet_ 😉

    • @CraftComputing
      @CraftComputing  3 years ago +1

      Oh don't worry, I will be testing out its VM chops before too long ;-)

  • @StuMcDonaldStuey
    @StuMcDonaldStuey 3 years ago

    I would love one of those Craft Computing glasses!

    • @CraftComputing
      @CraftComputing  3 years ago +1

      craftcomputing.store

    • @StuMcDonaldStuey
      @StuMcDonaldStuey 3 years ago

      @@CraftComputing but do you ship internationally?

    • @CraftComputing
      @CraftComputing  3 years ago

      @@StuMcDonaldStuey Of course! We ship to over 70 countries, and rates are VERY affordable.

  • @MuratTamer1
    @MuratTamer1 3 years ago +2

    Thanks for the videos. I'm a fan of them. You're great. Watching them with a beer on my table :)
    I work on storage, and I was never able to get near bare-metal disk access speeds with Proxmox: SSD speed 520MB/s, under Proxmox 200MB/s. But with a hypervisor like ESXi or Hyper-V it's much, much better. If a project is about disk access, is it a good move to do it on Proxmox?

  • @chrisbowie1438
    @chrisbowie1438 3 years ago +3

    You can revert back to CORE, but ONLY if you DON'T perform the ZFS upgrade on the pools you imported from CORE.
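    Illustrative commands for checking where a pool stands before committing (the pool name "tank" is an assumption):

```
# Prints a note if newer feature flags could be enabled on the pool.
zpool status tank
# Lists each feature flag as disabled, enabled, or active.
zpool get all tank | grep feature@
# 'zpool upgrade tank' turns everything on; once new features are active,
# CORE's older OpenZFS can no longer import the pool.
```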

  • @tommytp85
    @tommytp85 3 years ago

    I need that beer! Purely for the nostalgic can.

  • @BradHedges
    @BradHedges 2 years ago +1

    When you are actually installing TN, at ~ 6:49 in, you jump over the setup of the OS, System, and Hard Disk. Can you talk about those parts? You can't install/create a VM without storage, and I'd like to have it work efficiently. Thanks.

  • @MacroAggressor
    @MacroAggressor 1 year ago +1

    Where is the video on splitting ports from a NIC? Having trouble finding it.

  • @michaelnorris1530
    @michaelnorris1530 9 months ago

    Awesome channel! Thanks for the tutorials. I've got a question... Is it possible to map an iSCSI drive from a TrueNAS server to the Proxmox hypervisor?

  • @justanotherhuman-d6l
    @justanotherhuman-d6l 3 years ago +3

    Jeff, Do you know: Will SCALE be free like CORE after the beta or will I have to kill the VM and make a new CORE-Deb VM after SCALE comes out of beta?

    • @CraftComputing
      @CraftComputing  3 years ago +3

      YES! Scale is Free. Enterprise will continue to be the paid variant, and there will be a support plan available for Scale.
      AFAIK

  • @rdsii64
    @rdsii64 2 years ago

    As usual, love the channel. If you're into it, I would love to see some Emby content.

  • @allandresner
    @allandresner 1 year ago

    Your videos are great, but sometimes they leave out key info, like the details of the VM. Which BIOS did you choose in your VM? etc.

  • @DrMitsos
    @DrMitsos 3 years ago +4

    - "I will be passing on more devices than just the HBA"
    - Please do GPU passthrough on a TrueNAS Scale VM!!!
    Also, should I go bare metal or the Proxmox way only for the TrueNAS Scale installation?

  • @SilentDecode
    @SilentDecode 3 years ago +2

    I've been running Ubuntu with ZFS on my R720 for almost a year now. It has 8x 1TB SFF enterprise SATA drives in RAIDZ2, connected to an HBA that is passed through to my vNAS VM. The rest of the box runs ESXi 7, with dual E5-2680 v2s and 256GB of DDR3 LRDIMM RAM. I'm kinda proud of my little creation :D

  • @mrspock128
    @mrspock128 2 years ago +1

    Did I miss the instructions? I don't see them anywhere. Thanks!

  • @JoshuaBoyd
    @JoshuaBoyd 3 years ago +4

    You should tackle IPv6 for homelabs in a future video (or video series).

    • @Kilzu1
      @Kilzu1 3 years ago +1

      IPv6 isn't wise to use unless you have a good understanding of network security (unlike IPv4, IPv6 isn't masked behind NAT or even using NAT, and each computer/device will receive both a public and a local IPv6 address). Also, there are no speed gains with IPv6; all IPv6 gives you is a public IP for each device, due to the fact that the world isn't going to run out of those for the next million years or so (quite literally)

  • @MrRoosterx
    @MrRoosterx 3 years ago +3

    How do you implement the APC UPS auto shutdown under Proxmox?

  •  3 years ago +2

    Seems like this is trying to compete with Unraid, given its features seem very similar. Native ZFS support is a big feature over Unraid IMO. Very interested to follow its development.

  • @Founders4
    @Founders4 3 years ago +1

    Hey Jeff, any updates on running Scale on bare metal and hosting your VMs from there? I only host a couple of VMs in addition to Core, but more granular control and better PCI passthrough would be excellent.

  • @nyanates
    @nyanates 2 years ago

    Not quite feeling the love for a virtualized NAS setup. I get the need to consolidate hardware if you're low on real estate, but isn't there a significant I/O performance hit vs a dedicated-box TrueNAS implementation?
    Not to mention I was away from home for a month and my Proxmox node was (and still is) unable to access remotely networked shares for some reason. Not sure I trust the Proxmox dependency for local storage access.

  • @TheSleepyCraftsman
    @TheSleepyCraftsman 3 years ago +2

    So when are we going to get the epic solar farm build?

    • @CraftComputing
      @CraftComputing  3 years ago +1

      Depends on the budget for my garage rebuild....

  • @Vash612584
    @Vash612584 3 years ago +2

    You may have said it and I missed it, but what NAS chassis is that?

  • @renovxperts
    @renovxperts 2 years ago

    Thanks a ton for the great content. I have found your videos quite helpful as I find my way around this “new world” of self-hosting / home lab setup.
    In a Proxmox + TrueNAS setup, what is the best approach for the ZFS storage pool? Is it best to set up the zpool in Proxmox for use by the NAS software, or is it better to set up the zpool from within the NAS software?

  • @vasquezmi
    @vasquezmi 2 years ago +1

    Jeff, now that it has been some time I wanted to ask your thoughts on its use within Proxmox. Does it meet your goals and efforts? I ask because I am thinking about moving to TrueNAS scale from TNCore but want to apply this across a Proxmox cluster for HA. Thoughts on that approach? Or from a NAS perspective should I look to abstract the NAS from the cluster / HA and make it a data resource at the datacenter level of the cluster?

    • @morosis82
      @morosis82 2 years ago

      I've been looking at the same thing and thinking Ceph is a better answer: HA storage to go with the HA compute. A bit easier to add new capacity also, unless you can expand ZFS vdevs? I know it's something that was being explored.

  • @jerma984
    @jerma984 2 years ago

    8:46 there are no instructions linked below... might want to link the docs or something.

  • @SirCrest
    @SirCrest 3 years ago

    Now I know where those 2650 v2's are coming from 😏

  • @AllTheFactorys
    @AllTheFactorys 3 years ago

    I think it's on my end, but for a second or two the audio and video looked out of sync, around 8:22 to 8:27 (where I stopped the video). Hopefully it's on my end only :)

  • @huplim
    @huplim 3 years ago

    ❤️These videos!

  • @JuggernOtt81
    @JuggernOtt81 2 years ago

    6:59 damn... what settings were SKIPPED between GENERAL and CPU?

  • @adamtoth9114
    @adamtoth9114 3 years ago

    I've run into the same network passthrough problem on my HP ML350 Gen9 server. It has 4 GbE ports; I intended to dedicate one to a specific VM, but no luck. As far as I know it's some kind of IRQ management problem.

  • @RickJohnson
    @RickJohnson 3 years ago

    Wondering when you made this since TrueNAS Scale 21.08 BETA has been out for a few weeks now. I tried to sidegrade from TrueNAS Core on an older N54L microserver, but the DB migration kept failing. Waiting a bit longer, but really want the Linux version!

    • @CraftComputing
      @CraftComputing  3 years ago +1

      I had downloaded the 06 beta a couple weeks ago in prep for this video. I didn't see the 08 update until after I filmed.
      But I was able to update to 08 direct from the GUI in about 5 minutes.

  • @mycosys
    @mycosys 3 years ago +1

    Early engagement, all hail the mighty algorithm

  • @JohnWeland
    @JohnWeland 2 years ago

    So with this setup, you would use Proxmox for setting up VMs, and not the virtualization inside TrueNAS Scale?

  • @patrickprafke4894
    @patrickprafke4894 1 year ago

    Was that 2 SATA drives on a PCIe card? I've never seen that before.

  • @amnottabs
    @amnottabs 3 years ago +2

    Me with a Sandy Bridge i5 NAS and half a laptop running on a shelf over a gigabit network: yeah, I like servers too!

  • @DigitsUK
    @DigitsUK 2 years ago

    Do you not use IPMI for remote server access? You could have rebooted from your desk...

  • @Beastyboy1029RBLX
    @Beastyboy1029RBLX 3 years ago

    Maybe I'm a simpleton, but what PCIe card is being populated @4:45 in the video?
    And is it possible to use it within a standard Windows 10 environment?

  • @VideoManFL
    @VideoManFL 2 years ago

    Super groovy background music.

  • @LeonisYT
    @LeonisYT 3 years ago

    10:36 this is actually exactly why I have a single fallback Ethernet card lol.

  • @RobertMizen
    @RobertMizen 3 years ago

    God damn it Jeff, I just finished a TrueNAS Core build, and suddenly I'm doing TrueNAS Scale. Well, it's the weekend, and there's no better time to play with new software and toys.
    Question: since it's built on Debian, I'm assuming some better community support is forthcoming. Assuming they don't lock down Scale?

  • @eduncan911
    @eduncan911 3 years ago

    So, where are you posting your WTS ads? :)

  • @Defiant031636
    @Defiant031636 3 years ago +2

    Was wondering when the 32 core Epyc was going to replace a bunch of the old lower core count servers in that rack. Only issue now is redundancy with one box running most of it.

    • @CraftComputing
      @CraftComputing  3 years ago

      There's a 64-core box right below it in the rack. I'll likely have the more critical VMs as cold standbys on that box.

  • @camjohnson2004
    @camjohnson2004 1 year ago

    SR-IOV for splitting devices, provided they support SR-IOV
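    A minimal SR-IOV sketch (requires NIC firmware and BIOS support; the interface name and VF count are assumptions):

```
# Create 4 virtual functions on one port; each VF then shows up as its
# own PCI device that can be passed to a VM individually.
echo 4 > /sys/class/net/enp65s0f0/device/sriov_numvfs
lspci | grep -i 'virtual function'
```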

  • @crc-error-7968
    @crc-error-7968 2 years ago

    Hello, unfortunately my English is not good enough, so I could have missed this part in your video. Do you think this solution, Proxmox + TrueNAS Scale (virtualized), can be a good idea?
    My original idea was to use TrueNAS Scale to do everything (NAS & VMs), but the actual limitation of VMs in TrueNAS is USB: if I want to use a USB stick or another USB device inside a VM, I have to pass through the entire controller, so I can't have 2 VMs that use different USB devices at the same time. I have never tried Proxmox, but I think (I hope) it is possible to pass only the device to a VM instead of the controller.
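    Proxmox can in fact attach a single USB device rather than the whole controller; an illustrative sketch (the VM ID and device IDs are assumptions):

```
# Attach a device to VM 100 by its vendor:product ID...
qm set 100 -usb0 host=0781:5583
# ...or by physical port, so whatever is plugged in there follows the VM.
qm set 100 -usb1 host=1-4
```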

  • @PriceyBuilt
    @PriceyBuilt 3 years ago

    I try not to look at the power draw of my server. The dozen HDDs and two SSDs probably don't take up much, but I hate to think what the idle power draw of the 2x X5675 CPUs is. It did have 2x E5645 CPUs, but one of the services it's running needed faster cores. US$20 each for the two X5675s was a lot cheaper than a whole platform upgrade.

  • @nekomakhea9440
    @nekomakhea9440 3 years ago

    Sounds like you were running like 3.5kW of draw, including cooling, to keep a home datacenter running, lol.
    At what point does it become economically imperative to put a solar farm on your roof/yard and become your own power plant, just to get your power bill under control?

    • @CraftComputing
      @CraftComputing  3 years ago +4

      I'll be looking into solar next summer :-D

    • @nekomakhea9440
      @nekomakhea9440 3 years ago

      @@CraftComputing That sounds awesome!

  • @roachxyz
    @roachxyz 3 years ago

    Have you done a video on virtual machines? I had VMware on my last PC, but I don't remember how I did it. I also didn't know how to use it.

  • @bullhayward2729
    @bullhayward2729 1 year ago

    I know this is kind of an old video now, but I used it to get TrueNAS working recently. Question: is the way you connected the HDDs to TrueNAS considered passthrough, or is true PCIe passthrough different again? Is there any downside to doing it this way? My board has 3 mini-SAS ports, so I'm not sure I can figure out how to pass those through, and why add another card if I don't need to? It's a server board with a Threadripper Pro CPU.

  • @mikkelgeorgsen
    @mikkelgeorgsen 3 years ago +1

    Why use Proxmox in this case? TrueNAS Scale has the same virtualization backend (KVM), so it would make more sense to simply skip Proxmox entirely.

    • @Tanax13
      @Tanax13 3 years ago

      Wouldn't it be better to skip TrueNAS instead and just use Proxmox?

    • @mikkelgeorgsen
      @mikkelgeorgsen 3 years ago

      @@Tanax13 No, Proxmox doesn't provide the proper NAS bits; TrueNAS Scale does, as well as KVM and Docker.

    • @Tanax13
      @Tanax13 3 years ago +2

      @@mikkelgeorgsen Right. In that case I second your first question; Why use Proxmox?

  • @chrismarinohardin9929
    @chrismarinohardin9929 1 year ago

    Why are you putting TrueNAS Scale inside of Proxmox when TrueNAS Scale is also a hypervisor?

  • @dustojnikhummer
    @dustojnikhummer 1 year ago

    Because of my motherboard I couldn't pass through my HBA into my VM, so I had to add the disks as virtio KVM drives. Hopefully that doesn't bite me in the ass in the future.

  • @kienanvella
    @kienanvella 3 years ago

    That's a big ouch. "cache drives" aren't just cache drives. As I'm sure you discovered already, you should have really removed the SLOG devices from the pool before the migration.

  • @ahmedeid188
    @ahmedeid188 3 years ago

    Hi Jeff,
    thank you for the tutorial.
    I have virtualized TrueNAS on Proxmox and created an SMB share to use across my network, but I have a weird issue: any time I copy data to the shared folder, or copy anything out of it, everything gets slow on Proxmox and all the other VMs hang, and once the copy is done all the VMs go back to normal. Any idea what is going on?
    Thanks

  • @Alphahydro
    @Alphahydro 3 years ago

    Epyc and Proxmox goodness.

  • @tonyperez2690
    @tonyperez2690 1 year ago

    Thank you for the video. I got an R720 for my home lab. I set the controller to IT mode and installed Proxmox on an SSD. It works well and I can see the disks. I tried to follow your guide, but after I installed TrueNAS as a VM, TrueNAS can't see the hard drives to create a pool. I am able to see them in Proxmox. Do you have any suggestions?

  • @Starky3000
    @Starky3000 3 years ago

    Just curious where the link is for those written instructions for Proxmox PCI passthrough?

  • @kennethnicklowicz1030
    @kennethnicklowicz1030 2 years ago

    I am trying to get my NAS (upgraded to Scale) and my Milestone 2019 server into one box. I was looking, and this Proxmox looks like it may do it. Make sure you don't upgrade your pools, as once you do that while running Scale, you can't go back to Core... I was able to buy some drives and fix the issue, mine having been created 10 years ago when I started using FreeNAS; they are still alive, as I just resilvered as they failed. I was gonna do a VM in Scale, but this might be a better idea, as I am running an older HP ProLiant G7 for Milestone and a newer Xeon for the NAS.

    • @kennethnicklowicz1030
      @kennethnicklowicz1030 2 years ago

      Oh, ADHD: I forgot to say thank you, which is why I commented in the first place. Never used Proxmox; if that's a bare-metal VM host, I'm going to try it.

  • @jarnhand266
    @jarnhand266 3 years ago

    I am new to this whole home server thing, and I may be a bit slow here, but why do you run TrueNAS inside Proxmox? Why not just TrueNAS?!

  • @diegoweb900
    @diegoweb900 3 years ago

    Hey Jeff! Nice video :D
    By the way, would you be able to make a video on how to build a dedicated server with scalable resources?
    For example: I want to host a dedicated Virtualmin machine; some websites would potentially grow a lot and I would need to add a new HDD/SSD. How could I add this HDD/SSD to the machine and let the storage capacity grow inside the mentioned system, without having to move files to the new storage unit? I've heard about storage pools, but I'm unsure how to use them.
    This is different from having, say, an Emby server, where you can just point to a new location for new files.
    I was thinking about having a big LVM volume and then one logical volume for general purposes (like the OS) and another logical volume for the /home folder, which I could expand whenever needed just by adding new storage to the LVM volume. But how would this work in practice? Could you make a video about it?
    Thanks ;D
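    The LVM part of that question follows an established pattern; an illustrative sketch of growing /home onto a newly added disk (the VG/LV names and device path are assumptions):

```
pvcreate /dev/sdb                    # initialize the new disk for LVM
vgextend vg0 /dev/sdb                # add it to the existing volume group
lvextend -l +100%FREE /dev/vg0/home  # grow the /home logical volume
resize2fs /dev/vg0/home              # grow an ext4 filesystem online
```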

  • @James-hy8gu
    @James-hy8gu 3 months ago

    6:57 skips over the most important part of mounting the drives....

  • @mrmotomoto
    @mrmotomoto 2 years ago

    Does anyone know what add-in card he used for those two 2.5" SSDs? You can see it at 4:42.

  • @Mr_Sprint
    @Mr_Sprint 3 years ago

    I've been running TrueNAS on ESXi for about 2 years. I decided a few weeks ago to switch to Proxmox. Total disaster: after a week of troubleshooting, I had to go back to ESXi. The throughput was just appalling. I run a 900P Optane which is partitioned into two 15GB SLOGs and a 220GB L2ARC. I THINK it was something funky going on with the 900P's passthrough, as when I removed it from the pools, performance returned, and performance to the 900P alone (using dd) was as expected, but when it was working as part of other pools, it fell apart. Maybe an IRQ issue? Gutted I never got it to work :( Hope you have more success!

  • @adamtoth9114
    @adamtoth9114 3 years ago

    Hey Jeff! Can you please provide a link for the PCIe card holding the dual SSDs you've put in the far-right PCIe slot?

    • @CraftComputing
      @CraftComputing  3 years ago +1

      You got it! Warning though, I couldn't get it working in Proxmox for some reason. It was running in my TrueNAS server for a year no problem though.
      amzn.to/3lfK9ND

    • @adamtoth9114
      @adamtoth9114 3 years ago

      @@CraftComputing Thanks, I'll get back to you with the results if I can make it work!

  •  3 years ago

    Nice, what RAM and CPU heatsink did you use for the build?