What do you think of our Nu Nu...wait, how many Nus? Anyways, what do you think of our new server? Let us know below!
Buy a SABRENT Rocket 4 Plus 8TB NVME M.2 SSD: lmg.gg/qgDC3
Buy an ICY DOCK EZCONVERT MB705M2P-B NVMe SSD to 2.5" U.2 SSD Converter: lmg.gg/fjTFz
Buy a StarTech U2M2E125 NVMe SSD to 2.5" U.2 SSD Converter: lmg.gg/76FvY
Buy a GIGABYTE R282-Z9G 2U Rackmount Server: lmg.gg/CxlsQ
Buy an AMD EPYC Milan 75F3 CPU: lmg.gg/DzxJF
Buy Micron 3200MHz CL22 DDR4 2x4GB ECC Memory: lmg.gg/k1myU
Buy an Ableconn E1s-DT157 NVMe M.2 SSD to NVMe EDSFF E1.S SSD Adapter: lmg.gg/PC1kk
Purchases made through some store links may provide some compensation to Linus Media Group.
Pin comment lol
Nu nu means something else in India
@@prathameshlotankar405 bruh
Add a cow decal and call it Moo Moo Whonnock
Not Bad. It's not like the FASTEST
Linus: We can use cheap consumer SSDs
Also Linus: proceeds to put in a Sabrent 8TB SSD, which costs more than my whole computer
gives a perspective on how wild(ly expensive) server stuff is
I mean... it's his version of cheap, I guess. xD
From an enterprise perspective, those consumer prices are rounding errors. We recently priced a temporary database using existing hardware for a client at $800,000 and their response was essentially, "how do you want to be paid?"
@@aurunemaru And how expensive running a company is... vs a personal home thing.
to be fair, in the server space, "cheap" means "this didn't cost me close to a million bucks."
Every server update seems to start with "our last server update was a bit of a lie..."
So the "high school chemistry curriculum" approach
@@forthex true, but not a bad way to teach chemistry.
True !!
They changed the title so I think that was a bit too good of a statement lol
because they use OpenZFS again and again, and didn't even try to read up on and check alternatives!
I like how he said "leftover 8 TB" drives.
Around 1:49
Sad😢
Sabrent sent him a ton of them for a PC build
He has to make sure Sabrent gets their money's worth. They sent him $20k+ worth of drives. They've gone up in price since the first video though.
Linus and his son messing about with server hardware has to be my favourite type of LTT videos.
xDD thank you for this comment, sir !
That's his boyfriend
You mean his boyfriend
The year is 2341. Linus releases what he calls "the final file server room update"
It’s only 6 Yottabytes
Nu nu nu nu nu ... Linus.
@@maighstir3003 you beat me to it haha
@@maighstir3003 Linunununununus*
Linus is now the software on the server. He remotely operates the channel from the server with autonomous robots that act as hands. All Linus Media Group employees have been assimilated and are AI generated.
Not lying, server videos are my favourite content on this channel. Not to mention that I gained a vast amount of server knowledge from these videos.
Also I need to know who is the new editor. 4:42 8:04 😂
Also 12:09 :) lol
nightmare fuel of edits
edit: OF COURSE IT'S APRIME, how could I forget him being the king of meme editing in LMG
You can always see all the people involved in the videos at the very end.
stop the effects
Alexandre Potvin. All credits are at the end of the video.
It's always fun watching amateurs learning the lessons that IT pros learned decades ago. Document everything, hot swap is not optional, and if you really care about high availability you need a clustered solution. One day the janky solutions will end in tears.
Like Whonnock 1 resulted in dirty underwear.
They already damn near lost everything and spent a ton of time and money to recover…to just turn around and keep doing the same things. Jake in particular should not be allowed anywhere near business critical infrastructure.
@@jm5206 I don't think Jake is the biggest issue at hand; it seems far more like budget, i.e. "consumer grade drives". Typically it sounds like Jake wants to do full enterprise grade stuff but Linus doesn't.
@Nick Sallee He is a problem. You can't give a 20-year-old the job of managing your business-critical infrastructure.
Hire someone that knows what is what.
Good for him that he is learning from mistakes, but really bad for Linus overall.
@@pancevovelso what does age have to do with it? Anyone with zero constraints could do that job. They have constraints, and I'm pretty sure a veteran sysadmin wouldn't cope in that environment.
5:39 I love how he blows the dust up and then away. That's something everyone should do when it's lying down; if not, the dust lands back in the case... smart. I thought I was the only one
tbf, I kinda feel like Linus should try to see if he can scale the editing servers more horizontally instead of vertically.
Sure, it'll add a bit of complexity, but as it stands, if that 1 machine goes down, all editors are screwed instead of just a bunch of them.
Right.. IF
Looks like that's what they are doing with the Weka implementation that Jake referenced in the video and they have discussed before.
But LTT has offsite backups I think.
Only if they did it after the ZFS pool corruption, but they didn't used to.
If I remember correctly the ZFS pool had an offsite node (idk the exact ZFS terminology),
but idk if they set it up to be HA.
@@Iaotle off site backups are just that, backups. Just because data is backed up does not mean that it is easy or quickly accessible. For something business critical like this server you really want something with high availability.
Been getting into making a homelab. I'll never need something this insane for my home, but it's great information for my already packed brain.
Regarding identifying a failed drive: you can do it using the M.2 serial number, which is visible in the OS (see the sketch below this thread).
They'd still need to turn it off, pull out each of the bays, and read all the ssd serial numbers. If the SN isn't visible from the top, then that also involves unscrewing and unplugging each drive. Not hours of testing, but enough time that I can see why just slotting an extra spare in the front was a better way to go.
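A quick way to at least get the OS side of that mapping: a minimal sketch, assuming Linux's mainline nvme driver (which exposes serial and model through sysfs). Printing labels to stick on each caddy is my own suggestion, not something from the video:

```python
# Hypothetical sketch (not from the video): dump the NVMe device-to-serial
# mapping on Linux so it can be written on the front of each caddy.
# Assumes the mainline nvme driver, which exposes these files via sysfs.
from pathlib import Path

def nvme_serial_map() -> dict[str, str]:
    mapping = {}
    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
        serial = (ctrl / "serial").read_text().strip()
        model = (ctrl / "model").read_text().strip()
        mapping[ctrl.name] = f"{model} / SN {serial}"
    return mapping

if __name__ == "__main__":
    for dev, ident in nvme_serial_map().items():
        print(f"/dev/{dev}: {ident}")
```

You'd still have to open the caddies once to label them, but after that a failed drive's serial in the OS points straight at a bay.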
Storage requires performance - but it also requires reliability, availability and serviceability. It's been fun (if painful) to watch LTT make mistakes and correct them... Admittedly, for something so vital to your company, one would have inter-site replication.
Currently, my concerns are underlying software issues (as they hit previously) and stuff like the server mobo itself failing. It's relatively rare, but you'd have a bad time. A twin canister based approach would probably be more bulletproof, but would still have a midplane to crap out. So, yeah, good replication and redundancy over separated failure domains feels like where I'd go.
Admittedly, I'm basically suggesting getting a second storage array for no additional capacity. But keeping your data is always handy.
With 100 employees, well-proven SAN storage with vendor support is not too far away. We have customers way smaller who can afford this. The problem with LTT is the huge amount of data and the requirement to be extremely fast. Our "normal" customers only have the requirement of being reliable, but not necessarily huge and fast at the same time.
Most problems Linus encounters are software related. Everything from all the wireless tech in his house to the OS they run on their servers.
@@drstefankrank The solution would just be to have multiple servers for their editors rather than one huge one. One server per, like, 5 editors, for example. This is what is normally done in the industry lol.
@@username8644 yeah - their performance requirements may be high, but honestly so are many customers' these days. And I'm honestly not sure one can call it /huge/ - multi-petabyte footprints are pretty common these days.
@@username8644 Yeah, it's mostly cheap end-user stuff, but not enterprise. There's a reason it costs more.
Having a storage that just opens a ticket at the vendor and having a replacement part waiting the next morning without any manual steps is gold.
Love the server videos. I use you guys as examples all the time for our own servers at work for video production. Also, fair play to the editor on this; the eye stuff really cracked me up.
You should be using them as an example of what not to do.
I would not do what these guys do. While it is entertaining, their solutions are janky at best and they are not professionals
We have a build like this in progress currently using a Dell PowerEdge R7525, a pair of AMD Epyc 7763 CPUs, 1TB of RAM, and 24 of Micron's 30.72TB 9400 Pro U.3 drives. There's so much to learn about tuning these to run in TrueNAS for workloads at this performance level. It's been quite the long haul but I'm happy to say it performs better than anything I've ever seen, even on this channel.
One of the things they didn't cover is how the ARC cache reduces latency. If you get your DB, web server and file contents into the ARC cache, your read performance skyrockets. You go from making a query and building a page in something like 2600ms down to 60ms, with multi-threaded DBs and web servers seeing the most improvement. (A quick way to check whether that's actually happening is below this thread.)
Did they mention what OS they're going to be running here?
@@SixOThree 10:40 - They are using "TrueNAS Scale"
@@ericneo2 thank you. I did miss that
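For anyone curious about measuring the ARC behaviour mentioned above: a rough sketch, assuming Linux OpenZFS (TrueNAS Scale qualifies), which exposes counters in /proc/spl/kstat/zfs/arcstats. The 2600ms-to-60ms figures are the parent commenter's, not something this computes:

```python
# Rough sketch: compute the ZFS ARC hit ratio on Linux OpenZFS, which
# exposes counters in /proc/spl/kstat/zfs/arcstats (two header lines,
# then "name type data" rows).
def arc_hit_ratio(path: str = "/proc/spl/kstat/zfs/arcstats") -> float:
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:      # skip the kstat header
            name, _type, data = line.split()
            stats[name] = int(data)
    total = stats["hits"] + stats["misses"]
    return stats["hits"] / total if total else 0.0

print(f"ARC hit ratio: {arc_hit_ratio():.1%}")
```

If the ratio is high while the workload is hot, your working set fits in ARC and reads are coming from RAM rather than the pool.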
I am loving the server content! Been learning the hard way by diving into the deep end. My team has no one else to configure or maintain our servers...and I'm relatively new at sysadmin type stuff. Lots of limitations to work with and things to learn.
So, standard IT job? 😂
Would love to know more about what you use to stress test your server configuration. Thanks for the show and tell!
10:27 Those blanks are actually to equalize airflow across all drives.
Yeah I guess they know that, but the remark they were making was about how overly complicated they are, with the full sled and mechanism and stuff, while their only purpose is to be a plug
@@giacomo.1574 Yeah, they are unnecessarily complicated.
Hey Linus, thanks. I put a 1TB SSD in my Steam Deck today and got a ton of help from your last videos and the LTT forums - it was awesome.
Why not ROG
@@memerified because a 1080p screen on a handheld is like a 4K screen on a laptop - too much battery drain for not enough visual uplift. Unless you're playing plugged in, it's not worth the extra pixels. It's like a less egregious Pixelbook - specs for the sake of specs without thinking them far enough through, not to mention the software problems. W11 on a handheld is hell on earth, especially without trackpads (I tried W11 on Deck and only the trackpads worked). Just. Fucking. No.
I think you should be more precise about the read parity check for ZFS. It's a feature at the filesystem level, not at the RAIDZ level. No RAID controller in the world does a parity check on every read; all of them do background parity checks for silent corruption, and this includes ZFS RAIDZ. Since the read-time verification happens at the ZFS filesystem level, you can swap RAIDZ for another RAID controller and still enjoy the read verification feature - including GRAID GPU RAID.
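To make that distinction concrete, here's a toy model (not real ZFS code, and using SHA-256 where ZFS would default to fletcher4): the checksum lives apart from the data, like in a ZFS block pointer, and is verified on every read rather than only during a scheduled scrub:

```python
# Toy model of checksum-on-read (illustration only; real ZFS defaults to
# fletcher4 on variable-size records and stores the checksum in the
# parent block pointer, not a side table).
import hashlib

class ChecksummedStore:
    def __init__(self):
        self.blocks: dict[int, bytes] = {}
        self.sums: dict[int, bytes] = {}    # stored apart from the data

    def write(self, addr: int, data: bytes) -> None:
        self.blocks[addr] = data
        self.sums[addr] = hashlib.sha256(data).digest()

    def read(self, addr: int) -> bytes:
        data = self.blocks[addr]
        # Verified on EVERY read, not just during a background scrub.
        if hashlib.sha256(data).digest() != self.sums[addr]:
            # Real ZFS would try to self-heal from a redundant copy here.
            raise IOError(f"checksum mismatch at block {addr}")
        return data

store = ChecksummedStore()
store.write(0, b"video project files")
store.blocks[0] = b"video project filez"    # simulate silent bit rot
try:
    store.read(0)
except IOError as e:
    print("caught at read time, not in a scrub:", e)
```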
But you know what Linus never lies about? Our sponsor, Glasswire!
I've used StarTech products for years now and have found all their stuff to be incredibly reliable and really good value. The only problem I had was the casing came loose on an HDMI cable, and they replaced it despite it being over 2 years old.
Alexandre did an amazing job lol, 4:41 had me dying 🤣
Was wondering what the name of the background music used here is
A cluster of 3-5 Whonnock servers makes more sense. Each could have a less extreme setup with much more combined CPU/RAM/drive space. A cluster has a lower risk of downtime and spare lanes for more IO devices like NICs/GPUs/accelerators. You would have spare CPUs to do other work like other network services. If you ever needed more CPU/IO/storage, you'd slap another server into the cluster. 45Drives could probably help with that.
What clustering technique do you think can output 20+ GB/s? For GlusterFS you would need about a 500-600 Gbps interconnect even with 3 servers alone.
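Back-of-envelope for that claim, under my own assumptions (a replica-3 layout where the client pushes each write to every replica, plus roughly 20% protocol overhead):

```python
# Back-of-envelope check of the interconnect claim (my own assumptions:
# replica-3 layout where the client pushes each write to every replica,
# plus ~20% protocol/framing overhead).
client_rate_gbytes = 20        # target aggregate GB/s from the video
replicas = 3
overhead = 1.2

write_traffic_gbps = client_rate_gbytes * 8 * replicas * overhead
print(f"replica-{replicas} write traffic: ~{write_traffic_gbps:.0f} Gbps")
# -> ~576 Gbps, right in the 500-600 Gbps ballpark mentioned above
```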
Welcome to the future! I did that months ago on my old Dell 13th-gen servers (a rack full of them) that used SD cards for their boot device (don't ask, it wasn't my decision). I now use the StarTech converter and the loads of M.2s we have as the boot devices on all the servers, and do all of this as Metal-as-a-Service. Granted, it's not for production use but for our lab, to allow us to move more quickly!
That part where Linus looks at the camera after Jake calls the card old... the eyes will haunt me forever.
Yeaaah... Please, LTT, don't do such horror-inducing face distortion things. I don't care much myself since I'm an old geezer, but I don't want to stop listening to LTT because my 5yo girl happens to be nearby and gets a look at nightmare fuel. This has NOTHING to do with technology. Please remove that vid and reupload without it. Thank you!
6:32 I kinda like that they are more enthusiastic about the special wrench than about the server
Linus smiled when Jake said he doesn't have a Bugatti, knowing damn well he can afford one
I love these enterprise tech videos! Consumer stuff is all kinda samey, but there's so much cool problem solving that goes into enterprise systems designed for very specific use cases!
I love watching you guys play with servers, as it cracks me up sometimes. A few weeks ago we had a customer having issues with the 10Gbps ports we linked up for him on another switch in a different rack, so we installed one of those ConnectX-4 100G cards, stuck them on the same switch with a pair of 40G ports, and bam, issue solved. :)
Consumer SSDs are often the same speed, if not faster, than most enterprise drives. You only get more endurance out of the enterprise drives, which is why we mostly use them in the data center world.
We just racked up 3x 2U 24-bay NVMe systems the other day.
HAHAHA Jake was particularly excited in this video.
At least now you have a solid foundation to build your home server from as you expand it.
7:55 Woahhh Nick Light is back?!!! Nice seeing him on camera
❤
Linus trashes Intel for their naming scheme,
Also Linus: naming his server nu nu nu Whonnock
I think we are at the point where it's easier to write "5xNU Whonnock"
@@darekmistrz4364 yes sir, but I wonder if Linus will counter this with how "world wide web" is faster to say than "WWW" 🤣
4:44 Editors have been seriously on point lately with the funny edits.
When I worked in R&D on telecom servers in the 00s, they had one system that was so noisy the union halted work on them even with hearing protection on. I don't know what kind of fans they used, but the sound was pretty much identical to that of a model jet engine
4:41 Best edit in YEARS. Whoever is responsible for that deserves a bonus.
Still waiting for the day Linus finally fixes his server problems
1:35 "at least its a good transition (pause) to how were going to fix this", you can see the other guy cracking up because he knows hes going to troll all of us who thought he was going to switch to a sponsor ad
Linus + Jake is fun to watch. Especially when they try to do things their way. Why would there be professional stuff? 😄
I'm not sure if it is a new editor or a vision officer change, but this episode sure is something
💯💯🔥
What is good about being a tech channel: any expensive gear maintenance or upgrade is also content for viewers. It is some kind of ouroboros - making videos about making videos.
haha The editing of Nick saying, "Take your hands out of my pockets," was gold.
Oh man that sucks (the explanation of the issue in the intro) but hey at least they're trying to fix it instead of just leaving it.
The read/write and endurance limits of consumer SSDs are making me nervous in this configuration.
I could feel my soul starting to leave my body at 4:42 that was actually slightly terrifying lmao
Careful with Sabrent NVMes: I had two die in the space of a week after a firmware update over a PCIe riser, which nuked the firmware on both drives. They're no good for mission-critical stuff
The editing on this is absolutely hilarious 😂
Linus is pumping out content so fast that I don't have enough time to keep up with watching.
I hope he's not disrupting the other employees' workflow, or the new CEO might just need to have a word with him.😂
4:41 here we see Linus regret his decision of stepping down as CEO since he can’t fire Jake.
If I've learned anything about LTT, it's that the server room will never be fully upgraded.
The Linus lesson for the day: documentation.
Ran into a similar problem with my server; trying to figure out which designation in the RAID configuration equates to which bay in the server is always fun.
One of the first things we learn in IT is fault tolerance and QoS. Oh, and documentation. You have to document everything. It looks like Linus skipped the fundamentals.
That's what makes his server environment so hectic to deal with. 🤣
You think he doesn't know? He is just making videos, and the real BTS stuff that is done properly will not be seen by us.
@@zhen86 do you know?
Linus is now in his element. Glad he doesn't have the stress of day-to-day operations. You can't put a price on that joy.
The fact this janky server works at all is amazing at this point... starting from scratch is its only hope.
Or you can just continue using this beast until it explodes, making for good content... and LTT is all about that approach all the way 😂
Jake is the best co-host. Maybe that's because I love server and server-hardware videos the most and he wins by default lol
Imagine being able to run internal infrastructure for a company this size on consumer hardware and non-HA solutions, justified by "content". Kinda jelly NGL 😂
I know right? For a company that was valued at 100 million dollars recently they sure run janky infrastructure.
That is part of the charm and a source of future income; after all, their viewers want to see them repairing their janky infrastructure.
I love the editing. It's reminiscent of the good ol' YTPs from the 2010s
Enterprise SSDs aren't that expensive if you look for the right ones.
An Intel P5510 7.68TB is around 435 CHF with taxes; a Samsung PM9A3 is around 500 with taxes. So around 640-740 CAD. That's cheaper than some M.2 consumer drives, at least if you only compare to TLC and Gen4. But none of these consumer drives is rated for 1 DWPD over 5 years (worked numbers in the sketch below).
And we haven't even touched features like power-loss protection or continuity.
Yep, the M.2s will be dead in no time
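The endurance gap is easy to put numbers on. A sketch of the DWPD-to-TBW arithmetic; the enterprise figure follows from the 1 DWPD / 5 year rating cited above, while the consumer TBW is my own assumption scaled from Sabrent's published per-TB ratings, so check the spec sheet:

```python
# DWPD-to-TBW arithmetic. The enterprise figure follows from the
# 1 DWPD / 5 year rating cited above; the consumer TBW is an assumed
# value scaled from published per-TB ratings - check the spec sheet.
def dwpd_to_tbw(capacity_tb: float, dwpd: float, years: float) -> float:
    return capacity_tb * dwpd * 365 * years

enterprise_tbw = dwpd_to_tbw(7.68, dwpd=1.0, years=5)   # ~14,016 TB
consumer_tbw = 5600          # assumed rating for an 8TB consumer drive
print(f"1 DWPD @ 5y on 7.68TB: {enterprise_tbw:,.0f} TBW")
print(f"assumed consumer rating: {consumer_tbw:,} TBW")
```

Roughly a 2.5x gap on these assumptions, before you even account for sustained-write behaviour or power-loss protection.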
Can't wait till the nu nu nu nu whonnock server vid drops
Just keep in mind that SSDs perform better when warm. There is an optimal temp, so don't cool them to an absurd level.
This won't stop Linus from trying his insane arctic cooling solutions on them. 😁
12:08 I did notice, editor!
Linus is great! Love the server stuff!
I really like this video. It radiates the same fun energy as some of LTT's old videos.
The editors definitely had fun with this episode. I know I did. 🤓
It's one editor. He's credited at the end of the video.
Personally I found the face edits really distracting
@@GarethPW Yeah, it would've looked funnier without the edit
Edit 4:41
The first minutes of this video reminded me of my old boss at a former workplace: all bells & whistles, with no documentation, backup plan, or disaster recovery plans in place.
A dead drive is like a sneeze in an array.
Give whoever edited this video a raise!!
WORD OF WARNING with these adapters! I've tried a few different brands but every now and then they like to drop offline for a bit and then come back. That's not too big a deal for my smaller use case, but in a ZFS array that can be real bad news. Even a successful rebuild will invisibly eat away at the endurance of your consumer drives, and if you have multiple simultaneous failures in just the right configuration, it's game over. These adapters are great when they work, but they're the weakest link in what sounds like a critical server for you guys. Something worth setting alerts to monitor and if you experience dropouts like I did, time to move to a few Micron 9400 Pro 32 TB drives.
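For the alerting part, a minimal sketch of what I mean, assuming a Linux box with the ZFS utilities on PATH; the webhook URL is a placeholder, and ZED (the ZFS event daemon) is the more proper tool - this is the quick-and-dirty version:

```python
# Quick-and-dirty dropout alerting: poll `zpool status -x`, which prints
# "all pools are healthy" when nothing is wrong, and fire a webhook
# otherwise. WEBHOOK is a placeholder; ZED is the proper tool for this.
import json
import subprocess
import time
import urllib.request

WEBHOOK = "https://example.com/alert"   # placeholder endpoint

def pool_report() -> str | None:
    out = subprocess.run(["zpool", "status", "-x"],
                         capture_output=True, text=True).stdout
    return None if "all pools are healthy" in out else out

while True:
    report = pool_report()
    if report:
        req = urllib.request.Request(
            WEBHOOK,
            data=json.dumps({"text": report}).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)     # send the alert
    time.sleep(60)
```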
Thank you, Linus, for including industrial computer systems in your content. I'm interested in this field and I also enjoy watching it.
What are you guys gonna do with the old nu nu Whonnock?
Holy balls, Jake, not even two minutes in and already we're at *sublime* jank? I'm loving it.
As I continue to watch videos from LTT, I am realising that it is more of a hobbyist/enthusiast channel and should not be taken too seriously. If they had consulted with IT professionals, they would have avoided some of the silly issues that arise from their own lack of knowledge. While this makes great content, it also highlights their limited knowledge in a professional setting, and makes me curious what other info/knowledge they lack when they produce other videos. That said, I do find their videos entertaining.
hi, do you mind sharing in what ways they could do a more professional job? Most of us are here to learn and LTT inspired many of us to go into tech careers. What do you think they are doing wrong?
@@gabrielenitti3243, I used to be in the IT industry and I think it's great that LTT inspires people to learn more about it. However, as someone who has studied IT, I can say that while hobbyists and enthusiasts like LTT only focus on the specific chapters or pages they need at the moment, someone who has studied IT must read all the chapters and be tested to ensure they have a well-rounded knowledge of the field. LTT tends to focus on new technology and often makes mistakes that they later have to correct, which could be problematic in a professional setting. My advice is to follow IT professionals in addition to LTT to get a better understanding of the industry and consider pursuing a diploma or degree in IT if you're serious about it. Many IT professionals enjoy LTT's content for entertainment value but don't take it too seriously. For reference, I have a Diploma in IT and studied Computing Science at university.
I used to hate Linus, now I freaking love him. I see so much of myself in him now for some reason.
Or just log which drive SNs are installed in each card so you know where each is located
Some of my favourite videos on LTT are Linus and Jake getting giddy nerding out about fast servers lol😆
RAID cards do not perform parity verification. That is typically a file system function. In general, RAID cards do not need to verify parity on every read operation because the parity information is typically used during the rebuild process or when a drive fails. In the case of SupremeRAID by Graid Technology, Inc., we rely on background parity verification to periodically check the integrity of the data stored on the RAID array by verifying the parity information. This is the common practice in the industry for all RAID products including Broadcom, Intel’s VROC, Linux MD and even RAIDZ inside of ZFS.
For a user that insists on deployment where parity is constantly verified, they are more than welcome to implement SupremeRAID alongside ZFS to get the desired outcome. By doing so, users would also benefit from the “wildly fast” performance that SupremeRAID offers.
whoever randomly decided to do content-aware scaling in the video, thank you for that, had a good chuckle and a few cursed images
I'm really not sure about the editing on this video; it feels a bit off and definitely overdone with the face distortion effects. Also, the scene with the CPU at 4:00 has a really unpleasant shimmer/distortion on the slowed image. Still a nice video, just not LTT's best edit in my opinion.
2030:
Linus: Hi everyone. In today's video we will build nu nu nu nu nu nu nu nu nu nu nu nu nu Whonnock server
Jake: "face palm"
Linus renaming nu Whonnock is now as bad as some Intel and AMD naming schemes.
Just wait until they're as bad as monitor and television naming conventions.
The solution: naming the old systems by the year while the current gen remains as whonnock.
I'm sure it's been talked about loads of times by now, but I just wanted to say I've been enjoying the longer intros before the sponsor bit.
Let Intel decide who has the worst naming convention.
The impact this channel has on my business is mad. I was just about to get a NAS... then NVMe. Now I can do an NVMe NAS using an adapter... God damn, I just need to make my own cloud at this point
Yay! Even though we are a multimillion-dollar enterprise, let's use shitty equipment for our most important product! It is interesting to watch, but at the same time kinda silly
Who edited this? It was great!
Woah... Things are actually "expensive" when you have to pay for them??
😂
Jake was flipping hilarious (and on-point, IMHO) with his clearly displayed thoughts about the NuNu naming scheme 😂
and with the Mellanox cards 🤣🤣
Can we just appreciate the editing on this one though?? I love the Gen Z styling we're getting with the meme-faces
❤
I hated it. Maybe I am too old, but WTF! >_<
Alex you mad lad🤣🤣 Such a fun edit 7:56😂
How dare you lie to us. And make maintenance bad. Next server video you're probably gonna talk about losing the data again (in a few years) because some cosmic rays messed up the file systems of multiple drives.
Man, I don't like the face change filter
"At least it's a good transition into....how we're going to fix this" really got me. Me screaming TO OUR SPONSOR and getting played sadge
For as much as Linus raves against terrible naming schemes, it's surprising how terrible LTT's naming scheme for Whonnock servers is. Every time they do an update, they add like three new's to the front of whatever server they're working on.
Linus on USB and intel naming: "This is insane, what were they thinking!?"
Linus on server naming: "This is nu nu nu nu nu Whonnock"
Once again, your lack of a sysadmin who actually knows servers comes to bite you in the ass. I'm an entry-level sysadmin and I shake my head and die a little inside every time I see almost anything you guys post about servers.
this.
Love how the editors are having fun with their faces haha
How dare you lie to us Linus
Hey, that's great that it's not another dataloss disaster! Now, uh... you remembered to mark the serials on the front of the cases for quick troubleshooting this time, right? ...RIGHT?!!
Wtf is up with the editing?
The editor must have had a blast with all the crazy eyes in this one.
I love the editing in this one! The eyes at 12:07 Hilarious😂