We’ve NEVER done this before… - Mother Vault Part 1 - JBOD
- Published: 2 Jun 2024
- Get $25 off all pairs of Vessi Footwear with offer code LinusTechTips at www.Vessi.com/LinusTechTips
Get your Crucial DDR5 RAM today at: crucial.gg/LTT_DDR5
We've built some cool machines on the channel, but nothing is even close to the plan for our 3.6-petabyte archival storage array.
Discuss on the forum: linustechtips.com/topic/14409...
Check out the SuperMicro JBOD: lmg.gg/jbod
Check out InfiniteCable's MiniSAS HD external cables: lmg.gg/minisashdexternal
Purchases made through some store links may provide some compensation to Linus Media Group.
► GET MERCH: lttstore.com
► SUPPORT US ON FLOATPLANE: www.floatplane.com/
► AFFILIATES, SPONSORS & REFERRALS: lmg.gg/sponsors
► PODCAST GEAR: lmg.gg/podcastgear
FOLLOW US
---------------------------------------------------
Twitter: / linustech
Facebook: / linustech
Instagram: / linustech
TikTok: / linustech
Twitch: / linustech
MUSIC CREDIT
---------------------------------------------------
Intro: Laszlo - Supernova
Video Link: • [Electro] - Laszlo - S...
iTunes Download Link: itunes.apple.com/us/album/sup...
Artist Link: / laszlomusic
Outro: Approaching Nirvana - Sugar High
Video Link: • Sugar High - Approachi...
Listen on Spotify: spoti.fi/UxWkUw
Artist Link: / approachingnirvana
Intro animation by MBarek Abdelwassaa / mbarek_abdel
Monitor And Keyboard by vadimmihalkevich / CC BY 4.0 geni.us/PgGWp
Mechanical RGB Keyboard by BigBrotherECE / CC BY 4.0 geni.us/mj6pHk4
Mouse Gamer free Model By Oscar Creativo / CC BY 4.0 geni.us/Ps3XfE
CHAPTERS
---------------------------------------------------
0:00 Intro & History
4:19 The JBOD
9:15 The Computer
14:10 FILLED
15:26 Cabling & Zoning
16:23 It's... LOUD
18:19 IPMI
18:54 What about dual path & high availability?
19:58 It's aliveeee!
21:30 Performance testing & outro
At this point they may as well make a server for all the footage of them installing more servers
recording at 8k, that actually could be a thing
oMg iTs Gd rEpLaY yOuTuBeR!!!1!!
Neverending cycle
Servception
@@bradhaines3142 how many people have 8K? 0.1%?
"The first petabyte project was built in 2015"
Me: *freezes half way through eating* "... 7 years?!?! What have I done with my life!?"
Same thing I've done, been watching LTT videos?
that's pretty sad tbh
Truee.. i was like.. wait was it that long ? Really ?
yes i am kind of ashamed
Now I'm feeling Old man xDD
In the grim darkness of 2025, Linus Tech Tips finds itself barely recognizable as thousands of servers coat their buildings like a 2-meter-thick layer of armor, shielding the world from the madman at its heart. Inside the asylum, a monument to the sins of Linus' technology addiction, he slaves away streaming 24/7 as he tries in vain to "save the universe" in server project after server project. His employees are wired directly into everything as little more than motherboards waiting to be dropped by Linus. As time goes by, Linus will himself finally become the master of the internet as he wires himself into his temple of tech tips, continually giving tips about tech so arcane that they are lost upon the ears of the masses. In the year 2025, Linus Tech Tips becomes unknowable, unrecognizable, feared, and worshipped.
the Matrix sequel that never happened
@@shiskeyoffles I was thinking Ghost in The Shell, but Matrix works too (especially considering the former was an inspiration to the latter).
Even though the videos are getting worse compared to the ones at the house.
ALL HAIL THE EMPEROR OF THE UNIVERSE, LINUS! Someone should start to enlighten enough techpriests and start to ramp up the sacrifices......
@@shiskeyoffles Didn't Stargate SG-1 have an episode where Daniel becomes all-knowing and is able to defend Earth against the Goa'uld?
These videos are truly genius. As a company they need to upgrade things, so they go way overboard and it makes an awesome video showing some unique tech configs we wouldn't otherwise see. They get sponsors to send in components for free, sponsor spots are sold on the video, plus ad revenue from the video itself. The epitome of efficiency.
The amount of times he has said “not one, not two, but three”
It just shows how big things are getting at LTT
Gabe Newell could learn a lot here!
And expensive. Enterprise-level storage... and even when it technically isn't, it's still damn expensive.
soon it's going to be LTTT
@@aledrpepg I agree
Next year, I'm expecting to see the same thing, in flash storage 🤪
The crazy thing is, you could put more storage per unit right now. It would just cost about 30x more lol
Haha at least you can easily transfer data to your other computers 😂
hi jeff
with 100TB Exadrive SSD ofc
Ask your retro jeff to do this year itself 🤭
@@chrisrib05 that would be 9 petabytes raw in one of those JBODs
I love how every time linus picks something heavy/expensive up you can hear everyone in the background collectively have a mild panic moment
"You can fix anything... with money."
Said with such dejection from a man that literally throws money at *all* of his problems.
As long as the sponsors are willing to throw their money at his problems, it's fine. If they're not, there's a slight problem.
Linus doesn't throw money at his problems - he drops it on his problems.
*Everytime Linus needs a new server* : "WE HAVE A SERIOUS PROBLEM"
Viewers: Yeah, it's the jank.
That is most of us when we need something new ;p
Lol true 😅
Moore's Law can't keep up with LTT's data collection :)
@@MarkusHobelsberger true lol
12:19 Biggest thing I have to deal with as a datacenter Storage Admin is constantly checking people that assume Storage doesn't need lots of host resources. Perfectly competent technical people somehow miss that processing and handling IO isn't free in terms of system resources, both at the Storage System AND Host Initiator sides.
20:20 A dedicated metadata device is a godsend for huge fileservers like yours -- filesystem metadata lookup is actually a really big performance hit because metadata reads/writes are very small, very random, and usually synchronous, so they hold up the line. With a metadata tier, you put all your metadata in a dedicated high-performance tier/pool/span/whatever, and it not only speeds up your metadata activity, it also keeps your other storage open for regular IO without having to stall mid-workload to service those small metadata IO packets.
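The tier the commenter describes maps to ZFS's "special" vdev class. A minimal sketch of such a layout follows -- the pool name and device paths are placeholders, not LTT's actual configuration:

```shell
# Hypothetical pool: bulk data on a RAID-Z2 vdev of HDDs, with all
# filesystem metadata landing on a mirrored pair of fast SSDs
# (the "special" allocation class, OpenZFS 0.8+).
zpool create tank \
  raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
  special mirror /dev/nvme0n1 /dev/nvme1n1

# Optionally route small data blocks (<=64K) to the special vdev too,
# so tiny random writes also skip the spinning disks:
zfs set special_small_blocks=64K tank
```

Note the special vdev is mirrored: as other comments point out, losing it means losing the pool, so it needs at least as much redundancy as the data vdevs.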
ZFS is an absolute CPU hog. I set up a lab to test the latest AMD vs Ice Lake, and it just takes every core and maxes them out.
But ZFS is also pretty darn awesome.
Such a cool job how did you get into it?
Well... for most people storage truly doesn't need many resources... any computer within the last 10 years could probably serve as a 10TB 1-drive NAS. But the little bits of processing power really add up when you have multiple petabytes... not to mention the sheer bandwidth just to keep so many drives happy. Add a NAS-grade filesystem like ZFS and it's a lot of work for any system.
@@QualityDoggo the problem is they are trying to DIY petabyte storage. It's a dumb move. Were they using a commercial system, say a DSS-G, then honestly 1PB or 10PB or 20PB is basically the same. A bit more time replacing failed drives, but not much. You can also replace everything live. You can patch the system live. You can do OS upgrades live. You can add more capacity to the system live. You can, at end of life, replace *every* component live. You have one drive letter for *everything*. That's been part of my day job (I look after an HPC system) for more than 15 years now.
@@jonathanbuzzard6648 JBOD is used in professional applications; it's hardly a DIY move. Just because something is running on hardware other than HP, Lenovo, or IBM branded gear doesn't imply it isn't professional.
Finally, they are doing it right. I've seen so many of their setups and thought, well, if that works best for you and the team, then cool. But I always thought in the back of my head that they really should separate storage and compute, and set up the storage so you can swap out the compute unit and continue to roll without downtime. Your storage is starting to look strong now LTT, sweet !! ❤️
Going from something like glusterfs to what's effectively just a single file server really isn't an upgrade. However if downtime isn't a particular issue and there's no need for clustering etc then it's an ok solution that's much easier to manage, and sometimes that's more important than how good the tech is.
Installing these in Lab 2 is quite an ingenious idea considering one of Lab 2's key features is its insane power infrastructure. Then just run 40Gbit or something fiber back to Lab 1 :D
One day it’d be cool if you guys made a video covering ALL the servers you guys have ever had dating back to the original. Covering all the names, capacity, reason for upgrading, all that stuff, that’d be awesome!
And bring in ServeTheHome with his actual experience doing server maintenance to roast them.
@@hikaru-live I'm sure Ryan from lvl1techs would be good at roasting as well :)
Might be a good video idea to post to the forum!
they need a standalone server for a server video
Linus in 2024 : The MOTHER VAULT is completely FULL!
2023.... it's full of 16K video
@@davidreynolds8865 2025: The MotherVault 2 is full... Of 32k vids.
I'd give it a month
More like Linus tomorrow:
@@pitekargos6880 Or would it be the MotherMotherVault? Or the OtherMotherVault?
Love the bit at 1:57 where Jake clearly couldn't wait for Linus to finish his intro and tried to surreptitiously get the box open. 🤣
at 2:00 he stares deeply at the box cutter wishing he had it LUL
Finally, they are doing it. Hope this solution works for you. I've been using a slightly more refined version of this and it's been working OK. Really glad you are finally considering merging the Vault and Whonnock together. Once you've set that up correctly it should be much better/easier to manage.
Linus always building some insane stuff for our entertainment even if it has no feasible uses for us... kudos
Someday… we may need this information
I mean, he needs it and he's making a video to get back some money, lol
Or he's just upscaling his sponsor machine, earning money doing it.
@@thunderarch5951 he doesn't "need" it; he said in the past he's a data hoarder. It's purely to be able to use high-quality flashbacks to previous videos rather than downloading a low-quality version from YT
No uses for them either, no one cares about 2 year old videos let alone 10
What boggles my mind is that despite it being a known issue for years and despite spending so much money on so many other projects... you guys still haven't hired a full time system administrator?
they have a few people capable but they're usually doing other things
It's a pet project they're mostly just doing for the fun of doing it. Linus has admitted they'd be fine without it, it's just nice to have for grabbing old footage to reference
sure but think of all the content we'd miss out on. cant wait for them to screw up this server as well
Because all that money spent is probably still less than it costs to hire a full time SA in their area. Plus Linus needs to control everything cause he thinks he knows everything best, and a competent SA would probably not deal with his BS.
@@Flameb0 even if they don't screw it up they can come up with lots of new things they can do with that caliber of equipment
23:18 - I remember having my mind absolutely blown with the NetApp E5600A and DE6600’s years ago that had this same feature. Not only the density at the time (60 drives in 4U), but also the drawer that was still in operation when pulled was insane. Also had optical LED channels from each individual drive interface board going to the front panel. So cool.
I Absolutely love these storage videos!
Linus I watched your video around 2 years ago when you were deciding to quit due to juggling family and other psychological problems, now to see how far you have come I'm glad you didn't. Thanks :)
I feel like it was just an honest need that he had to tell everyone his stress. The shared host schedule and the channel diversification I think has really allowed him to keep doing only what he loves.
tbf he never said that he decided to leave, just "thinking about retiring"
Feels like these servers have been rebuilt like 6 times this year alone
😂 😂 😂 😂 😂 😂
[We’ve NEVER done this before…]
I.... don't..... believe it.
1:10 Luke enters. HELL YEAH ANOTHER GREAT VIDEO COMING STRAIGHT UP
Well, I'm not sure this is what LTT likes to do, but I loved the extra ZFS info we got. Definitely a surprising TIL or two in there. Thanks!
There's one cool thing with Ceph: it makes data loss *really* unlikely. When an error is detected, as long as you have enough space left in the cluster for the redundancy, it recovers automatically; even if you never replace the bad drive, you end up with the same amount of redundancy as before the drive went bad, you've just lost that drive's capacity. And usually you don't let such a cluster fill up to the point where it's unable to do that; you expand before.
Also, Ceph is a joy to expand: make the node join the cluster, create OSDs, done. Ceph will automatically rebalance the data if needed. Same thing if a node goes down: if there are enough nodes left to keep the required level of redundancy, it will automatically rebalance the data to make it happen.
We use this in our IT club at our uni, and it has saved us multiple times, keeping our infrastructure up and running even when we f things up. It is capable of recovering from very, VERY bad situations.
There was a time when we didn't have the "manpower" needed to replace bad drives in our cluster. It was fine: Ceph kept those bad drives out of the cluster and rebuilt the redundancy, so even after all the drives of a node failed (bad HBA card), some within days of one another, with no manual operation there was no data loss, only capacity loss, which was fine, as we had overprovisioned.
Truly a marvel.
In comparison, ZFS is only *good*, I think; a lot less flexible, certainly.
Edit: also, Ceph can handle drives of drastically different capacities, and handles them as well as possible, with little or no intervention (depending on the CRUSH rules, etc...).
Yup. Ceph is absolutely amazing. We ran our cloud on GlusterFS and absolutely hated every bit of it. Now we are running on Ceph. Every issue we have encountered has been just so pleasant in comparison. Drive failure - no sweat (as you explained above, Ceph handles it so well; I'll replace the disk whenever I feel like it). Expanding the cluster - it gets faster as a bonus! Bitrot on a drive - detected and fixed reliably. Rebalancing the cluster after a node outage or adding new drives/nodes - works as expected with virtually no ill effect on the performance delivered to VMs.
CEPH lets you do os updates with out big downtime
I am happy to announce that I built my first PC exactly one year ago, which means I haven't missed a single LTT video in almost 1.8 years. It was a new space for me; I remember what horrible parts I chose while making my first parts list, then after picking up knowledge from here and other creators I selected the best bang-for-the-buck components. My budget was very low, so it had to be good. My first selection was based on looks and brand names, but the second list I made around 6-7 months later was much better. All in all, thanks LMG for helping me love computers
I love these types of videos where we see some of the serious tech that is used for data storage and you could come across working in the IT world
i love watching the server videos these guys make cause it got me interested in doing a career in servers. Currently in taking my classes now, so exciting.
The LTT screwdriver is the LTT equivalent of Intel's Arc GPUs. "Coming soon"
Let's hope they don't have the same driver issues.
@@SciFiFactory "driver" .. heh ;)
At least this one is supposed to be top tier
ya except he actually has screwdrivers on their way
@Susanna It may actually be the best ratcheting screwdriver ever, but no one who uses a screwdriver regularly wants a ratcheting one.
“We built this million dollar computer”
- never heard from again after two videos
“Here’s another three and a half petabytes of raw storage”
Seriously, what happened with that?! No mention of it here at all, but it seems it's for the same issue?
@@arklanuthoslin Yeah, just wondered the same thing and looked through the comments for an explanation. I mean the other one was just 2-3 month ago?!
From memory, that was the production server. This is the archival storage server.
@@arklanuthoslin on the WAN show they mentioned Jake was busy working on the house + replacing staff on sick(?) leave
@@arklanuthoslin The million dollar server isn't theirs to keep. And Jake hasn't had the time
Let's take a moment to appreciate the fact that the 1000 watts quoted here is powerful enough to run most small push style lawn mowers
In terms of power density per RU, this is small bikkies. Start adding Instinct or Tesla cards and you can triple or more that density.
Two racks of servers is the rough equivalent of the power available to a small electric farm tractor.
Relative to the size of the system, it's not that much power since it's literally just spinning up disks plus minimal power for the SAS IO module and expanders, IPMI board etc. You can use more than that in a 1U server or a workstation if you want to. Couple of Xeons or Epycs and a couple of GPUs and you're already past that.
11:39
Linus?!? XD
Is the first petabyte vault really already 7 years old? Oh boy, time goes by way too fast ^^
The good thing about ceph is you could eventually roll all the other storage nodes in to the cluster. If you do that, please utilize watchers on other non-storage servers to add to the quorum. Having split brain storage is a nightmare.
The connector is called AirMax. It's used a lot on datacenter blade servers. Treat them super gently; they jam and break really easily. I recommend never disconnecting them.
If I were Linus I would do cold storage with Ultrium tapes. They are denser and can hold data for long periods, 15+ years, no problem
They should have done a lot more with tape than just talk a little about it. A library would have backed up everything.
"Dude! Is there a fighter jet starting in front of the building?!"
"Nah mate, just Jake & Linus testing a new storage server"
xD
I'm expecting them to build their own cloud solution in a year.
I hope not... AWS and Backblaze have them beat easily due to scale. The advantage of a big server is the local speeds are not limited by any ISPs
@@administratorwsv8105 I am with you, but you know why this is and that it, though reluctantly, has an understandable reason for it.
The funny thing is, the Clustering Tech Linus specifically said they wouldn't be using (Ceph) is the gold standard for Cloud Storage, scaling to dozens of PB.
At the moment, Technology is keeping up with Linus' demands; he can keep scaling up harder to get more storage.
At some point he'll have to scale out. And there, Ceph will be waiting
If they go with TrueNAS Core rather than Scale, Cinder is an option, which would then potentially open the door to an internal OpenStack deployment. I can think of a few interesting applications for such a thing at LTT.
@@ubermidget2 Ceph always wins..
It's very cool but from what I've heard LTT has like 50 employees. This is like 26 Terabytes per person if you are doing a 1-to-1 backup (and not all employees will utilize this fully). Perhaps LTT has a larger data inefficiency/clean-up issue? A media company of this size should not/does not need this much storage. It's definitely cool but it seems like for the past year or so we've been getting server upgrades pretty often. It seems like a bandaid solution to a larger problem IMO. It may be in LTT's best interest moving forward to hire someone to establish standardized procedures around data retention and general operational efficiency. If these things weren't sponsored like they are LTT would be breaking the bank just trying to keep up with its data problem.
you are aware that they keep all old video footage for archival purposes, right? A media company this large, i'm actually surprised they're *only* using those Petabytes. yes, they could delete their old footage, and to be honest, for archival really going to tape would be cheaper and in their best interest, but they've not got a data retention problem, they just have a lot of data that is considered to be relatively valuable to LTT
I don't think you understand how much footage 8k recordings are. I go on vacation and bring back 40GB, but now I can upload while on vacation
@@ashleighrowe2565 So valuable that they literally just said they didn't have access to it for over a year and didn't care.
It would be awesome if you did a video on how you organize ~2PB worth of files. It seems everyone has their own scheme, and to see what you all have come up with would be useful.
As someone who has worked in a datacenter for a storage company, this is the most unremarkable thing ever.
But I still find it really fun to see them geek out over it.
I'm an IT student and so far in my classes we've touched on these JBODs a little bit, but they've never really been explained. It was nice to see this in action and have a nice explanation given. Looking forward to seeing more!
@@tim3172 not really. I don't know about other curriculums but I only really got that kind of stuff in my concepts class where they taught about the physical parts of the pc and there was still an optical drive in the tests and stuff. Other than that, it's been pretty up to date.
16:45 Yeah I worked on JBODs like that for a Media Company and they really really do tear your ears apart. I got tinnitus after realising my hearing protection wasn't enough.
I'm really enjoying learning all this stuff about huge storage and data management!
I'd love to see them installing a tape backup. But they never complete a storage project.
They did, but we never heard of it ever again
Buying the second half of the new building and converting it to an automated tape drive storage.
That whould be one heck of a series.
@@xanderplayz3446 Yeah you jolted my memory, it was 2018 ruclips.net/video/alxqpbSZorA/видео.html
Since then LTO-9 (18 TB) has arrived and LTO-12 (144 TB) has been teased.
@@administratorwsv8105 What is used instead?
@@administratorwsv8105 What are you talking about? HPE and Dell all still sell LTO tape drives and libraries. What we've seen is a consolidation among the manufacturers; a number of these IT vendors just resell another OEM's drives & libraries with their badge on them, like Lenovo and Dell do. IBM & HPE still have a wide range of libraries. HPE's biggest LTO library can expand to a 56,400-slot, 144-drive 2.53EB behemoth. IBM has something a bit smaller. The other consolidation is that HPE & Quantum have left the development of the drives to IBM. Those three companies control the LTO consortium.
The chemistry between Jake and Linus is always entertaining to watch
At this point they literally just need to have a data center building with all their storage servers lol
You guys are insane and I like it. Keep up the good content!!
Will I ever build such a server? No!
Will I still watch Linus and Jake build it? Of course, I will!
Thanks for always keeping even the crazy builds so fun to watch!
*a few "linus dropped some things" later..*
"Our 3600TB Data Recovery Disaster!!"
LTT needs a site safety manager. Most businesses would write up or even fire someone for handling heavy-weight extremely expensive hardware like that. That table is one bolt or bracket away from popping and someone getting crushed.
17:33 "2000 watt Power supplies" said Shaggy to Scooby! 😂
24:22 Jake, you're running 16 jobs all against the same drive, which makes the workload more random. A single job at a 1M block size should be good enough. Your real-world performance will probably be similar to what fio showed, provided your application or RAID software doesn't serialize what could run in parallel. Also, the SES pages should show connected drives on one of the pages.
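The single-job sequential test the commenter suggests might look like this -- the target path and sizes are placeholders, not the settings used in the video:

```shell
# One sequential reader at 1M blocks. With numjobs=16 against a single
# target, sixteen streams interleave their offsets and the drive sees
# a near-random pattern; numjobs=1 keeps the access truly sequential.
fio --name=seqread \
    --filename=/mnt/pool/fio-testfile \
    --rw=read --bs=1M \
    --ioengine=libaio --iodepth=16 --direct=1 \
    --size=10G --numjobs=1
```

Queue depth (`--iodepth`) still keeps the drive busy without breaking sequentiality, since all outstanding IOs belong to one in-order stream.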
bump
The only silver lining about the whole server situation at LMG is the fact that you get to build a shiny new one for content & better than ever, complete with sponsors.
@C
No
@C that’s cool buddy
No one asked
The editor that added the yeee dinosaur at the beginning deserves a raise AND brownie points
8:56 SAMTEC and many others make connectors like those, they are usually called board-to-board interconnects or backplane connectors. Supermicro no doubt made (or had someone make) them custom for that and their other servers.
Since you seem to have missed it: a loss of the SPECIAL (metadata) device will result in total loss of the data. I mean, data without its metadata is useless. If you were to go this way, I would suggest splitting this device across disks in different locations -- e.g. a mirror with half of it in the JBOD. Which kind of kills most of the benefits.
you may have missed that Jake was suggesting redundant SSD drives for that. Yes, the risk of data loss is still there, but given they're doing RAID-Z2, that risk exists anyway. With the AFR of an SSD being lower on average than that of an HDD (assuming you get past the start of the bathtub curve), putting the special metadata vdevs on SSDs as he suggests doesn't raise the risk any more than keeping metadata on the existing drives, especially as Jake has 3 cold spares of those drives ready to deploy in case of a failure.
@@ashleighrowe2565 The bathtub curve applies to the risk of a single disk failing. In the case of a critical device such as the SPECIAL vdev, it bears considering other sources of risk as well, like, for example, accidental wiping of all disks in a single chassis. Normally "accidental wiping" is not much of a risk, but you know - Linus Sebastian has his ways :^-D
I'm discussing w/ 45Drives setting up a SPECIAL metadata device, and indeed if we go this route it'd be a mirror pair of SSDs.
I love how at their scale they could pay a company to manage their data professionally but they instead just YOLO it for the sake of making more content...
I anticipate that there will be an #LMG Datacenter before too long. Knowing Linus, he'll put it on his property of his new house to get it all paid for by sponsors, clients, and tax write-offs.
What I'd like to know is whether they'd ever auction off their old tech (like the SATA drives as they upgrade to SAS JBODs)...
Object storage from AWS or Backblaze B2 would be a good option for businesses grade "offsite cloud backup".
Managed data is so forking expensive.
B2 is about as cheap as it comes, and a year of 3.2PB of storage would be: $5/TB-month * 3,200TB * 12 months = $192,000/year.
Meanwhile:
Dual Epyc is around $20k
3.2PB of drives is around $60k
JBOD box is like $10k
That's a roughly $100k one-time purchase vs $200k a year. You can still afford an entry-level tech to sit and watch it for $80k a year.
And most of that $100k can be sponsored by like WD, Kioxia and Supermicro with free or heavily discounted hardware.
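A quick sanity check of the arithmetic above (the per-TB rate and hardware prices are the commenter's estimates, not vendor quotes):

```shell
#!/bin/sh
# Cloud cost: 3,200 TB at an assumed $5/TB-month, over 12 months.
TB=3200
RATE=5                                # USD per TB per month (estimate)
YEARLY=$(( TB * RATE * 12 ))
echo "B2: \$${YEARLY}/year"           # prints: B2: $192000/year

# One-time hardware: dual-Epyc host + drives + JBOD chassis (estimates).
HW=$(( 20000 + 60000 + 10000 ))
echo "Hardware: \$${HW} one-time"     # prints: Hardware: $90000 one-time
```

The one-time figure comes out to ~$90k, which the comment rounds to $100k; either way it is well under half of a single year of the cloud bill, before any sponsorship discounts.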
A petabyte at something like backblaze b2 would be around $5-6k per month. Pretty expensive, but also probably less than a full time Sysadmin... I don't think they could expect anywhere close to the current performance though even with their 10G fiber connection.
tbh, one week's worth of video revenue from all the LTT channels could maybe pay for it all, if it weren't being used by other projects
16:33 it's so nice they included the THX intro on that server
Jake: "Wanna see my wonderful cardboard?", 10:40. Fuzzy math.
Server videos are such good content, maybe I’m just a storage geek but man this is cool stuff. 😎
Just a friendly correction: the SATA controller is not hot-swappable.
On servers you can tell what's hot-swappable by the release tabs: orange/red means hot-swap, blue means swap only when the system is cold.
That color convention isn't consistent across every server vendor, though
A lot of this went straight over my head, but I loved it regardless
It's interesting seeing Linus look at something like this. I deal with these in my job (I work as a Cisco and Sun/Oracle tech), and Oracle has some JBODs; it's very interesting how they work.
Can't believe it's been 7 years since the original petabyte project... time's just passing by like nothing
damn way to make me feel old lol
Holy Christ. Yeah, can't believe it's been that long. I still remember hearing Linus announce it on WAN show as hist "pet" project
The fact that Linus' D%*# didn't fall off from that shaft not being silver is very reassuring. I'm more compelled to get one now
Watching Linus build things which I will NEVER BUY IN MY ENTIRE LIFE is so satisfying!
I'm surprised you didn't go the JBOD path a long time ago. The sheer magnitude of capability and flexibility it allows is amazing.
Server drives go brrrrrr
they commented before the video was posted lmao
How?
oh no he wrote comment before video release
Indeed
UNDERSTATEMENT
Enough storage is just never enough for LTT
Apparently, induced demand applies to computers as well. 😆
Gotta have backups of every single thing they've ever filmed in 8k 60fps
It's been over a decade and I still open YouTube daily to see your videos. Man, I love this channel ❤️
I love that you have finally done an episode on a JBOD and head server. I have been looking into this for a home or small business. There is not much on YouTube that clearly shows you how to build this at a home or small-business level. Would you ever consider doing an episode on how to build both a JBOD and a head server, and how to connect them?
Obviously depends on capacity or number of drives but for home or small business you would normally use internal disks. A server chassis with say 12x 3.5" drive bays and internal SAS or SATA controllers is way cheaper to buy than a server + JBOD + external SAS + cabling, it's less to worry about, less space, less power, etc. Also to be blunt if it's just file storage, then unless you want to nerd out for the sake of it - just go and buy a NAS from someone like QNAP or Synology.
I wonder if i can do something like the metadata storage on a Synology device somehow...
Ye
For my QNAP NAS, I know there are storage tiers, which I think is something similar; you can assign tiers by the speed of the drives, like HDDs, SSDs, and then NVMe SSDs
@@-argih It's not a SAN, and they were not LUNs, they were vdevs. The metadata storage thing is a feature of ZFS (they are using TrueNAS); nothing to do with it being a SAN or a NAS. You're very confused.
Yes you can do something similar, the setting is called "pin all metadata to SSD cache", if you Google it synology's support article comes up
24:29 the way linus looks up 😂😂
11:28 Again I say... the screwdriver is a lie!
26 min video watched in full, still no idea what’s going on. Love these guys
Love this kind of server content!
What's crazy is that Linus taught me so much stuff while entertaining each and every second. No university or tuition could teach how much you have taught me. Thanks a ton to the whole LMG Team. You Guys Are Legends!
They could if you'd pay attention. Though products themselves are for you to discover. RAID and its alternatives are definitely taught
16:50 RIP Sound guy
omg Linus did an Albert Einstein at 10:09
Linus Servers: Petabytes
Restaurant Servers: Bites
And if one of the restaurant's guests has an angsty dog: Pet bites server
@@maighstir3003 peta bytes server
It's hilarious how the employees yell at the boss for messing things up, LOL.
This upload pleases me as I was worried you wouldn't do yet another server video.
Regarding connector @ 8:44, I am fairly certain those are some variant of an Impact connector. Lots of companies, including TE Connectivity and Molex make them. They're used for routing high-speed differential pairs from a daughtercard to a backplane.
you may be able to use ledctl to identify drives. EX: ledctl locate=/dev/sda
Remember when Whonnock was a big project...
This channel has grown so much, and I hope it doesn't slow down anytime soon
Linus Tech Tips in 2035:
Building a zettabyte server for YouTube.
Honestly, these segues to the sponsor just never get old. I still get blindsided by them and don't know when they're coming, and I laugh about it every time. And then I skip through the ad, but I mean, it's still good stuff.
Linus every time he adds one tb to a server: "THIS IS THE CRAZIEST SERVER WE HAVE EVER BUILT"
11:23 "oh my gawd"
It's always a good day when we get to see Jake being SAS-sy to Linus
Having worked with a lot of these types of servers in a DC environment, you're going to want spares of: the external SAS cables, HBAs, drive sleds, the controller in the back of the JBOD, and possibly the backplanes too. Also, if your rack isn't already floor-mounted I'd recommend doing that as well, since these pose a decent tipping hazard when extended in a rack.
May also be time to get a room cooling solution better than the AC
Agreed, I've never used SM storage (We use DDN SFA7990X's, bout 24PB worth in our DC) but yes..spare drives are a must, lol.
Spares are for worrywart rookies
Don't worry, they won't do any of that.
That server do be crazy tho
Linus is soon going to have his own warehouse of servers!
and every server full of raw footage of building these servers :D
Glad to see LTT using some awesome Supermicro hardware!
The intro was pure gold. Laughed so hard!
I wanna see them doing a project with multiple of those 100TB SSD's that Linus only made 1 video about.
Linus: spending thousands of dollars on drives and software
also linus: I wonder what that port is!
24:30 *Linus` Internal Voice* "I want to say it, but I know I shouldn't." 🤣
Can we just appreciate how much older he actually is now, compared back to the whole-room water cooling days