Hey TQers. To clarify, our community post yesterday says "soon" and not "immediately" :)
Reading comprehension is apparently a skill not many people have mastered.
@@Sassi7997 This comment can't stop me, I can't read
hi linus
@Sassi7997 the problem isn't that we can't read, it's that we don't know why the change is happening in the first place
We don't care about that, but why is the change happening in the first place?
According to CompTIA and the Network+ exam, a server can be "anything that can be assigned an IP address"
Gotta love using an Xperia Arc as a NAS, best Server experience so far
There are a lot of CompTIA things that could be argued about...
Maybe that's true because it's something you want it to be. Nothing's stopping you.
Does it even really need an IP, or a physical connection? What about something like an Xorg display server, which apps on the same machine can talk to through a socket? It's surely just something that "serves" requests, but that's maybe too broad, and I guess they implicitly meant typical industry server hardware here.
CompTIA is technically correct. In the early 90s, engineers would do things like set up microcontrollers with just enough smarts to run an IP stack so they could telnet in to some TCP port or other to see how much a coffee pot weighed, and thus whether or not somebody needed to make a new pot. That's a server in action.
After 25 years of using desktop hardware for my home server (upgrading and replacing over the years), I finally put together a real-deal server. 2 CPUs, 72 threads, 96GB of RAM (128GB on the next power cycle), 5 drives in 2 RAID arrays, 4U case.
It's GLORIOUS!!
I grabbed an ex-business CPU + mobo + RAM combo from eBay (there are a lot on there, and /r/homelab and /r/datahoarder have recommendations about which to use). Thinking about upgrading my 2-CPU, 40-thread, 128GB RAM server to one of the Epyc options available, which can be had for as little as $1k for a 32-core Rome + motherboard + 128GB RAM, since I now have 180TB of space.
To do what? Nothing worthwhile.
@@I_Am_Your_Problem Hosting your own Netflix (Jellyfin), Google cloud (Nextcloud), backups, surveillance. It's awesome. Also, the skills transferred directly to my job.
living the dream dude!
I moved to this about 4 years ago. Got a Dell T110 series tower server. It's like a huge PC but almost silent, so it's great for home use. And it has tons of drive bays, so it was great to max out with storage and run ESX for all my VMs. At one point it had been on for 2.5 years without being rebooted. Recently I reworked the whole thing and moved it onto Proxmox, which was daunting at the time, but now I love it. It even seems more efficient at resource handling.
3 TB of RAM is crazy. But I still bet Google Chrome would find a way to use it all.
JVM would still complain of not enough ram.
Modded KSP probably
LTT actually did a video on how much system RAM Google Chrome can use. I think the OS stopped it at around 12GB of RAM used by Chrome tabs alone. Chrome seems to need even more RAM now than when they made that video, but if you turn on the performance setting in the browser, the RAM required drops a lot.
The only difference between a PC and a server is its purpose. If it connects to other computers, it's a client; if it accepts connections, it's a server. It's as simple as that.
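To make that concrete, here's a minimal sketch (Python stdlib only; the loopback address, port, and message are arbitrary choices of mine, not anything from the video): the "server" is simply whichever process binds and accepts, and the "client" is whichever one connects. Same box, same hardware, two roles.

```python
import socket
import threading

# The "server" role: bind a port and accept incoming connections.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 5000))
srv.listen()

def serve_one():
    conn, _ = srv.accept()            # wait for someone to connect
    with conn:
        conn.sendall(b"hello from the server role\n")

t = threading.Thread(target=serve_one)
t.start()

# The "client" role: connect out to whoever is listening.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 5000))
    print(cli.recv(1024).decode(), end="")

t.join()
srv.close()
```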
I'd say redundancy plays a big part, but only if you deploy servers in numbers. Servers can take decades' worth of beating before falling over, while desktop components will just fail, taking the whole thing down. Servers have a lot of redundancy built in. A NIC, PSU, random fan, stick of RAM, whatever dies: when the environment is set up right, just let the host shut down, fail over, then swap out the hardware and reboot it. Sometimes you can even just hot swap most of it. Depending on how your cabling job in the back of the rack is lol
@@Baulder13 I don't think what you said is necessarily important for a PC to fit the definition of a server.
I connected an old cheap 2014 laptop to my router and forwarded some ports; I host a website, a VPN, a password manager and other small services on it.
By all practical means it's a server, but it's not the big racks upon racks of compute power with the heaps of redundancy you imagine.
Yeah, I would say that the biggest difference is the software.
They should have specified they mean dedicated or high performance servers.
An old gaming PC, laptop or Raspberry Pi running some services are also servers, just not rack-mounted HPC machines.
Also, hot-swappable drives, dual redundant power supplies, an iLO board (Integrated Lights-Out), multiple NICs, the list goes on.
Oh yeah, and lots of noise.
Totally.
@@MrPants-xy6db Maybe, but also maybe not. Some servers are water cooled. It depends partly on where they are installed.
@@christophernugent8492 I don't think that's very common in the server space to have watercooled servers.
@@JJFlores197 Google does. I have seen them first hand. I'm an electrician and worked for a company that built one of their data centers from the ground up, then stayed on for 3 months for QC. I have also worked in other data centers. Theirs is on a whole other level.
Super cool! Glad to help on this one.
ok.
Probably a vid scheduled for release earlier, before the hiatus post 😅
Some videos are more like side projects and may spend months on the back burner while being edited.
Where is that post?
no more LTT content? NOO
@dariosantacruz the community tab on this channel
@@TKInternational76 LTT is staying, but TechQuickie, Mac Address and GameLinked are going.
Love the fact that the green screen has the same style as years ago!
Kinda surprised this is the first video on this. Good for people to know.
The difference is function, not form. We had a dozen old desktops which used to function as PCs that we started using as print servers (of a sort... it's complicated). They were formerly developer desktops and, when they were too old to be used as such, we just started using them as interfaces between the mainframe and printers (again, it's complicated). Nothing in the HW was changed. The function was changed. I've used old PCs as servers at home as well, with no or little HW change, so again: it's function, not form.
I mean, technically "server" and "PC" are just applications, use cases. Yes, a computer can be designed to run server or PC tasks more effectively, but you can also just use a PC as a server, for example a cheap Minecraft server for your buddies out of an old office tower. Or perhaps picking up a decommissioned server to use as a PC and just throwing a GPU in it. But this was still probably the most thorough video explaining the differences yet.
For me a Server is a system that can handle multiple users at the same time. That's it.
User 'requests', is a bit more accurate
Sure that sounds easy, but I just keep running out of USB ports for all those mice...
Unless you're NCIS. Then two people can use the same keyboard at the same time... 😮💨
That's a function of the software, though, not the hardware. A desktop PC from the 2000s can be put into service as a web server. It won't answer a million concurrent requests at reasonable speed, but it can definitely handle more than one at a time.
@@argvminusone Yes, exactly. The hardware can be optimal or not for the task, but it doesn't make a PC or a server. The software/OS does.
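A quick sketch of that point: Python's standard library alone turns any old box into a small concurrent web server, no server-grade hardware required (the port here is an arbitrary pick of mine; ThreadingHTTPServer needs Python 3.7+):

```python
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

# Serves files from the current directory, one thread per request,
# so several clients get answered at the same time even on old hardware.
httpd = ThreadingHTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)
print("serving on http://0.0.0.0:8080/")
httpd.serve_forever()
```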
dear linus, please push this over to the main channel to roughly educate your viewership on the difference between desktop > workstation > server > computing cluster.
(and while you're at it: a Mac is still a PC, as is a Raspberry Pi, or stone-age Personal Computers like a C64.)
It’s been a fun journey. Hopefully this channel will still remain.
What? Why would this channel not remain anymore?
@@DacLMK See yesterday's community post. This channel is going on indefinite hiatus along with GameLinked and MacAddress
@@shishsquared nowhere did it say indefinite. Based on what it says it’s most likely temporary. I don’t see why they would get rid of techquickie.
Probably a video made before the hiatus notice?
"Damn, they don't know how to read"
WHAT hiatus notice??!?
@@QueMusiQ Techquickie, GameLinked, and MacAddress are going on indefinite hiatus
@@shishsquared where you see that notice?
@@kmeanxneth the community tab in the channel
TLDR, the software, that's it. The hardware configuration is completely dependent on use case, a workstation will be more like a server but in your office. A remote gaming server can use all gaming kit and be in a rack. There is no fundamental difference outside of software.
7:16 - someone's getting fired in the morning!
Great timing on the video :) I've recently become obsessed with homelabbing. Also, what's with the PowerPoint transition at 0:26 😭
That hiatus was quick
This is the sort of video you should be producing. Stuff for everyone.
Your PC can be a server. Just depends on configuration of hardware and operating system. Also, intended use is part of what determines it is a server or not.
7:00 those passive C-Payne bifurcation boards are pretty nifty - I've used a similar one a few years ago. Afaik they're offering actively switched ones (ASMedia/PLX bridge) as well.
I feel like i've seen this from LTT before, but this way more concise!
Not only the hardware but also the software. I used to work for the USA's largest bank, which has pretty much all of its processing here in the UK (yeah, that's odd). Anyway, they have the biggest DC in the UK, the size of 3 football fields. It is insane. Every server runs a special build of Linux and is immediately absorbed into a massive computing pool once it comes online. So those 2000+ physical servers are all effectively one. It was super clever and very interesting to work with.
Memory errors in gaming PCs are more common than most people think. A single bit error, which ECC can detect and correct, doesn't necessarily mean that a program will crash. If you look at a gaming PC, the executable code in memory is not all that large. Most memory is used for graphical elements, sounds, game maps and so on: memory that doesn't really contain executable code. If a single pixel in a graphical element is the wrong color, or a single sample of a sound has a bit error, there is little risk that the game or OS will crash. In a lot of cases you won't even notice anything more than a blip in the sound, a strange frame in a video, or a single pixel in some texture that has the wrong color. The code running through the processor takes up pretty little memory compared to all the other assets used by the OS, the programs you use or the game you are playing.
Now, I did say that ECC can detect and correct single bit errors, and that's the bog-standard ECC that's been with us for many decades. It will detect dual bit errors and more, but it can't correct them. A single bit error is something you will not notice when using a machine with ECC memory. All that happens is that there will be a note in the system log saying an error was detected and corrected. One such message is not worth thinking about; it's just ECC doing what it is supposed to do. There is a real problem, however, if you start seeing numerous ECC messages in the log. If they are occurring several times per second or so, the log in Windows will just have a message saying that errors have occurred and been corrected, but as they are so common, reporting of them gets paused after some number has been reached. It's been years since I worked with this, but I think it was around two hundred errors before pausing. If this happens, there is a real problem that should be addressed ASAP. The thing is, if an additional bit error occurs so there are two or more bit errors on the same read, the OS will stop the machine, as ECC can't repair the error.
So why do bit errors occur? Not all bit errors mean there is a real memory problem. Most of them are simply generated when particles of cosmic radiation strike a memory cell and manage to change the value, making a "1" a "0" or the other way around. This is just something that happens, and we really can't shield computers against it. These particles can go through the earth, so a lead shield or something like that wouldn't make much difference. So we use ECC memory in servers to try to keep the bits in order.
Now, there are more advanced ECC and memory protection techniques. I remember that many years back some chip manufacturer designed an ECC-based technique that would detect and correct up to dual bit errors. I can't remember the details though. There was also one that would map memory as unusable if it detected major errors in a certain memory segment. The intention was that if a machine crashed because of memory errors that the ECC couldn't correct, it would map that memory as unavailable when the machine was restarted, making it usable but with limited memory. I think it deactivated a whole DIMM at a time, but again, this was a long time ago and I can't remember the details. Also, this was way back when the chipset on the motherboard contained the memory controller. Unlike today, the chipset manufacturers could invent their own ways to protect memory, something they can't do now that the memory controller is integrated in the CPU.
Some really interesting solutions have been lost with the new modern processors, but the added performance is considered worth it. Today it's the processors that have to support ECC, something Intel is bad at doing for their desktop processors. The Ryzen processors from AMD are a bit better at this, but even then not all models support ECC, and a lot of motherboards also lack ECC support.
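For anyone curious what "correct one bit, detect two" means mechanically, here's a toy sketch of the Hamming-plus-overall-parity (SECDED) idea on a 4-bit word. Real ECC DIMMs apply the same scheme to 64-bit words with 8 check bits; the layout below is just simplified for readability:

```python
def encode(d1, d2, d3, d4):
    """Pack 4 data bits into a Hamming(7,4) codeword plus an overall parity bit."""
    p1 = d1 ^ d2 ^ d4                      # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                      # covers positions 2,3,6,7
    p4 = d2 ^ d3 ^ d4                      # covers positions 4,5,6,7
    w = [p1, p2, d1, p4, d2, d3, d4]
    p0 = w[0] ^ w[1] ^ w[2] ^ w[3] ^ w[4] ^ w[5] ^ w[6]
    return w + [p0]                        # p0 enables double-error *detection*

def decode(cw):
    """Return (data bits, status) after single-error correction."""
    w = list(cw[:7])
    syndrome = ((w[0] ^ w[2] ^ w[4] ^ w[6]) * 1 +
                (w[1] ^ w[2] ^ w[5] ^ w[6]) * 2 +
                (w[3] ^ w[4] ^ w[5] ^ w[6]) * 4)   # = position of a single flip
    overall = 0
    for bit in cw:
        overall ^= bit                     # 1 iff an odd number of bits flipped
    if syndrome and overall:               # one flipped bit: fix it silently
        w[syndrome - 1] ^= 1
        return (w[2], w[4], w[5], w[6]), "corrected"
    if syndrome and not overall:           # two flipped bits: detect, can't fix
        return None, "uncorrectable"
    # no syndrome: data is clean (or only p0 itself flipped; data still intact)
    return (w[2], w[4], w[5], w[6]), "clean"

cw = encode(1, 0, 1, 1)
cw[5] ^= 1                                 # simulate a cosmic-ray bit flip
print(decode(cw))                          # ((1, 0, 1, 1), 'corrected')
cw[2] ^= 1                                 # a second flip in the same word
print(decode(cw))                          # (None, 'uncorrectable')
```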
I'm actually going to miss this channel.
What's happening?
@mossy_6475 They're retiring it and a few others, part of a new corporate strategy.
@@mossy_6475 Check the community tab; what it says there is all the rest of us really know.
Definitely gonna subscribe for more tech quickies
Beautifully presented with clear explanations. Thanks!
The pace of this video is 1.5x what my brain can comprehend. I know tech stuff, but wow, this was fast-paced. So many things thrown at me at once. Love Techquickie, but slow it down just a bit in the future.
This is my fav channel, please don't stop ❤
This video should have been titled "What's the difference between server and PC hardware?" Old PC hardware can be used for a server. The difference is function, not hardware.
The easiest way to do a server for home use or your own Plex is a mini PC with an i5-10500T, like an HP EliteDesk, which can be had for $120ish; then grab a DAS, fill it up with drives, and there you go.
Video suggestion - maybe for the main channel: building a new NAS/streaming server without going crazy with custom scripts or unobtainable hardware.
Awesome topic… I’m still waiting for the LTT Equinix vid to drop!
So this is the difference between data center and consumer not PC and server. It's easy to conflate the two. But they aren't the same. Anyone with a NAS or a wifi printer has a server on their home network. Server is just any machine that provides a service to other PCs on the network.
Thank you Linus for the video!
Now I can finally explain to non-tech people what I do at my job when I say “I sell servers…computers but industrial…”
I used to be an IT guy at a non-profit. We used AMD-based 486 off-the-shelf parts repurposed from office computers, in a regular case, in our NOVELL 4 servers. They never went down. That was in the 90s.
Architecture used to be different, too. x86 machines rule the roost now but most Unix and Unix-like boxes weren’t x86 until sometime after the turn of the century.
I have a uVAX II and ostensibly, it was a server at some point. But, I think the exception to the x86 might be IBM’s mainframes, now. Or ARM.
"What's the difference between a Server and a [personal computer]" is kinda answered in the name of them, tbh, but I'll watch either way.
the RAM part is why I would want to get old server stuff for cheap... having a RAM drive with games installed on it is something I always wanted to try! (modern games, that is, cause for old games even an M.2 is instant loading.)
Could you make a video on all the hardware that you would typically find in those enterprise server racks?
4:40 speed chart. The further you go, the faster and more expensive it gets.
Hard drives < 2.5" SATA SSDs < NVMe SSDs < slow RAM < faster RAM < L3 cache in CPU < L2 < L1
that's all i know.
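To put rough numbers on that chart (ballpark figures I'm assuming from the commonly cited "latency numbers every programmer should know" tables, not measurements; real values vary a lot by hardware generation):

```python
# Approximate access latency in nanoseconds; orders of magnitude only.
approx_latency_ns = {
    "L1 cache":         1,
    "L2 cache":         4,
    "L3 cache":        20,
    "DRAM":           100,
    "NVMe SSD":    20_000,
    "SATA SSD":   100_000,
    "HDD seek": 5_000_000,
}
for tier, ns in approx_latency_ns.items():
    print(f"{tier:>9}: ~{ns:,} ns ({ns / approx_latency_ns['DRAM']:g}x DRAM)")
```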
Thanks for the video!
No mention of fault tolerance in parts, multiple power supplies, RAID cache batteries, RAID controllers and disc arrays with multiple redundancy.
Also no mention of IPMI or iLO (HPE iLO is actually amazing for remote management).
Dual-ported enterprise storage, so that there are basically two computers attached to it in one enclosure; even if the server completely craps the bed, it can still keep on ticking with the other half...
None of these are requirements for a computer to be considered a server.
HP, Dell and many other vendors sell server devices with... none of them. The bottom third of the ProLiant series contains none of them.
What on earth is a "disc array"? Is that when you have multiple DVD burners in RAID or did you mean to say "disk" with a k?
Stop. Think. Post.
@@tim3172 Yes. Stop. Think. If you'd done that, you'd not have posted.
Recently there's been a push toward compute density, even if it ruins efficiency. In the AI race nobody considers the long-term costs, but not having to open a new datacenter saves a lot of time.
Another key difference is remote KVM and IPMI integration.
No one wants to stay close for long periods of time to these space heaters with turbines, or travel far just to turn it off and on again.
Oh, hope he talks about server workstations. Been dreaming of owning one for various AI things. You know how some people look at boats, cars, homes, etc. they won't ever own but dream of?
Yeah, that's me with server workstations, since they cost $50k to hundreds of thousands depending on the custom build.
03:40 A.I researchers can you please add this layer already? We're ready for them to become conscious!
who/what is using it; if it's a person: it's a PC, if it's a PC: it's a server. also servers generally have more system resources, not necessarily better system resources.
Watching during a gaming break. I'm winning, dad.
A server has a management interface (IPMI) to better access and monitor it remotely, even when the OS crashes.
Or you just don't want to sit next to the server in a loud and hot datacenter while doing firmware updates...
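For the curious, this is roughly what that looks like in practice: a sketch wrapping the common ipmitool CLI from Python. The BMC address and credentials are placeholders, and it assumes ipmitool is installed and the BMC has lanplus access enabled:

```python
import subprocess

def bmc(*args, host="10.0.0.42", user="admin", password="changeme"):
    """Run an ipmitool command against the server's BMC over the network."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host,
           "-U", user, "-P", password, *args]
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout

print(bmc("chassis", "power", "status"))  # works even if the host OS is down
print(bmc("sel", "list"))                 # hardware event log kept by the BMC
```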
Not all of them.
HPE sells many servers without remote access as does Dell, Lenovo, etc.
1:13 you guys missed a great spot to plug in the Seasonic sponsorship. I literally expected the ad here. But never mind.
Looking at the modern prebuilts at stores and the enterprise models, they look pretty similar underneath.
What's the difference between the water storage tank on your home's rooftop and the one at the top of the town's water tower? 🤨🤨¯\_(ツ)_/¯
Do videos about different types of server motherboards, or how to choose a build for different types of servers.
As someone with six of those servers, I can confirm they are pretty sweet, but pricey to buy, run, and maintain 😂
Video suggestion - under a possible new "What's the difference" video series??? As an old-school elder, I've never been a huge fan of mobile phones, but even to me it's fairly common knowledge that you should "ideally" charge your phone from 20% to only 80% to help the battery last longer over time. So imagine my surprise when my new Dyson vacuum's documentation recommends running the battery down to zero once a month or so, to help the battery last longer. Not knowing a lot about battery life, it immediately sounded like a scam to me - so you have to buy a new Dyson replacement battery when the old one dies, due to running it from 0% to 100% too often. But if it's NOT a scam, then how about a video that compares the differences between PHONE / VACUUM / and, say, EV CAR BATTERIES, especially from the point of view of trying to extend their life as much as possible, and how that might differ from one type of battery to another?
I was expecting baseboard management controllers to also be a topic.
YES I NEED TO LEARN ABOUT THIS TOO!
Considering mini PCs nowadays - are they any more space-efficient or power-efficient than servers, if you had dozens of them?
A PC is a computer you can shut down. A server is one you leave running. Yeah, there are hardware choices people make that skew one way or the other, but honestly the biggest difference is availability. A server is expected to be available 24/7 because it is providing services used by multiple people simultaneously. A PC, on the other hand, generally only has a single user at a time. Patrick is great, but he is very narrowly focused at times, and I think this is one of them. He draws a very distinct line between what is used by businesses and what is used by consumers, even though there are situations where you actually do mix them a little.
I want LMG to do what is best for the company but I'll miss TQ! Looking forward to whatever you have planned though.
Please, can you do a video about full-tower server cases? I'm looking to buy one for my next PC with a lot of drives.
When I tried looking up this video, I ended up finding your video from 10 years ago named "Servers vs Desktop PCs as Fast As Possible".
"As much memory as ssd storage"
Me chuckles at my 20 terabytes of SSDs.
I can beat that :p
12TB in my main PC and 24TB in my home server. And 4TB in the laptop and lots more smaller SSDs besides.
@Steamrick yeah, I have 2× 8TB Samsung EVO drives (SATA) and 2× 2TB M.2.
@@ishmaelisaac4579 3x 7.68TB, 2x 3.84TB, 1x 4TB, 2x 2TB, 1x 1TB, various smaller SSDs. A mix of SATA and NVMe. :)
Soon we'll be seeing videos in a tiny strip. Why not keep the resolution matched to existing screens? It's the same with movies: we have 16:9 screens and they film with huge black bars. I don't understand the point of filming between 2 black lines... :)
I think a lot of the problem is that in the late 1990s and early 2000s, desktop PCs with multiple hard disks and a CD or DVD-ROM drive were called home servers and multimedia servers. Granted, compared to the silly amount of storage you can fit into today's ATX or micro-ATX cases, those were, at the end of the day, hopped-up PCs with excessive storage space.
"PC" is becoming a more general term, as many devices can act like PCs while also having the ability to host servers, albeit still very limited in comparison to desktop-OS PCs.
1 petabyte in 1U?! In 2006 the entire YouTube catalogue only took up 40TB :O
And they didn't even get into redundant everything servers. Redundant power supplies, redundant mainboards with redundant networking and redundant access to storage (dual sas, for example)... just so that if something completely craps itself, the server still keeps on running.
None of these are requirements for a computer to be considered a server.
@@tim3172 There's no such thing as hard requirements to be a server. The video mentioned some hallmarks and I mentioned some more. Your comment is pointless.
While it's a good description of how datacenter systems differ from home ones, I think the video fails to address the actual difference between a server and client. After all, any computer can perform any task you program it to do. A computer isn't a server just because it's in a specific form factor or in a specific setting.
Hey that nuke plant is in my hometown, Berwick PA! 1:04
Actual answer: nothing. HOWEVER, actual purpose-built server machines are designed more for networking workloads as opposed to raw performance.
Think of it as the difference between having a fleet of buses or work trucks vs. a single high-performance sports car.
Hi from South Africa
what's the whole hiatus thing? what's the context?
they have a community post about it
Ones for serving and the other is for personal computing. What do I win?
So is this a teaser for new new new whonnock?
Make a video on 3d monitors please?
Really, the hardware difference between your home PC and a server is that most servers will have redundancy, and most servers will have error-correcting RAM. The rest depends on the use for the server, as server CPUs can range from 8 cores that are very efficient at passing data from PCIe slot to PCIe slot and from PCIe slots to system RAM, up to 192 cores for heavily multi-threaded CPU compute tasks. Then you have servers with maybe 128GB of system RAM and maybe a quad-core CPU, but 50 or more 3.5" drives, etc. With the different types of servers out there, you can even turn a Raspberry Pi 2B into a server, and if it can do what you expect of it, it will run well. The point of that very low-end computer example is to show that server hardware doesn't have to have insane specs, unless you are looking at how much space and power it takes, which the Raspberry Pi is kind of good for. It's just that most enterprises do not like community-supported hardware or software, so even in environments where low power is everything, they still use x86 CPUs due to business-to-business support networks. The thing is, in homelabs, Raspberry Pis and their competitors are used as good servers.
Servers are very specialized for one thing, while the home PC is built for general use. So, no insane amount of anything, but everything will work.
Server software will run both on purpose-built hardware and on your home PC, as long as you have a processor that can run the correct instruction set or sets.
Is a supercomputer a server?
Yes!
Linus, what about the carrier boards like Nvidia has?
Their Orin boards are about the size of a credit card, and that's including 64GB DDR5 and the largest NVMe SSD you can stick in there (besides PCI Express).
Considering they have multi-module boards for AI and networking, could you cram more hardware in that way, say if you have 4× 7-module boards per shelf/rack?
That would be 336 CPU cores, 1.792TB RAM, 224TB storage (not including PCIe or other connectivity) and 57,344 GPU cores - and each module can be dialed from 50 watts down to 15 watts.
That's a grand total of about 1.4kW at 50W each, or under 500 watts at 15W each.
That's more grunt for less power than an Epyc server, GPU compute included!
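Sanity-checking those totals, assuming 4 boards × 7 modules = 28 modules with Orin-class per-module specs (12 CPU cores, 64GB RAM, 2048 GPU cores; 8TB of storage per module is my guess to match the 224TB figure):

```python
modules = 4 * 7                      # 4 boards of 7 modules per shelf

cpu_cores  = modules * 12            # 336 CPU cores
ram_tb     = modules * 64 / 1000     # 1.792 TB of RAM
storage_tb = modules * 8             # 224 TB (assumed 8 TB per module)
gpu_cores  = modules * 2048          # 57,344 GPU cores

watts_hi = modules * 50              # ~1.4 kW at the 50 W cap
watts_lo = modules * 15              # 420 W dialed down to 15 W

print(cpu_cores, ram_tb, storage_tb, gpu_cores, watts_hi, watts_lo)
```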
Please consider buying weird military computers and showing them off. From the gear the soldiers wear to the unusual machine learning, data crunching vehicle mounted computers they put in the humvees.
They've said they don't want to act as marketing for any military
@ they won’t be. Showing off military computers isn’t marketing for the military.
@@christophernugent8492 actually it totally is
@ in what way? Just showing off the tech isn't telling how great it is to get killed overseas. By that logic, showing off rack servers is encouraging users to mortgage their house to buy a datacenter. Just showing off the computers should not be a problem.
One is a big beepy thing that doesn't make sense (to me). The other is a small whirry thing that heats my room.
One serves the other; it's as simple as that 😂
2:34 next server build?
The power!!!
Like the difference between a race car and a tractor: both will get you there, but at different speeds and pulling different amounts of stuff.
I'd like to see a vid where you game on a server, just as a comparison.
I think they have done that. The results usually aren't that impressive. Most servers are not meant for gaming and usually have terrible graphics cards. That is, unless you have servers with powerful graphics cards, but those are usually for specialized non-gaming applications.
skibidi mistake !!!! you say 10 years of warranty at 3:25 while the screenshot says 12!!! fortnite sigma fanum tax loooooooool
Virgin
Server? I hardly know her
back from the dead PagMan
Is this from 9 hours ago or 9 years ago?
the tip
0:13 bro didn't mention the PSU
A server is a program running on something; a PC is a personal computer. You're welcome.
The point is the term "server" has different meanings, so context is key.
Every PC can be a server, but not every server can be a PC
Im running that rizztech propylon ecc
But why do MMO servers still show no improvement and still have the same problems?
One prefers more cores, and the other prefers higher clock speeds
Awesome!!!