For those who got this recommended by the RUclips algorithm, here's how this disaster could have been avoided:
1. Redundant A/C Units on different circuits are a must.
2. Environmental monitoring is an absolute must in any equipment room. Most datacentre grade UPS units offer environmental cards.
3. Monitoring software that can query the environmental cards and page you when readings are out of whack (such as PRTG).
4. Software that can automatically shut down hosts in the event of a power failure or environmental issue.
By doing these things, not only could the IT team have saved money on replacing servers, but an outage could have been avoided. (Also, Novell in 2011? Yikes.)
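As a minimal sketch of steps 2–4 above: poll an environmental sensor, page on a warning threshold, and begin a graceful shutdown on a critical one. Here `read_room_temp_c()` and both thresholds are hypothetical stand-ins; a real deployment would query the UPS environmental card (typically over SNMP) and use limits appropriate for the room.

```python
# Hedged sketch: the sensor read and thresholds are illustrative, not
# taken from any specific UPS vendor's API.

WARN_C = 30.0      # page on-call above this (assumed threshold)
CRITICAL_C = 40.0  # begin graceful shutdown above this (assumed threshold)

def classify(temp_c, warn=WARN_C, critical=CRITICAL_C):
    """Map a room temperature to an action: 'ok', 'page', or 'shutdown'."""
    if temp_c >= critical:
        return "shutdown"
    if temp_c >= warn:
        return "page"
    return "ok"

def read_room_temp_c():
    # Placeholder: a real deployment would query the UPS environmental
    # card here (e.g. an SNMP GET against the card's temperature OID).
    return 22.5

if __name__ == "__main__":
    print(classify(read_room_temp_c()))
```

The point of splitting `classify()` out from the sensor read is that the decision logic can be tested without hardware, which is exactly the part you want to trust at 3 a.m.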
Have you seen the Windows XP logo in one of the classrooms?
My question is: why didn't the servers shut themselves down? They have onboard heat sensors, and when those detect too high a temperature they should gracefully shut the system down. Or is the hardware too old for this? (Isn't it implemented in every server?) Or did they turn off the thermal protection? The servers could have died from other causes too, then, like a CPU fan failing and the machine cooking itself without anyone noticing.
Please correct me if I'm wrong on some points, but as far as I know, data and hardware have higher priority than availability (at least on modern systems).
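For what it's worth, on a Linux host those onboard sensors are typically exposed under /sys/class/thermal in millidegrees Celsius, so a userspace watchdog can read them even if the firmware only logs a warning. A rough sketch, assuming a default zone name (the path only exists on Linux, and which zone maps to the CPU varies by board):

```python
# Sketch of reading a kernel thermal zone. The zone name is an
# assumption; on a VM or non-Linux host the path simply won't exist.
from pathlib import Path

def millic_to_c(raw):
    """Kernel thermal zones report e.g. '54000' for 54.0 degrees C."""
    return int(raw.strip()) / 1000.0

def cpu_temp_c(zone="thermal_zone0"):
    path = Path("/sys/class/thermal") / zone / "temp"
    try:
        return millic_to_c(path.read_text())
    except (OSError, ValueError):
        return None  # sensor absent (VM, container, non-Linux host)
```

Returning `None` instead of raising keeps a monitoring loop alive on machines that lack the sensor, which is usually what you want in a mixed fleet.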
@@marvinlueken25 Depends on the manufacturer. Working in IT myself, I've seen some Dell servers that just throw up a high-temperature error on the front LCD but don't shut the system down. As odd as it sounds, I think this is to protect against data loss: if the server shut itself down without confirmation, it could cause service issues. If you're deploying servers in any infrastructure where outages cost time and money (from schools up to big businesses), you should be monitoring the environment the equipment is installed in.
I liked seeing the Sun logo without Oracle at 3:12
Or just have 2 AIR CONDITIONERS
And then you have a cascading failure of both, and you're still up the creek without a paddle. Meaning that no matter how much redundancy you build in, there is always room for Murphy.
A power fault can take both of them down. So the best thing, in my opinion, is to have hardware room sensors that alert you to these problems, so you can shut everything down until you fix the A/C. But yes, two A/C units do give you better odds.
Most places run the data room off the main air handler, with a thermostat using the same chilled water as the rest of the building, and then have a dedicated mini-split in case, for instance, the chiller fails.
Novell Netware at 0:53 :-)
At first I thought this was a joke video because of how serious they were, but it turned out they weren't joking.
There should be some sort of AES (Automatic Emergency Shutdown) that would automatically make the servers save all their data to disk and shut down if the temperature got too high.
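A sketch of what that "save and shut down" step could look like on a Unix host: flush filesystem buffers with `os.sync()`, then ask the OS for a graceful halt. The `dry_run` flag is an addition for illustration, so the command can be inspected without actually halting the machine:

```python
# Hedged sketch of an over-temperature emergency shutdown. The shutdown
# command is only invoked outside dry-run mode, since running it would
# halt the host this script is on.
import os
import subprocess

def emergency_shutdown(dry_run=True):
    os.sync()  # flush dirty pages so pending writes reach the disks
    cmd = ["shutdown", "-h", "now"]
    if dry_run:
        return "DRY RUN: " + " ".join(cmd)
    subprocess.run(cmd, check=True)  # raises if shutdown(8) fails
    return "shutdown requested"
```

In practice you'd want this triggered by the monitoring side only after a confirmation delay or a second sensor agreeing, for exactly the reason raised below: an unwanted shutdown has its own cost.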
Yeah, it should, but if you shut down the server it might cause some issues of its own.
Air Crash Investigation meets a school IT team
And that is the reason why you don't have everything hooked up to computers. They are going to fail sooner or later; computers are complex and sensitive machines, after all. For the best reliability, design a fault-tolerant network and keep backups somewhere else. Have a backup plan in case of total computer network failure.
watcher206 It's not the computer's fault that the air conditioning failed.
This is why you have off site data centres.
+McRambro Especially if you're located in tornado country.
Bruh just turn it on at the Plotagon High School and install Windows Server 2012 R2
The motherboard died, so you couldn't.
Bruh just turn it on
The motherboard died, so you couldn't.
@@AIC69420 just fix it lol
@@randalfik7822 The servers were old anyway as mentioned so he recommended to change them
@@godlyghost3111 It's actually not a whoosh, because it's just an explanation, and if you read it carefully it's irony and satire.