PirateSoftware Breaks Down CrowdStrike Computer Issue

Comments • 664

  • @matthewbohne7238
    @matthewbohne7238 3 месяца назад +604

    5:42 - Yes! It took me a while in the IT space to find the confidence to look my boss straight in the face and say "If you see me working like crazy, or in a panic... something is very wrong. It's being handled, don't bog me down with meetings and superfluous communication. If you want to help, I'll show you what you need to do. Otherwise, leave me alone and let me work."
    Now as a lead, I am the wall. If you see my guys working hard or in a panic, you don't bother them. You talk to me.

    • @viviorko
      @viviorko 3 месяца назад +25

      The greats manage both.
      With respect

    • @rollin340
      @rollin340 3 месяца назад +24

      It's folks like you, who learn from experience and then put that experience toward making sure the next generation of workers at that level can do their jobs as well as possible, who are best suited for management positions. Promoting people up from their existing positions, when they're able, ensures that things work smoothly. Too often we have managers who don't understand the smallest nuances of a role, demanding outrageous shit as if it's normal.

    • @DeeDeeCeeCee
      @DeeDeeCeeCee 3 месяца назад +4

      Good egg, need more people like this as leads

    • @jacobbissey9311
      @jacobbissey9311 3 месяца назад +25

      "Maxim 2: A sergeant in motion outranks a lieutenant who doesn't know what's going on.
      Maxim 3: An ordnance technician at a dead run outranks _everybody_."
      -The 70 Maxims for Maximally Effective Mercenaries
      In an emergency you *always* defer to the person who actually understands what's going on, irrespective of the normal chain of command. Always.

    • @rollin340
      @rollin340 3 месяца назад +16

      @@jacobbissey9311 Tell that to management who cannot handle anybody of a lower grade within the company trying to correct them. There are sadly too many egotistical people in charge of things they do not fully understand.

  • @randommixedletters
    @randommixedletters 3 месяца назад +1141

    "This is the worst outage we've ever seen in our lifetimes".
    This is the worst outage we've seen in our lifetimes, *so farrrr* .

    • @allanrogersfj
      @allanrogersfj 3 месяца назад +4

      😂

    • @wilsonkoman2829
      @wilsonkoman2829 3 месяца назад +2

      😮

    • @CyanDumBell_MC
      @CyanDumBell_MC 3 месяца назад +4

      and we're still in 2024
      who knows what 2025 has waiting for us

    • @2o3ief
      @2o3ief 3 месяца назад +7

      Yeah man, "we've" is past tense, you can't see the future......
      Really thought you did something there

    • @randommixedletters
      @randommixedletters 3 месяца назад

      @@2o3ief humor receptor not detected, please return to facility for further equipment.

  • @kleedrac
    @kleedrac 3 месяца назад +77

    Thor hit the nail on the head with the IT industry. If everything works some exec is saying "What are we paying those guys for?" and if anything goes wrong there's more than one exec saying "What are we paying those guys for?!?!"

    • @asmosisyup2557
      @asmosisyup2557 Месяц назад +9

      Just keep downsizing the team till they can barely cope with the minimum workload, because surely nothing will go wrong ever.

    • @michaeldeats328
      @michaeldeats328 Месяц назад +3

      To be fair, asking the question "what are we paying these guys for?" is literally the executives' job.

  • @chrishendry1031
    @chrishendry1031 3 месяца назад +969

    The thought that a fossilized judge who can't use Excel will be presiding over a massive case like this, needing the lawyers to bring out the crayons to explain even the basics of network design, makes me feel like it's a complete coin flip how it goes.

    • @listofromantics
      @listofromantics 3 месяца назад

      Umm... Not to give you nightmares, but the overwhelming majority of all politicians and executives are old AF, tech-illiterate Luddites.
      Our ENTIRE WORLD is ruled and governed by Boomers.
      THAT should scare you far more than ONE JUDGE.

    • @myboatforacar
      @myboatforacar 3 месяца назад +36

      Well, at least SCOTUS still has Chevron deference
      Oh wait

    • @phaaroyt
      @phaaroyt 3 месяца назад +35

      It's going to be less about understanding network architecture and more about digging through decades of case law to determine who has what percentage of fault. A judge doesn't need to understand how a program works, they just need to understand what previous courts have said about programs not working. Outages have happened all the time and there's well-established case law about outages, so we're not looking at novel case law here. That will make it pretty straightforward for the judge.

    • @NYKevin100
      @NYKevin100 3 месяца назад +46

      There are good judges and bad judges. I remember in the Google v. Oracle case, the trial judge actually taught himself Java in order to properly understand the case (and then the appeals court promptly screwed it all up, but at least somebody in the system was trying).

    • @Feedmagoo92
      @Feedmagoo92 3 месяца назад +6

      It was both hilarious and heartbreaking to watch the catastrophe ensue in the Rittenhouse case, where they had to discount "pinch to zoom" on iPhones, without anyone realising how badly that fucks up basically every court case involving digital video. - Kinda tempted to file an amicus brief explaining that video compression throws out 90%+ of the information in a video, so none of it can be trusted, and just watch what happens.

  • @ehntals1394
    @ehntals1394 3 месяца назад +259

    This is why we don't allow day zero updates from external sources. Also medical devices are isolated and do not get updates. Can't risk an update breaking a critical medical system.

    • @MachineWashableKatie
      @MachineWashableKatie 3 месяца назад +26

      Yeah, why would an MRI machine need to be networked

    • @MachineWashableKatie
      @MachineWashableKatie 3 месяца назад +13

      Same for a dialysis machine, I know DaVita was up because my dad could still go on Friday

    • @darylphuah
      @darylphuah 3 месяца назад +9

      Likely sales trying to get a nice big fat number of devices installed, and hospital administration ticking checkboxes that their "medical devices" are all secured

    • @TheFPSPower
      @TheFPSPower 3 месяца назад +14

      Well, that's where the big fuck-up came for CrowdStrike: many companies that did not choose to have zero-day updates got pushed the faulty definitions update anyway. To me this is 100% on CrowdStrike, because they fucked up on so many levels and Microsoft has perfectly documented the dangers of using kernel drivers at boot time.

    • @ehntals1394
      @ehntals1394 3 месяца назад +15

      @@MachineWashableKatie Well you still need to get the images off the computer into your patient records, but it shouldn't be talking to the internet. That's why they are run on a Medical Device VLAN and only get very specific access granted to reach out.

  • @jceggbert5
    @jceggbert5 3 месяца назад +223

    For the bitlocker issue: some people figured out how to manipulate BCD (Windows Bootloader) to put the system in something approximating safe mode - safe enough that crowdstrike doesn't load, but not so safe that Windows doesn't ask the TPM for the key. Probably 95% of the bitlockered machines can be recovered this way (my estimate).

    • @InternetsDadGaming
      @InternetsDadGaming 3 месяца назад +54

      I love that someone created something that sounds like a hacker breaking in through the window in order to repair the damage caused by the homeowner

    • @ThePamastymui
      @ThePamastymui 3 месяца назад +12

      That is some sort of next level Chinese-pot-balancing circus trick

    • @phobos258
      @phobos258 3 месяца назад +2

      IF you have the key this might work.

    • @jceggbert5
      @jceggbert5 3 месяца назад +15

      @@phobos258 most systems have the bootloader on an unencrypted partition and if you can get Windows to try booting into a recovery environment by failing boot 3x, you run a handful of bcdedit commands (most importantly, setting safeboot to minimal on the {default} entry) and reboot. BCD should be able to pull the key from the TPM because nothing important changed (no bios settings, no system file checksums) and boot into safeish mode. Then, you can delete the bad file, change safeboot back to the default setting, and restart.

    • @everythingpony
      @everythingpony 3 месяца назад +5

      Windows just patched this "vulnerability" and it auto-updates even on computers that are in a boot loop
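
A minimal sketch, assuming an elevated prompt, of the safe-boot workaround @jceggbert5 describes above. The channel-file path is the widely reported one, not something confirmed in this thread, so treat the whole thing as illustrative rather than an official remediation script.

    # Rough outline of the recovery flow: flag minimal Safe Mode, reboot,
    # delete the bad channel files while the sensor isn't loaded, then
    # clear the flag and reboot normally.
    import glob
    import os
    import subprocess

    # Assumed location of the faulty content files (widely reported path).
    CHANNEL_FILE_GLOB = r"C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys"

    def enter_minimal_safe_mode():
        # Mark the default boot entry for minimal Safe Mode. Because no
        # firmware settings or measured-boot components change, BitLocker
        # can usually still auto-unlock from the TPM on the next boot.
        subprocess.run(["bcdedit", "/set", "{default}", "safeboot", "minimal"], check=True)

    def remove_bad_channel_files():
        # Run this after rebooting into Safe Mode, where the sensor isn't loaded.
        for path in glob.glob(CHANNEL_FILE_GLOB):
            os.remove(path)
            print("removed", path)

    def leave_safe_mode():
        # Clear the safeboot flag so the next reboot is a normal boot.
        subprocess.run(["bcdedit", "/deletevalue", "{default}", "safeboot"], check=True)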

  • @AngryMan540
    @AngryMan540 3 месяца назад +102

    Having worked in IT, I always had the MBAs find out how much their departments cost to run per minute, and then account for how long a 3rd-party IT support company would take to respond. Now we can just point to this...

    • @AndyGneiss
      @AndyGneiss Месяц назад +3

      I enjoy your approach. Pointing out what time is worth and what downtime costs in order to advocate for keeping your support team around & equipped to support you seems obvious, but plenty of people seem to need it pointed out to them.

    • @AngryMan540
      @AngryMan540 Месяц назад +1

      @@AndyGneiss sadly, sometimes the snake in the grass is only obvious to you after it’s bit you

  • @Wiki7202
    @Wiki7202 3 месяца назад +109

    The channel Dave's Garage (Dave is a retired Microsoft developer from the MS-DOS and Win 95 days) did a good breakdown on one of the last questions Thor asked: why there was no check before the systems blue-screened on a driver issue.
    CrowdStrike apparently had elevated permission to run at the kernel level, where if there is a problem in the kernel, Windows must blue screen to protect itself and its files from corruption.
    Dave's video will do a better job explaining it than I could ever hope to, so I grabbed the link to it: ruclips.net/video/wAzEJxOo1ts/видео.html

    • @chrish4088
      @chrish4088 3 месяца назад +18

      There is also the aspect that CrowdStrike doesn't validate those changes through the whole WHQL process, to go faster. This is purely CrowdStrike's failure to validate input in kernel-level code, plus the fact that they didn't test properly. If you had done even one install test you would have seen that it tried to access an address that didn't exist and failed. At that point Windows has no option but to fail. There are plenty of things to talk about with how Windows has issues, but this is not one of them.
      The Microsoft update basically had nothing to do with the failure, so I just hope this VOD is late to the party, because many of the points they make about whether Microsoft contributed to the failure are basically counter to reality. If you play in kernel-level code without WHQL validation and fail to validate your data input, you fail. Even CrowdStrike's PIR basically says "stress testing, fuzzing and fault injection and stability testing". As someone that works heavily in the industry: they basically just admitted they don't do proper validation.

    • @jimcetnar3130
      @jimcetnar3130 2 месяца назад

      So a guy who collects a pension from Microsoft says it isn't Microsoft's fault. Sounds about right lol

    • @CinnamonOwO
      @CinnamonOwO 2 месяца назад +4

      ​@@jimcetnar3130he knows what he's saying.
      If you use Windows, you are using his work all the time. In fact Task Manager was HIS program and he thought about selling it as a 3rd-party app (he had a clause that allowed this) but decided to donate it to Windows; he's also responsible for the format dialog, the 32 GB FAT32 limit, and the shell extension that made viewing zip files just like any other folder possible. He's talking from knowledge.

  • @TylerOfTrade
    @TylerOfTrade 3 месяца назад +39

    You know it's serious when Thor doesn't pull out Microsoft Paint and instead grabs the bigger gun.

  • @KilothATEOTT
    @KilothATEOTT 3 месяца назад +68

    What this issue showed us at my emergency service center is that we don’t have robust enough plans for operation without computers.
    It’s helped us improve our systems and we are to a point now where we can totally operate without any computer or internet systems. We’re more prepared than ever now.

    • @andyv2209
      @andyv2209 16 дней назад

      My dad is a doctor and for years he joked that "computers are a fad and they're going to go away" and haaaated their involvement in his work, which just required him to do twice the paperwork most of the time... looks like, at least where you are, he could end up being right xD and I think it's better that way. Computers shouldn't be blindly trusted to securely hold such sensitive and private information, especially when the things being put on the computers are often things that sales and admin want to make money and profits off of, as is pretty much anything that crosses their paths.

  • @dowroa
    @dowroa 3 месяца назад +148

    As someone who has QE'd... there is also "we told you, but the business never thought this edge case was important".
    This appears to be the everyday, common "that isn't our failure" type of design failure that no one solves as there is no ROI on pre-solving these instances.

    • @ArthurAtlas
      @ArthurAtlas 3 месяца назад +3

      My thoughts exactly

    • @brandonfrancey5592
      @brandonfrancey5592 3 месяца назад +17

      I see parallels in other industries. The owners need to balance putting out a perfect product and a profitable product. At no point is software ever "done." There is always another edge case that needs to be addressed, a new exploit discovered. At some point a product needs to go out the door, otherwise there is no profit and no one has a job.
      You can see the same thing in construction where everything is fine, until it's not. Sure, a job is better with 10 guys on site, but that's not feasible, so it's done with 5 guys. Timelines are rushed, safety gets overlooked here and there, but most of the time it's fine. Then one day a bridge collapses. A tower crane falls over. An investigation will show where they went wrong, but if you ask the guys, they will tell you, "It was only a matter of time before this happened."

    • @Veretax
      @Veretax 3 месяца назад +1

      According to the RCA CS put out, they had multiple points of failure in the process:
      - Simulated/mocked examples of these updates relied on wildcards
      - The next update didn't allow them (but the test with wildcards passed and no one caught it)
      - A parameter out of bounds (was this in the kernel, or in CrowdStrike's sensor? Not clear on that)
      - They call it a content update, but it relies on a regex engine, and who hasn't seen regex hose you when something seemingly minor changes?
      There's more, I'm sure.
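
To make the test-coverage gap @Veretax lists above concrete, here is a tiny hypothetical sketch. The names and the 21-field count are loosely modeled on the public RCA but are illustrative, not CrowdStrike's actual code: a test whose only value for the new field is a wildcard never forces that field to be read, so it passes even though the delivered content is one field short.

    # Hypothetical: the template promises 21 fields, the shipped rule only
    # carries 20, and the sole test uses a wildcard for field 21.
    def interpret(rule_fields, criteria):
        # Compare each non-wildcard criterion against the rule's fields.
        for i, criterion in enumerate(criteria):
            if criterion == "*":
                continue                      # wildcard: field never touched
            if rule_fields[i] != criterion:   # out-of-bounds read when i == 20
                return False
        return True

    delivered_rule = ["ok"] * 20              # only 20 fields actually shipped

    # The test that "passed": the 21st criterion is a wildcard, so index 20
    # is never dereferenced.
    print(interpret(delivered_rule, ["ok"] * 20 + ["*"]))            # True

    # The first real content update with a concrete 21st value raises an
    # IndexError here; the equivalent out-of-bounds read inside a kernel
    # driver takes the whole machine down instead.
    print(interpret(delivered_rule, ["ok"] * 20 + ["notepad.exe"]))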

  • @jankrnac3535
    @jankrnac3535 3 месяца назад +53

    For those who want to blame Windows: the same thing happened with CrowdStrike a few weeks earlier on Linux, it's just that far fewer devices were damaged, so no one cared.

    • @AQDuck
      @AQDuck 3 месяца назад +20

      Right, but the affected Linux machines were able to recover _much_ quicker because of the way Linux handles kernel modules (and the fact that a kernel panic gives a hell of a lot more information about what _actually_ happened than a BSOD probably helped too).

    • @Immudzen
      @Immudzen 3 месяца назад +12

      @@AQDuck They were able to recover quickly because you could see which driver failed and while the system still crashed you could blacklist the driver at startup.

    • @wesley_silva504
      @wesley_silva504 3 месяца назад +13

      Dude, if
      Windows + CrowdStrike = huge problem
      and
      Linux + CrowdStrike = small problem,
      then Windows is the main cause.
      (Edit: can't believe people took this seriously. It's obvious CrowdStrike is the problem)

    • @Immudzen
      @Immudzen 3 месяца назад

      @@wesley_silva504 Most Linux systems don't use encrypted drives and that made the problem MUCH worse. Linux has also dealt with bad drivers for a long time and provides a way to blacklist drivers to keep them from loading.

    • @tophatsntales
      @tophatsntales 3 месяца назад

      @@wesley_silva504 Yeah, this is where I'm at with it.

  • @listofromantics
    @listofromantics 3 месяца назад +178

    This is why monopolies are bad. There's a VERY good reason the old adage of "Don't Put All Your Eggs In One Basket" has been around for as long as we've had chickens and baskets... it only takes ONE PROBLEM OR ACCIDENT to ruin everything.

    • @cloudyview
      @cloudyview 3 месяца назад +23

      Crowdstrike isn't even close to a monopoly, they have ~14% of their market
      Computer operating systems are largely a natural monopoly/duopoly... Developers don't want to create programs for multiple platforms, only the popular platforms get the apps, and the unpopular platforms die (see Windows Phone, WebOS, OS/2, etc.).
      Linux has been trying to break in for years, it's arguably fairly complete, but no one buys in, because the platform support from app developers isn't there.

    • @fred7371
      @fred7371 3 месяца назад +4

      @@cloudyview I believe he is referring to Windows. It is true that some systems tend to become a standard, and Windows is one, but the space has two others. Monopolies might be good for the consumer at the start, but they quickly turn sour, and for more than just the consumer. In an ideal world, we would have diversification with strong standards. But this isn't the ideal world.

    • @duo1666
      @duo1666 3 месяца назад +5

      @@fred7371 Problem is, multiple different OSes either wouldn't change much between each other, in which case there's no real point or difference in which OS is used anyway, so this would still likely occur, or there would be many different OSes with extremely different structures and nothing would be compatible between any of them.
      It's already a massive pain developing for Windows, Linux and Mac, and a pain dealing with every single possible configuration of base hardware that can be mixed and matched. Could you imagine adding another 30 different OSes? Security-critical devices would still all homogenize to some extent, and one OS and security program would still reign king.

    • @fred7371
      @fred7371 3 месяца назад +1

      @@duo1666 Correction: a pain for Windows and Linux; Mac and Linux share the same structure. Yes, you're right, I am aware of the conflict of standards that could arise from multiple OSes. But I'd also point out that if you enforce a baseline of standards, this is less of an issue (I added that for those who know what I am saying); you can see this in many industries.
      It is also less of an issue if you have to up the competition instead of imposing your way of doing things (funny, that's where we are at currently). We saw that with USB ports and the fact Apple was trying to get rid of them, or the charger debacle in the EU. That's just some examples; there's Google recently too.
      Ofc it won't be easy, but that's the spirit of competition, to try and do better. That's why monopolies are awful: they ruin everything, from the people creating the product to those left with no options.

    • @duo1666
      @duo1666 3 месяца назад

      @@fred7371 Monopolies are bad in capitalism. A centralized system that handles things on its own isn't bad. And capitalism and competition aren't exactly good either. Monopolies in capitalism exist because you can take the entire pool of cash, then constantly roll back the expenses that ensure a quality product, because the investment to start up is large enough that you can make a lot of money before that catches up, and then bankrupt the company, buy out the competition, and do it all over again.
      Realistically, the only real issue here is that everything auto-updated, so everyone was hit all at once, whereas the problem would have been more localized if everything didn't update at literally the same time.

  • @rangerreview1780
    @rangerreview1780 3 месяца назад +15

    This is much like what the NTSB goes through when an airline or train disaster happens. You won't know a fault or failure point, whether it's human, digital, or mechanical, until an event happens. That's why it takes them months to years to solve: there are so many factors to look at and potentially blame.

  • @jawredstoneguy6058
    @jawredstoneguy6058 3 месяца назад +12

    Oh wait, what? This is the first time I've not seen a comment on a video like this lol. This really puts into perspective how bad this was for many people. I, fortunately, was not as affected by it, but many people were. I'll be praying that the IT people can get a break / are appreciated more as a result of this.

  • @kingofl337
    @kingofl337 3 месяца назад +34

    According to Dave's Garage the issue was 100% CrowdStrike. They sent an empty package and the driver couldn't handle the problem.

    • @Stonegolem6
      @Stonegolem6 3 месяца назад +7

      Yeah, this was recorded early in the debacle, first couple days, and info was not great.

    • @jeffwells641
      @jeffwells641 3 месяца назад +15

      Yep, Crowdstrike was doing fucky shit with their EXTREMELY SENSITIVE BOOT CRITICAL drivers.
      Even if a Windows update broke the driver, it broke the driver because Crowdstrike was doing fucky shit they weren't supposed to be doing with their driver.

    • @MotherScreng
      @MotherScreng 28 дней назад

      who wrote the driver with no error handling? MS

    • @kingofl337
      @kingofl337 24 дня назад

      @ CrowdStrike bypassed WHQL testing in the way they wrote the driver.

  • @vipast6262
    @vipast6262 3 месяца назад +27

    Why isn't there any blame on the companies themselves? I work in IT, and my previous company used to test all Windows updates and software updates with a 48-hour test before allowing them to push out to the rest of the systems. The current company does not do this and was hit with the CrowdStrike issue. My PC was not affected because I disable update pushes on my system and do them manually. I was advocating to start doing smoke tests before allowing update pushes ahead of time... before this happened. NOW, after half the systems went down, they decided to add it to the process.

    • @jasam01
      @jasam01 3 месяца назад +6

      The perception here is that since it's /security/ software explicitly used to handle close-to-realtime issues, 48 hours is 48 hours vulnerable. If it were anything else this wouldn't fly. Nor would it have such low-level access to bugger things up in the first place.

    • @CD-vb9fi
      @CD-vb9fi 3 месяца назад +7

      You can't do that with CrowdStrike. The entire purpose of CrowdStrike is that you are paying them as a customer for that kind of "due diligence". I work in IT. If I have to start managing CrowdStrike like OS patching, with staggered roll-outs, then they suddenly become a lot less appealing to pay for. CrowdStrike is all about being as fast as possible with threat analytics. You can't tolerate a lag, because once an exploit hits... you need to get in front of it as soon as you can. Being this close to the bleeding edge is just going to carry that risk.

    • @vipast6262
      @vipast6262 3 месяца назад

      @CD-vb9fi Ah, but my good sir, you just ruled out your own statement: you stagger the roll-out of OS patching. If this had been done with the OS patching, the hit would not have been as bad, as they would have caught the boot loop / OS update issue with CrowdStrike's latest patch. As stated in this episode, it only failed when coupled with the latest OS patch, not the initial release. That would mean the due diligence was on the companies themselves for not verifying that the latest OS patch did not have any conflicts. This is why Microsoft also does rolling OS patch deployments. The problem here is not only that MS and CrowdStrike failed QC, but so did the companies' own IT QC processes. CrowdStrike should be testing on preview builds of OS deployments as well; if they are a partner, they have access to all future builds. All in all, this could have been easily avoided if the industry had better practices as a whole.

    • @vipast6262
      @vipast6262 3 месяца назад

      @jasam01 You guys seem to miss the nuance of the OS patch. CROWDSTRIKE WORKED, then the new patch released, then bluescreen/boot loop. If you smoke test your configs for 24 to 48 hours with your common user test suite before auto-releasing OS patches to your IT infrastructure, you would have been fine, because you'd know that hey, something happened after this OS patch. There is a reason Microsoft lets IT manage the patches; not all software plays nice with every PATCH.

    • @jasam01
      @jasam01 3 месяца назад +1

      @@vipast6262 You said OS updates AND software updates; we were talking about the latter, because CrowdStrike has a relatively unique position compared to the standard expectations.
      It's worth noting that no amount of smoke testing the OS patch would have saved anyone if CrowdStrike were to do even worse and push a blue-screening update that triggered on the current OS version.

  • @Bzuhl
    @Bzuhl Месяц назад +2

    That's where IT reservists can be a good governmental program: you vet local IT professionals and get them familiar with emergency services and critical infrastructure systems. Then they can jump in to support critical places that don't have enough IT staff to face a big attack or a catastrophe like this.

  • @seasnaill2589
    @seasnaill2589 3 месяца назад +3

    9:15 Safety regulations are written in blood. Oftentimes you don't know what you need to prepare for until after it happens.

  • @opensourcedev22
    @opensourcedev22 3 месяца назад +9

    I know of a 150B+ market cap business which was doing a full Windows recovery rollback to a version before the update was pushed out. The BitLocker-locked machines had to have a person on site accompanying the remote admins. Their billed time and losses are in the hundreds of thousands.

    • @vollkerball1
      @vollkerball1 3 месяца назад

      Where I work we have test machines for updates. Only after the updates get tested are they released to the other machines (Apple and Windows alike).

    • @opensourcedev22
      @opensourcedev22 3 месяца назад

      @@vollkerball1 For sure, that makes sense. The place I'm referring to can't be named, it's that huge. It's also managed by pleb suits.

    • @CD-vb9fi
      @CD-vb9fi 3 месяца назад +1

      The thing is... it was all avoidable if they did disaster recovery right. But who does that? I have been doing IT for over a couple of decades... two things are never taken seriously but are always claimed to be serious... security and DR.

  • @michaels.3709
    @michaels.3709 3 месяца назад +3

    10:00 - "It was a blind spot" makes sense from a QA perspective, but... clearly we, as a society, can't be having software systems with billion-dollar costs attached to them.

  • @Draenal
    @Draenal 3 месяца назад +13

    Bad idea to push this dedicated clip well after Crowdstrike released their RCA confirming they're at fault.
    Really should take this down or preface the clip with something indicating this was filmed within days of the outage occurring.

  • @neosmagus
    @neosmagus 3 месяца назад +1

    My personal feeling, being within the IT/Development space is that liability is also shared by the end users. Of course it sucks if an update brings your system down, I've been in those kind of situations when it was just an update for some small but important software we were using. But you need to have adequate backup plans in place to quickly recover, don't always force the latest updates immediately without testing, and have a system down plan in place where you can. I feel bad for everybody that has suffered because of this, but in terms of strengthening processes going forwards, this was an important lesson. We've handed our lives over to the concept of an always available infrastructure that can be brought down within minutes with very few alternatives in place.

  • @Martinit0
    @Martinit0 2 месяца назад +1

    About hand-written airplane tickets: About a decade ago my intercontinental flight had to be rerouted and they issued me hand-written tickets for the three legs it would have. At the time I expected I would get stranded in Bangkok - thought no way that ticket will be recognized as valid. But lo, there was no problem in Bangkok - in fact Thai Airways staff was waiting at the gate and hustled me to the connecting flight. On my next stop in Singapore they again had staff waiting for me at the arrival gate, let me bypass the entire 747 queue at the x-ray machine and drove me directly to the departure gate.
    I thought that was peak organization by the airlines involved (all Star Alliance). They didn't even lose my baggage (which is more likely when connecting times are short).

  • @BosuDX
    @BosuDX 3 месяца назад +65

    This, right here, is why a monopoly is bad. CrowdStrike has a monopoly on the IT security market when it comes to locking down and managing systems. And it broke. Whether it was Microsoft's fault or CrowdStrike's, it doesn't matter. Something broke that made systems with CrowdStrike go down, and the world stopped for a day because of it.

    • @brandonfrancey5592
      @brandonfrancey5592 3 месяца назад +3

      A monopoly in and of itself isn't a bad thing, but with any monopoly there needs to be regulation and accountability.

    • @Skewrz
      @Skewrz 3 месяца назад +2

      Not entirely sure, but wasn't there a "SolarWinds" or some such that Google had a problem with a year and change ago? I remember a similar problem to this happening (though not nearly as severe) because ALL of Google's services went down for a few hours and things basically stopped for the day as they removed it from their systems.

    • @alexholker1309
      @alexholker1309 3 месяца назад +4

      Even if you have multiple providers of this kind of service, any given organisation is going to use one or the other. You're not going to have half the hospital using CrowdStrike and the other half using StrikeCrowd unless you know that each has advantages in its niche that outweighs the added hassle of using two different service providers.

    • @cloudyview
      @cloudyview 3 месяца назад +6

      Crowdstrike isn't a monopoly, they have (had?) ~14% of the market. It's a substantial share, which is why this was so widespread, but it's not even close to a monopoly

    • @Illiminator31
      @Illiminator31 3 месяца назад +3

      Please don't talk when you don't have any idea about the IT security landscape, okay? CrowdStrike is as far from a monopoly as McDonald's is from healthy food. Microsoft's own Microsoft Defender XDR / Sentinel has way more share than CrowdStrike has; heck, even Kaspersky has more market share than CrowdStrike.

  • @InvictusRed1911
    @InvictusRed1911 3 месяца назад +5

    Crowdstrike is a sample of what we thought Y2K was going to be.

  • @corvidshaman
    @corvidshaman 3 месяца назад +43

    For those wondering, this was from July. On Aug 6th CrowdStrike put out a blog post called "Channel File 291 Incident: Root Cause Analysis is Available" where they admitted they were 100% responsible. This actually is not the first time it happened. It's not even the first time *this year*.

    • @MunyuShizumi
      @MunyuShizumi 3 месяца назад +11

      This is misinformation. No part of that report admits to 100% of the responsibility. Out of the 6 described issues, only #6 (staged deployment) is something that is solely within Crowdstrike's domain. Based on the report alone, other issues can easily depend on external factors such as Microsoft-defined APIs that are not expected to suddenly change. The described mitigations can be seen as _additional_ precautions, not as something that is required of a Windows kernel driver.
      In fact, they explicitly mention passing WHQL certification, and it isn't specified if Microsoft's or Crowdstrike's update broke a specified API standard. Maybe Crowdstrike relied on something that wasn't formally defined but practically remained unchanged for a long time. Maybe Microsoft failed to specify which APIs are changing with the update and communicate it to partners on time. Maybe both messed up.
      While it does look (at first glance) like Crowdstrike's regex shenanigans are to blame, I can't help but remember the displeasure of dealing with Microsoft updates shutting down production due to server and client protocol updates being released simultaneously without a deprecation announcement, effectively killing all clients at once until the server gets updated (while provider admins are napping) or we blacklist a sudden Windows update. Extremely minor incident in comparison (~2h downtime), but Microsoft also breaks stuff from time to time, and it's entirely possible that Microsoft released something that was incompatible with their own WHQL certification.
      We really need more information before we blame Crowdstrike for absolutely everything. They definitely made mistakes, but it's possible it isn't entirely their fault. And so far they haven't admitted it's solely their fault.

    • @corvidshaman
      @corvidshaman 3 месяца назад

      ​@@MunyuShizumi
      When you say "it isn't specified if Microsoft's or Crowdstrike's update broke a specified API standard." here's literally the first line of the Root Cause Analysis:
      "On July 19, 2024, as part of regular operations, CrowdStrike released a content configuration update (via channel files) for the Windows sensor that resulted in a widespread outage. We apologize unreservedly."
      You said: "other issues can easily depend on external factors such as Microsoft-defined APIs that are not expected to suddenly change"
      Microsoft wasn't involved at all. All six issues were wholly within Crowdstrike's domain. If you read the RCA, you would know that the root cause was "In summary, it was the confluence of these issues that resulted in a system crash: the mismatch between the 21 inputs validated by the Content Validator versus the 20 provided to the Content Interpreter, the latent out-of-bounds read issue in the Content Interpreter, and the lack of a specific test for non-wildcard matching criteria in the 21st field." I'm not sure where you're getting that a Microsoft API changed anywhere in that document. It was Crowdstrike software attempting to read an out-of-bounds input in a Crowdstrike file, sending Windows into a kernel panic.
      You said: "Maybe Crowdstrike relied on something that wasn't formally defined but practically remained unchanged for a long time"
      Both the Falcon sensor and Channel Update file 291 are Crowdstrike software - not Microsoft. Issues 1, 2 and 4 described what Crowdstrike's Falcon sensor didn't do. Issues 3 and 5 are gaps in their test coverage (gaps is an understatement). Issue 6 they didn't do staged releases leading to a much more widespread issue.
      You said: " and it's entirely possible that Microsoft released something that was incompatible with their own WHQL certification."
      The WHQL certification is only certifying the Falcon sensor, not the update files - thus it's irrelevant to the root cause. The issue isn't that they didn't change the sensor software as that would require a new WHQL certification testing process, it's that they changed what the sensor was ingesting. It's like saying I have a certified original Ford car, but then I'm putting milk in the gas tank and wondering why the engine is bricked.
      You said: "We really need more information before we blame Crowdstrike for absolutely everything. They definitely made mistakes, but it's possible it isn't entirely their fault. And so far they haven't admitted it's solely their fault."
      They literally did. What more information do you need? They released the full RCA. There's no more information to be had. Crowdstrike pushed a buggy update that bricked millions of systems, resulting in trillions in damages, and almost certainly led to deaths (emergency services and hospitals were offline for many hours).

    • @corvidshaman
      @corvidshaman 3 месяца назад

      ​@@MunyuShizumiNo, it isn't misinformation. Go reread the RCA. Literally the first line is "On July 19, 2024, as part of regular operations, CrowdStrike released a content configuration update (via channel files) for the Windows sensor that resulted in a widespread outage.
      We apologize unreservedly."
      Microsoft isn't involved in the incident beside the fact it's a windows machine.
      It was Crowdstrike software trying to access an out-of-bounds input from a bad config file that somehow passed all of Crowdstrike "testing". All aspects involved are Crowdstrike.
      The certification is irrelevant. That's only for the Falcon sensor itself, not the inputs. That's like getting a certified new car from Ford and pouring milk in the gas tank and blaming Ford for making a bad car.
      All issues are CrowdStrike's fault. None of the issues in the RCA have anything to do with Microsoft. 1, 2, and 4 are what the Falcon sensor failed to do. 3 and 5 are (enormous) gaps in test coverage. 6 is the lack of staged releases.
      What more info do you need? There's not going to be any. This is it. This is literally the document that says what happened and why. Microsoft didn't change anything, Microsoft isn't really mentioned besides the certification and the pipes.

  • @MindbendStudios
    @MindbendStudios 2 месяца назад +2

    I work in IT. We don't use CrowdStrike, but this kind of issue is not unique in this space. We use Carbon Black, and we've discovered that the people who control its configuration have INSTANT access to all configs on any PC that uses it. One time someone caused the software to BLOCK critical software we use, and we could not run the business for 20 minutes until someone turned off whatever setting they'd used. Currently, they have blocked specific browsers, but we can't do anything with them, not even uninstall them. So they are broken pieces of software stuck on the PC.
    This problem is probably more related to a lack of knowledge and communication with those that control the backend, and then we are moving to the cloud in a year or two. I absolutely HATE the state of IT right now; it's damn scary to have uninvested people in charge of our infrastructure.

  • @adriianeut
    @adriianeut Месяц назад +1

    I was there working the night shift for our EMEIA branch; the shift's from 10pm to 7am. I left at like 1pm that day. The first hours were frustrating because we had no instructions. Then instructions were found on Reddit, and it took a few more hours to approve and implement them. And yes, it lasted weeks, because people who were on PTO/vacation that day came back to work with this issue on their laptops. Fortunately the affected people were nice.

    • @adriianeut
      @adriianeut Месяц назад

      Our help line queues were MASSIVE that morning and did not relent, as expected.

  • @LiminalityCarb
    @LiminalityCarb 3 месяца назад +2

    It would be helpful in the future to have a date marker for when these conversations occurred. This feels like it happened shortly in the wake of CrowdStrike's outage, but it doesn't appear to be stated anywhere (that I can see, at least; I could be missing it).

  • @excubitor3440
    @excubitor3440 3 месяца назад +1

    This whole process about prevention Thor describes reminds me of episodes of Air Crash Investigations where something goes wrong in a niche case that in hindsight is completely preventable with the smallest change to something...but you never knew it needed to be changed because nothing like it had ever happened before.

  • @kalvorax
    @kalvorax 2 месяца назад

    I was one of dozens of field techs doing contract work for The Men's Wearhouse. 200 of their stores had just had pin pads upgraded from USB to Ethernet server connections. That server was affected by this. I personally went to the 6 stores I had upgraded and got them all fixed that same day.
    They could still do sales, but it was 15 minutes per customer until I got the server fixed. Once I got the BitLocker key for each server, it was a breeze, but for the first 3 stores I had to wait 45 to 90+ minutes PER store in the queue to get the BitLocker key. Was easy to fix, but whew... that was a fun Saturday.

  • @StingSting844
    @StingSting844 3 месяца назад +3

    A friend of mine had to courier his laptop back to his office to get it fixed. In total it was 5 days of time wasted

  • @Billis75
    @Billis75 3 месяца назад +1

    I was half involved in our company's recovery (I didn't perform the fix, but I prepped server restores from pre-update). It seems like "blame" should be easy to determine through contracts. I don't know if a jury or judge would need to be an IT person, they could literally just sit through a Thor-style MS Paint session, since the problem is logical in nature.

  • @timogeerties3487
    @timogeerties3487 2 месяца назад

    9:06 It's like the line in shooters and strategy games.
    "It's never a warcrime the first time."
    Warfare with chemical weapons wasn't an issue before chlorine gas filled the trenches. Posing as/attacking dedicated field medics wasn't ever a problem to be considered before the red cross came to be.
    And now, these IT-problems were too obscure for the end user and original developer to notice.

  • @BluewolfGameStudio
    @BluewolfGameStudio 2 месяца назад

    10:50 There was a saying from a computer expert in Germany during the introduction of De-Mail: "For every technical problem there is a judicial solution." The law that was created basically stated that messages decrypted on a server while in transit are not to be considered "in transit" for the purposes of that decryption (otherwise the law would be in breach of another law on data protection). And the next sentence was: "No government is stupid enough to give their people a means of communication that can't be spied on."

  • @thedude4840
    @thedude4840 3 месяца назад +1

    2:04 Can confirm: in AZ the entirety of the fire/PD dispatch system was back to radios and manual calls.

  • @loop8946
    @loop8946 2 месяца назад

    As someone who works in IT at a medical office, there is a reason why I have the systems set up not to update for a minimum of three days. That being said, we also have a few systems that haven't been updated in years; to be fair, they're isolated, but the point is that in the medical industry, from everything I've seen (at least at smaller offices), updates are resisted and only done out of necessity.

  • @kellymoses8566
    @kellymoses8566 3 месяца назад +50

    He is wrong. The fault was 100% CrowdStrike's. They changed a function to take 21 arguments but only gave it 20, and it wasn't coded to handle this error, so it exited with an error; and since it was running in the kernel, Windows stops to prevent data corruption, and CrowdStrike is a boot driver, which means Windows won't boot if the driver doesn't load.
    The stupidest part of this is not checkpointing a working config and automatically reverting to it.

    • @huttj509
      @huttj509 3 месяца назад +5

      When was this interview, and when did the information you mentioned become available?
      Because I've been seeing clips from what looked like this interview for weeks, dunno how the timeline actually works out

    • @gljames24
      @gljames24 3 месяца назад +2

      Linux handled it fine and it was able to easily recover from a defective kernel module. Microsoft still has some blame for bricking windows if the kernel module fails.

    • @nulano
      @nulano 3 месяца назад +4

      @@gljames24 It was Crowdstrike that marked their driver as required for boot.

    • @321guyver
      @321guyver 3 месяца назад +6

      @@huttj509 The original podcast was on July 21st. So well over a month ago.

    • @jehhhGames
      @jehhhGames 3 месяца назад

      Great. So your info could be moments old. And this was broadcast the day of? Day after? Of course no one had all the info.
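
A rough sketch of the "checkpoint a working config and automatically revert" idea from @kellymoses8566's comment above, written as ordinary user-space Python. The file names are made up, and a real kernel driver would need this logic built in before the crash point; this only illustrates the shape of the fallback.

    # Illustrative only: keep the last content file that loaded cleanly and
    # fall back to it when a freshly delivered one blows up during parsing.
    import shutil

    ACTIVE_CONFIG = "channel-291.cfg"          # hypothetical file names
    LAST_GOOD_CONFIG = "channel-291.last-good"

    def parse_config(path):
        # Stand-in for the real parser; raises on malformed content.
        with open(path, "rb") as f:
            data = f.read()
        if not data or data.count(b"\x00") == len(data):
            raise ValueError("config is empty or all zeroes")
        return data

    def load_with_fallback():
        try:
            rules = parse_config(ACTIVE_CONFIG)
        except Exception:
            # New content is bad: revert to the checkpoint instead of dying.
            shutil.copyfile(LAST_GOOD_CONFIG, ACTIVE_CONFIG)
            rules = parse_config(ACTIVE_CONFIG)
        else:
            # New content loaded cleanly: it becomes the new checkpoint.
            shutil.copyfile(ACTIVE_CONFIG, LAST_GOOD_CONFIG)
        return rules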

  • @badsequence
    @badsequence 3 месяца назад

    I work for my local county government and we use CrowdStrike. Thankfully we only had about 1400 desktops and about 500 laptops to fix. We were up to 80% fixed by the end of the day Friday, when this happened. That was with all 4 techs, plus the radio comms techs (2 more) and 2 sysadmins, all going to each and every PC in the county. Some departments have offices 60-80 miles away from our main office.

  • @Nemesis-pe7mw
    @Nemesis-pe7mw Месяц назад +1

    In this case it was actually 100% preventable by proper processes, which we do not follow due to costs... Any system this important should have a clone where updates get tested before they get deployed to the actual machine. This process is widely known as DTAP: Development/Test/Acceptance/Production. The patch should be automatically deployed to Test and checked in an automated manner to see if the system still runs; in this case it would have failed to even boot. Then to Acceptance, to see if it still works functionally. Once you've done this, you go to Production.
    If you say this, almost any manager nowadays will tell you, "yeah, but what are the chances!" Well, not that high, but hey, the impact is disastrous, as we've proven once again!
    Honestly, I'd say the fault is how we approach stuff like this in the first place. We have companies creating imploding submarines, we have Boeing, etc... At some point you have to ask yourself: is this entity at fault, or is it all of us for allowing them to be this faulty for the benefit of a few people's personal wealth? Because that is what it comes down to in the end.
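
A bare-bones sketch of the DTAP-style gating @Nemesis-pe7mw describes above. The ring names and the health check are placeholders, and a real pipeline would hang far more off each stage; the point is just that an update only reaches the next ring after the current one comes back healthy.

    # Illustrative staged rollout: promote an update ring by ring, and stop
    # the moment a ring fails its health check.
    RINGS = ["test", "acceptance", "production-canary", "production"]

    def deploy_to_ring(update, ring):
        # Placeholder for pushing the update to every machine in the ring.
        print(f"deploying {update} to {ring}")

    def ring_is_healthy(ring):
        # Placeholder health check, e.g. "did the machines reboot and report in?"
        print(f"checking {ring} health")
        return True

    def staged_rollout(update):
        for ring in RINGS:
            deploy_to_ring(update, ring)
            if not ring_is_healthy(ring):
                # Halt here: only this ring is affected, not the whole fleet.
                raise RuntimeError(f"halting rollout: {ring} failed health check")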

  • @torinira.9269
    @torinira.9269 3 месяца назад +58

    It's probably going to be shared liability between CrowdStrike and Microsoft, but it's what percentage each gets that is up in the air.

    • @quiotu
      @quiotu 3 месяца назад

      Oh... I guarantee you, most of the liability is going to be thrown at Microsoft at first. And it has nothing to do with their real liability or how much evidence there is of fault... it's that Crowdstrike has a net worth of $65 billion and Microsoft has a net worth of over $3 TRILLION... people are going to sue the company that's likely to throw them the most money to go away.

    • @goomyman23
      @goomyman23 3 месяца назад +8

      How is it Microsoft’s fault?

    • @estamnar6092
      @estamnar6092 3 месяца назад

      I don do that crystal math, but jus 100% both of em xD

    • @jaykoerner
      @jaykoerner 3 месяца назад +2

      Can Microsoft argue that they were forced to create an unsafe environment by the EU, who mandated they allow third-party companies to have kernel-level access for the purposes of anti-malware and the like? Microsoft had a plan in place to allow third-party antivirus to just hook into the same kernel-level driver they use for Windows Defender, but that was ruled monopolistic by the EU.

    • @4P0CA1YP5E
      @4P0CA1YP5E 3 месяца назад +1

      @@goomyman23 "How is it not Microsoft's fault?" is a better question.

  • @jacob_90s
    @jacob_90s 3 месяца назад +1

    When was this recorded? I just went back and reread CrowdStrike's post mortem and they don't mention anything about a Windows configuration change, just a regex issue.

    • @Necrowarp_
      @Necrowarp_ 2 месяца назад

      This was recorded pretty soon after it happened, so the information was not fully out yet; at this point we know the fault was fully on CrowdStrike.

  • @ronjohnson6916
    @ronjohnson6916 3 месяца назад

    Different field (home inspections), but "If nothing happens, nothing happens" is very much on point here. We get new safeguards only after something damaging happens. This failure was predictable and I'm sure you can find plenty of people who predicted it. But it's extremely rare for action to happen because a very bad outcome is theoretically possible.

  • @UnstoppableSmooze
    @UnstoppableSmooze 3 месяца назад +54

    This is why I hate Windows forcing updates on systems. You can't test stability on an isolated box before the entire office gets an automatic update pushed to every box.

    • @FFXfever
      @FFXfever 3 месяца назад +17

      In the enterprise space, you can set when updates happen. So you can roll out an update, or test updates in a VM.
      The issue is that CrowdStrike has the ability to ignore that, for some reason. Something about that particular update was considered a critical update.
      So even if your IT team was doing their due diligence, Windows and CrowdStrike fucked it up, because as you said, they forced it to happen.
      It's fucking stupid that all users are essentially beta testers.

    • @dunderdotten
      @dunderdotten 3 месяца назад +3

      Ever heard of WSUS?!?!

    • @CD-vb9fi
      @CD-vb9fi 3 месяца назад

      @@FFXfever Everyone is a Beta Tester, spot on!

    • @anon-fq3ud
      @anon-fq3ud 3 месяца назад

      Immutable Linux is perfect for enterprise stability. It's basically impossible to break, and if you do manage to break it, simply revert to the previous system image.

    • @Deadgye
      @Deadgye 3 месяца назад

      @@FFXfever Even in an enterprise space you still cannot prevent windows from performing updates tagged "critical", unless you configure the OS to use a custom windows update server.

  • @xazorus9229
    @xazorus9229 3 месяца назад

    First stories: Looking at the actions people took with the benefit of hindsight. This typically attributes blame to individuals and not systems.

    Second stories: Looking at the system and seeing WHY mistakes were made. This approach makes the reasonable assumption that everyone is a rational actor and makes the best decisions they can with the information they have. Using this approach typically allows organizations to grow and improve, tackling the real root causes of problems instead of taking the easy route of blaming individuals.
    Karin Ray (on a discussion by Nickolas Means)

  • @sofielee4122
    @sofielee4122 3 месяца назад +1

    Well, *hopefully* this means that airports finally update their infrastructure. All of aviation is incredibly behind the times, but especially so our computer infrastructure; it's ridiculous.
    Edit: yeah, that's the thing. Processes and procedures often get written in blood, and I suspect this will be one such case. There are many like this; 14 CFR is one that is chock full of them, and they will continue to happen as long as technology improves.

  • @nightmarishendeavors
    @nightmarishendeavors 2 месяца назад

    I am a senior systems engineer, and thankfully we only had 30 servers affected because we primarily use Carbon Black. We did have a few servers in Azure, and it was a pain in the neck to fix those: we unmounted the disks from the affected VM and attached them to another VM to get the file deleted, then had to move them back.

  • @rjc862003
    @rjc862003 3 месяца назад +9

    CrowdStrike is to blame: their bad update definition caused their kernel-mode driver to crash. There was no Microsoft update that set this off; the issue occurred when the machines rebooted for a routine Windows update, at which point the CrowdStrike kernel-mode driver ate its own tail and took the OS with it. If Microsoft has ANY fault, it's that they should have put a stop to kernel-mode drivers as a whole years ago, but the 3rd-party AVs balked at that and cried antitrust, and now people are dead because of it.

    • @glenmonks4489
      @glenmonks4489 3 месяца назад +5

      This comment matches my understanding of the situation. I'm confused as to why this video is still up, as I can find no source relating to a related Windows update that caused, influenced or even corrected (after the event) this matter. If anyone has such a source, please share it. I'm aware of an unrelated Azure bug in the US Central region that day.

    • @cslwatkins
      @cslwatkins 2 месяца назад

      I believe the other issue was how CrowdStrike deployed the update to bypass the Microsoft verification: the kernel program got the code from outside the kernel, so that code didn't need the verification.
      I also believe that they bypassed the staging options that people had set up, which would have reduced this to virtually no impact.

  • @Skewrz
    @Skewrz 3 месяца назад +12

    The blame could lie in forcing updates, which every company seems to want to do nowadays; back in Win7 updating was optional, and so far it still is with most software. I've been with Thor on his "I am the Administrator, you are the machine" bit, as I HATE getting updates shoved on me, because it seems like every few updates something breaks, and they seem to insist on shoving phone games and bloatware onto your device. We would be well served by making updates opt-in by default again, and you can schedule an IT guy to check every week or month or whatever their policy is.

    • @justicefool3942
      @justicefool3942 3 месяца назад +2

      CrowdStrike has requirements for using their systems, including mandating updates. They do this because if you don't keep everything up to date, you are exposing your services to unnecessary risk that they may not be able to defend against (at least the non-updated versions of their software can't), and because of that they wouldn't be held liable if something happened.

    • @devinaschenbrenner2683
      @devinaschenbrenner2683 3 месяца назад +1

      @justicefool3942 But we come back to the fact that Windows updates are notorious for breaking random-ass shit, for an update that I could guarantee you didn't need to happen.

    • @TheFPSPower
      @TheFPSPower 3 месяца назад +1

      Forcing updates is a blessing for everyone, you don't value it until you meet a company that blocked updates and every machine is still on Windows 10 Redstone 2.
      You'll curse so many people the day you have to update all those machines because software doesn't work anymore and now 1 out of every 3 machines starts bluescreening or locking up because the drivers are completely fucked up and outdated.

    • @_Khaine_
      @_Khaine_ 3 месяца назад

      @@devinaschenbrenner2683 Remember WannaCry? Only out-of-date systems were affected. The broad attack came in May, but the fix was already out in March. Most private machines were safe because of auto updates, but a lot of companies were screwed because they never got the fix.

    • @giornikitop5373
      @giornikitop5373 3 месяца назад +3

      Not really. Large companies use enterprise versions of Windows; you can stop/start/schedule updates whenever you want. But CrowdStrike needs the updates for security, or else what's the point in having CrowdStrike software (or any other similar one) with an OS full of holes.

  • @MJSGamingSanctuary
    @MJSGamingSanctuary 2 месяца назад

    As a former QA video game tester myself, the most difficult check in QA is CROSS-CHECK VERIFICATION. It's pretty clear that this probably was a situation where the update properly ticked EVERY box except the cold-boot restart. Because 99% of the time, the most easily explained answer in testing is the real issue.

  • @Corbs3
    @Corbs3 Месяц назад +1

    If I had to point to a likely blame, I would lean toward Windows. macOS prevents issues like kernel panics with features such as System Integrity Protection (SIP); Windows doesn't have the same safeguards in place. I'm not saying macOS is inherently better, but it shows that it's possible to protect against these types of vulnerabilities.

  • @pencilcheck
    @pencilcheck 3 месяца назад +1

    I would like to see where Thor got his account of Windows pushing an update that crashed the configuration for named pipe execution, since that's what CrowdStrike claims they did (updating the channel files).

  • @Tyberes
    @Tyberes 3 месяца назад

    I think how it's gonna play out is along the lines of "once a certain number of people depend on you, you need to start announcing your rollouts a certain amount of time in advance."
    Certainly from a liability standpoint MS gets to say "well, we told them we were rolling this out 2 weeks ago or whatever, they should have checked."

  • @paulschaaf8880
    @paulschaaf8880 3 месяца назад

    For any kind of large-scale upgrade, you test the update on 1 machine running whatever apps are important to you to validate the update. Only when that testing is successful do you ever update the rest of your stuff. You also always have a method of rolling back if it fails, tested and verified to work. So a failed update should never disrupt anything for longer than a few minutes. Also, terminal servers exist. Anything I can do consoled directly into a machine I can do just as well from 3000 miles away, other than maybe having someone physically power cycle it so I can interrupt the boot sequence to get in. There's no way I could do my job if I had to fly all around the country constantly to physically console into stuff.

  • @xantishayde-walker4593
    @xantishayde-walker4593 2 месяца назад

With Thor talking about the courts deciding who's at fault, I just think of the saying "I didn't say you're the one who's responsible, I just said I'm blaming you."

  • @MrMcbear
    @MrMcbear 2 месяца назад

I was working 911 that night. We had no computers, no CAD, nothing except phones. No text-to-911, and half our services were crippled. You should see how difficult it is trying to obtain people's locations during a traumatic event with them screaming mad at US that the systems were down. I was there with a pen and paper and Google Maps on my phone. Nothing we could do except deal with it until shift's end.

  • @PilviKujanen-xr6pc
    @PilviKujanen-xr6pc 3 месяца назад +3

Does Microsoft use CrowdStrike, and if they do, how did they not test their own OS updates against their own systems? Pushing updates straight to production does not sound like something an OS used for military etc. purposes should be doing.

  • @apocryphgaming9995
    @apocryphgaming9995 3 месяца назад

    What they haven't discussed and is going to be equally relevant is that because this is/was a global issue, not a national one, SCOTUS is ultimately just one Court. There are lawsuits being filed in the UK, in France, in Germany, in Australia, Japan, China, South Africa, Argentina - they're not all going to be resolved by one court, and SCOTUS' ruling does not set a precedent outside of the US (though I imagine the verdicts of some of the courts that resolve this soonest will be used as part of the argument of whichever party the ruling favours).
    The investigation will take months.
    The legal fallout? Years. Closer to decades.

  • @Pretzulkj
    @Pretzulkj 2 месяца назад

    I work for a medical device company and while none of our devices were directly impacted by this issue (none of them are Windows machines themselves or require a Windows machine for them to be used), the work computers utilized by the vast majority of the company WERE impacted (very few Mac/Linux users because you need to receive special permission for IT to issue one).
    Among my team of software developers only about 10% of them had functional computers the morning it all went down. I was "lucky" enough to be one of them and all the others in a similar position had one thing in common - the night before when we left the office all of our computers were disconnected from the company network and instead connected on our own small personal networks without internet access used to separate off prototype devices.
    Everyone with their computers connected to the main company network had the updates pushed to their machines automatically that night. I don't fault the IT department for that at all because honestly some users' computers would never get updated otherwise, but it was the one instance where a near-universally beneficial IT policy happened to have an unexpected outcome that turned it into a liability instead.

  • @theotherdave8013
    @theotherdave8013 3 месяца назад

Our checkout machines died off; we could only take cash. People went into almost panic mode, like "omg I need to get my cash out of the bank!" It was about to spiral out of control in retail. I can see this having long-lasting effects; it's been a while since I've seen the public panic.

  • @tropiq
    @tropiq 3 месяца назад

I read somewhere that CrowdStrike has some ins with regulatory bodies, so companies just use it by default to comply because otherwise they'd have to jump through hoops. That means the people 'forcing' them to do this made the scale of this vulnerability much more of an issue, and they should also be held liable in some way.

  • @somewhatnerdy8112
    @somewhatnerdy8112 2 месяца назад

I remember a couple years ago someone hacked into a program that a hospital in my state used, and in turn it somehow infected all the emergency services in the state that used it. They loaded the service with ransomware and it crippled those services for a couple of months or so... hospitals, police, fire services, everything connected to the service was completely locked out. My mother, who's a nurse manager in a home health office here, said they had to scramble to break out old paper files that hadn't been touched in years, pull others out of storage, or try to get more recent physical paperwork from other hospitals and care offices because no one could log into the online services... equipment like EKGs and central computers just stopped working.

  • @flamerunner8016
    @flamerunner8016 2 месяца назад

I love how we had probably the 2 worst cyber incidents within a month of each other. In June we had the CDK cyber attack that took out about half of dealerships and cost billions, only to be one-upped by the CrowdStrike incident that crashed nearly the entire world.

  • @thezouchcoop
    @thezouchcoop 3 месяца назад

    When was this? I’ve never heard of this

  • @PoRRasturvaT
    @PoRRasturvaT 3 месяца назад

I had assumed that releasing an update globally at the same time wasn't good practice.
I work at a large company and our application updates are always rolled out gradually to users.
There is a cost and a tech challenge, because several versions need to coexist and keep working.
But CrowdStrike is a security company with major clients and, ultimately, lives at stake. And preventing this kind of damage (from malicious intent) is literally their business.
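A minimal sketch of what that kind of gradual, ring-based rollout could look like, in Python. The ring sizes, health check, and host names are all made up for illustration; this is not any vendor's actual process.

import time

ROLLOUT_RINGS = [0.01, 0.05, 0.25, 1.00]   # fraction of the fleet per wave
HEALTHY_THRESHOLD = 0.99                    # abort if fewer hosts report back healthy

def report_healthy(host):
    """Placeholder health probe; a real system would check heartbeats/telemetry."""
    return True

def deploy(host, update_id):
    """Placeholder deployment step."""
    print(f"pushing {update_id} to {host}")

def staged_rollout(hosts, update_id):
    done = 0
    for ring in ROLLOUT_RINGS:
        target = int(len(hosts) * ring)
        wave = hosts[done:target]
        done = target
        if not wave:
            continue
        for host in wave:
            deploy(host, update_id)
        time.sleep(1)  # stand-in for waiting minutes/hours on real telemetry
        healthy = sum(report_healthy(h) for h in wave) / len(wave)
        if healthy < HEALTHY_THRESHOLD:
            print(f"aborting {update_id}: only {healthy:.0%} of this ring is healthy")
            return
    print(f"{update_id} rolled out to all {len(hosts)} hosts")

staged_rollout([f"host-{i}" for i in range(100)], "channel-update-291")

The point is simply that the final, fleet-wide ring never ships if one of the earlier, smaller rings starts failing.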

  • @Vfnumbers
    @Vfnumbers Месяц назад

I was working that night at my hospital when everything rebooted and BSOD'd at 2-2:30 am. We thought it was a cyber attack and acted immediately. Our entire team went all hands on deck and stayed all Friday and through the weekend to recover critical servers and end devices to keep the hospital running.

  • @1DigitalFlow
    @1DigitalFlow 3 месяца назад

The crazy thing is that in the 90's, as computers became incorporated into industry, industries had crash kits so they could continue to operate if the network went down: credit card rollers to process payments, paper pads with forms that would normally be input into a computer.
Hell, in the restaurant industry they still have manual overrides. Ovens have computer boards that can change the temp, add steam, add time all on their own, but there is a manual override inside a panel for if the motherboard fails.

  • @AGentooUser
    @AGentooUser 3 месяца назад +1

Here in Egypt nothing shut down because we use paper. Now I appreciate the dictator XD

  • @Asher_skyfall
    @Asher_skyfall День назад

    Back during the summer there was a different outage that affected the auto industry, CDK. What was "great" was going into a management role a month after that outage, being told they were still recovering from it, and that my pay was going to take a hit because they were still fixing sales numbers and recovering from a loss.. like cool, didn't even work here yet, I have an agreed upon salary, but sure pay me less.

  • @n0refuge
    @n0refuge 2 месяца назад

My company pulled all the BitLocker codes, put them in an Excel doc, and did each laptop one at a time. Wild times.

  • @MichaelRiderRebornfromtheashes
    @MichaelRiderRebornfromtheashes 3 месяца назад

It hit us hard. With encrypted devices I got into a pretty good rhythm, but even with the tool I made it still took about 1.5 minutes per machine. I had thought of using something a bit better, but with so many people down I just didn't have the time to craft it. Still, regardless, touching as many machines as I did was awful. I set up a triage center and just had people bring their machines to me. It was the fastest way to recover things rather than going from place to place.

  • @metgath
    @metgath 3 месяца назад

I would have loved to catch this live. I watch a number of lawyers on YouTube, and from what I've learned, if one side is found, say, 25% responsible then they can also be held 25% liable.

  • @antivanti
    @antivanti 3 месяца назад

Quality assurance. Especially testing with enterprise-level security software, because of how hacky what they do is. But also gradual rollouts of updates, so if something goes wrong it doesn't go wrong on every machine at once and you can catch it and stop the update.

    • @nigh_anxiety
      @nigh_anxiety 2 месяца назад +1

My understanding of the cause from CrowdStrike's end (I admittedly haven't looked into this in a while) was that a file was empty that shouldn't have been. Another part of the software tried to read from that file and crashed with a null reference exception. It's possible the file itself was fine when tested, but something went wrong in their release process which corrupted the file, and it isn't something that could have been directly caught by QA at that time.
That being said, it seems like QA or dev should have caught the "bad file causes null reference" problem, as a null check should always be done before trying to access the reference, no matter how sure you are that it can never happen. It may be OK to crash loudly in dev, but the prod release should always handle it gracefully.
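To make the "check before you dereference" point concrete, here is a tiny Python sketch of failing safe on an empty or malformed content file. The real component is a kernel-mode driver in C/C++, and the file name and format here are invented for illustration.

import json
import logging

def load_channel_file(path):
    """Return the parsed config, or None if the file is missing, empty, or invalid."""
    try:
        with open(path, "rb") as f:
            raw = f.read()
    except OSError as err:
        logging.error("cannot read %s: %s", path, err)
        return None
    if not raw.strip():
        logging.error("%s is empty; refusing to load it", path)
        return None
    try:
        return json.loads(raw)
    except ValueError as err:
        logging.error("%s is malformed: %s", path, err)
        return None

config = load_channel_file("channel_update_291.cfg")   # hypothetical file name
if config is None:
    # Fail safe: keep the previous known-good configuration instead of crashing.
    logging.warning("keeping last known-good configuration")

Crashing loudly on a bad file is fine in a dev build; in production the caller falls back instead of taking the machine down with it.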

  • @jmkqfnvyl87
    @jmkqfnvyl87 3 месяца назад

If you don't have a way to switch immediately to manual record keeping, at least for a temporary span, then these things will shut down your business.
Any retail store can keep written records of transactions and process them later like we used to do 20-30 years ago, even with credit cards or checks, if you take down the info properly and have some level of trust with that customer.

  • @_lime.
    @_lime. 3 месяца назад

To somewhat counter what Thor said, this was preventable for the companies affected, at least for the larger ones. If you have critical infrastructure and you need to update the software, you test that in a dev environment first. As Thor said, the software providers won't test every possible combination, and you, as an end user, don't know if your particular concoction of software will cause a compatibility issue with a new update. So you use a separate test machine, load the update, and monitor the stability. If it's all good, THEN you push the update to all your machines.
Like, how is CrowdStrike or Microsoft gonna know that Burger King's order queuing software crashes with the new update? They won't, and if Burger King just blindly pushes that update to all their machines, no one will be getting any Whoppers. In that example it would be up to a Burger King IT employee, likely at the HQ, to first load the new update onto a test machine and see if it all works well together before releasing the update to all their machines.
Clearly a lot, and I mean a LOT, of companies weren't doing this.
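A tiny Python sketch of that "test machine first" gate. The smoke checks and update name are hypothetical; the point is just that the fleet-wide push is blocked until a representative canary box passes.

SMOKE_TESTS = {
    "order_queue_starts": lambda: True,      # e.g. the POS/order software launches
    "payments_round_trip": lambda: True,     # e.g. a test transaction completes
    "machine_survives_reboot": lambda: True, # e.g. the canary comes back after a cold boot
}

def canary_passes():
    """Run every smoke test on the canary machine and report any failures."""
    failures = [name for name, check in SMOKE_TESTS.items() if not check()]
    for name in failures:
        print(f"canary failed: {name}")
    return not failures

def push_update_to_fleet(update_id):
    print(f"approved: pushing {update_id} to all machines")

if canary_passes():
    push_update_to_fleet("vendor-update-2024-07-19")
else:
    print("holding the update; fleet untouched")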

  • @jdogwin
    @jdogwin 3 месяца назад

I built a custom setup for our org: you just boot to the boot media, put the machine in safe mode, then run a bat with local admin once you get into Windows. It still took over a week to really clean up, with a solid 72 hours of pain trying to touch all of our 100k+ devices.
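For context, the widely reported manual workaround was to boot into Safe Mode and delete the bad channel files, which is roughly the step a bat like that automates. Shown here as a Python sketch; the path and file pattern follow the commonly reported workaround but should be treated as illustrative, not authoritative guidance.

import glob
import os

CROWDSTRIKE_DIR = r"C:\Windows\System32\drivers\CrowdStrike"
BAD_FILE_PATTERN = "C-00000291*.sys"

def remove_bad_channel_files():
    """Delete the problematic channel files so the sensor stops crashing at boot."""
    removed = 0
    for path in glob.glob(os.path.join(CROWDSTRIKE_DIR, BAD_FILE_PATTERN)):
        try:
            os.remove(path)
            print(f"removed {path}")
            removed += 1
        except OSError as err:
            print(f"could not remove {path}: {err}")
    return removed

if __name__ == "__main__":
    count = remove_bad_channel_files()
    print(f"{count} file(s) removed; reboot normally afterwards")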

  • @skarlock5257
    @skarlock5257 3 месяца назад

    Regarding Cohh's point on judges or possibly even a jury deciding on who is responsible in this issue, judges are not experts on ballistics or blood spray patterns in murder trials. Judges are not mechanical engineers or chemists but rule on issues with car manufacturing defects and determine legal liability. Judges are not accountants or tax experts, but rule all the time on whether or not someone is guilty of tax evasion. This is normal, and this is why during a trial both sides are allowed to bring in their own experts on these things who give expert testimony, and both sides are allowed to cross examine these experts.

  • @jmkqfnvyl87
    @jmkqfnvyl87 3 месяца назад

If you don't have a way to switch immediately to manual record keeping, at least for a temporary span, then these things will shut down your business.
Any retail store can keep written records and process them later like we used to do 20-30 years ago.
I also worked EMS in a rural area, and legally it's fine to go without digital records as long as you do have a way to digitize them eventually.

  • @alexbourg4165
    @alexbourg4165 3 месяца назад +9

#SecurityByObscurity Part of the reason this happened at such a global scale is because the entire tech industry trusted one entity en masse. This also meant that if there was ever a major security vulnerability with CrowdStrike, that vulnerability would have industry-wide ramifications. Always control your digital destiny; take the time to build solutions in your company that can save your company money, time, and grief later down the road. You'll end up looking much better to your customers when you aren't affected by things like this.

    • @Illiminator31
      @Illiminator31 3 месяца назад

CrowdStrike doesn't even have 20% market share; stop this "en masse" nonsense.

  • @Reinkjaky
    @Reinkjaky 3 месяца назад +2

If Microsoft is pushing updates to critical infrastructure PCs, why are they pushing them to every such system at the same time? Shouldn't it be preventable (at least at such a large scale) if you just push updates in waves?

  • @thefifthwall5259
    @thefifthwall5259 2 месяца назад

CrowdStrike is crazy because I was stranded in Connecticut, while I live in IL, while my mom was slowly having a stroke (we didn't know until we got home 2 days later).

  • @BeckOfficial
    @BeckOfficial 2 месяца назад

The real issue is that vital machines and critical infrastructure are updated without testing the updates on an offline machine/test bench first, to ensure everything still works after the update. If everyone did this, the problem would never have happened, because no one would have rolled out that Windows update to their systems.

  • @RealTNSEE
    @RealTNSEE 3 месяца назад

I think this is going to push for lights-out, out-of-band management for every single laptop/desktop, the same way we do for servers. We can get into any server via iLO / iDRAC / BMC / etc. to fix this, but every desktop and laptop we have to go hands-on for.

  • @kairon156
    @kairon156 3 месяца назад

In Canada, Rogers went down nationwide twice, and they had to do some sort of "if our systems are down we'll allow other infrastructure to be used" kind of situation.
Canada isn't the whole world, but it's pretty similar to the CrowdStrike situation in that it halted a lot of stuff in Canada for a day, not once but on two separate occasions.
So it's not like this sort of thing has never happened; it's that our world is too connected via software, and a few wrong forced updates can wreck a nation.

  • @shinji391
    @shinji391 2 месяца назад

    Y2K was 20 something years late, but it finally happened.

  • @Jo21
    @Jo21 3 месяца назад

You can restore it, but you need to change the SSD. Also, you can get the key from the account linked to the machine.

  • @Will-ze6rq
    @Will-ze6rq 3 месяца назад

To access the file they are talking about, at my company we had to use the BitLocker key first; then, if it worked, we could make the changes. If BitLocker did not work, we reimaged the device. We had 12-hour shifts going for 7 days to fix over 6 thousand PCs.

  • @kataseiko
    @kataseiko Месяц назад

There's a funny side effect to poverty. The provincial airport I had to fly from had no outage because they could not afford CrowdStrike: just Malwarebytes and machines that can be swapped out in 15 minutes. It's crazy that this is the "defense" against that monopoly. You need to break up CrowdStrike and limit how much critical infrastructure gets "protected" by one company.

  • @russelllong3561
    @russelllong3561 2 месяца назад

What program does Thor use to draw on? Is it just Paint? My Paint doesn't look like that.

  • @stefand.5509
    @stefand.5509 3 месяца назад

I assume most of these machines are managed in an Active Directory. From AD you can assign something to run at startup, so you could deliver an update that breaks the boot loop that way.

  • @Danbotics
    @Danbotics 22 дня назад +1

    It seems this has largely blown over and isn’t being talked about any more. Were any decisions made about blame/liability?

  • @Badger21288
    @Badger21288 3 месяца назад

I used to do tech support for Apple 8-9 years ago, specifically for iOS. When someone phoned and complained about a 3rd-party app not behaving as expected after one of our updates, it was standard practice for us to refer them back to the app developer. Apple's stance at the time (it might have since changed) was that the 3rd party was responsible for app compatibility with iOS, not Apple.

    • @themarshman
      @themarshman 3 месяца назад

Except CrowdStrike Falcon is not an app... it is a kernel-level driver, so part of the overall operating system. Not saying the OS OEM is responsible for bad coding practices in drivers, especially boot-sequence kernel-level drivers, but it isn't the same as a true "app" being incompatible with the OS at the user-level API.

    • @Badger21288
      @Badger21288 3 месяца назад

@@themarshman I know there are big differences in the situation. Just saying, if Microsoft works anything like Apple, I wouldn't be surprised if it turns out CrowdStrike is on the hook for this.
I guess we'll just have to wait and see how the chips land to know for sure.

  • @CarbonFanatic
    @CarbonFanatic 3 месяца назад +18

The fault was 100% CrowdStrike. A function was changed to take 21 arguments, but they only gave it 20, with no error checking, so it exited with an error code.
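If the mismatch really was along those lines, the missing guard is just a length check before indexing. A hypothetical Python illustration; the field names and counts are invented, not the actual code from the incident.

EXPECTED_FIELD_COUNT = 21

def apply_template(fields):
    """Refuse to apply an update whose field count doesn't match what we expect."""
    if len(fields) != EXPECTED_FIELD_COUNT:
        raise ValueError(
            f"expected {EXPECTED_FIELD_COUNT} fields, got {len(fields)}; refusing to apply"
        )
    for i in range(EXPECTED_FIELD_COUNT):
        print(f"field {i}: {fields[i]}")

try:
    apply_template(["value"] * 20)   # one field short, as described in the comment above
except ValueError as err:
    print(f"rejected update: {err}")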

    • @cloudyview
      @cloudyview 3 месяца назад +6

      @@CarbonFanatic yeah, I almost wonder if this was an older episode from before the RCA was published or something? If not, it's simply ignorant to say things that are completely inaccurate...

    • @MachineWashableKatie
      @MachineWashableKatie 3 месяца назад +3

@@cloudyview I'm like 60% sure this is from like the day of the incident

    • @cloudyview
      @cloudyview 3 месяца назад +2

      @@MachineWashableKatie just found it - 7/21 was the pod release date, 7/19 was the outage, so it easily could have been day of or day after it started

    • @akanotyetdecided
      @akanotyetdecided 3 месяца назад

      ​@@cloudyview This was recorded on Sunday 10 PDT 1 EDT

    • @DrMixelpixel
      @DrMixelpixel 2 месяца назад

      The investigation showed they test in prod.... Chad move until it is not. 😅 The CEO should go to prison, not just fines, PRISON TIME, YEARS!

  • @Immudzen
    @Immudzen 3 месяца назад +1

It is CrowdStrike's fault, period. Their kernel driver read some data and it faulted, which took the system out with it. They are the ones who wrote their software so it mostly sits in kernel space, and they are also having it read data files without sanity checking. If the software were well written, it would have hit a user-space layer first to try to read the file, and if that failed it would have continued using the previous configuration, or even disabled itself as a worst case, with a message.
It also looks like they skipped normal practices like unit tests and continuous integration, along with phased deployment. Don't defend their poor-quality software. The OS is designed to terminate if a kernel driver fails. It must do this as a protective measure, because you can't just kill a kernel driver and assume that memory and the rest of the system are consistent.
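One way to picture that "hit a user-space layer first" idea is to parse the incoming file behind a crash boundary and only apply it if that survives. A rough Python sketch with invented names; the real fix would live inside the sensor's own architecture, not in a script like this.

import multiprocessing as mp

def parse_config(path):
    """Stand-in for the risky parsing logic; raises on bad or missing input."""
    data = open(path, "rb").read()
    if not data.strip():
        raise ValueError("empty config")
    # ... real parsing/validation would happen here ...

def safe_to_apply(path):
    """Run the parser in a child process so a crash there can't take us down."""
    proc = mp.Process(target=parse_config, args=(path,))
    proc.start()
    proc.join(timeout=10)
    if proc.is_alive():
        proc.terminate()
        return False
    return proc.exitcode == 0

if __name__ == "__main__":
    new_config = "incoming/channel-update.cfg"   # hypothetical path
    if safe_to_apply(new_config):
        print("new config validated; applying it")
    else:
        print("new config rejected; keeping previous configuration")

If the throwaway process dies or times out, the caller keeps the previous configuration instead of letting the failure reach the kernel-side component.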

  • @Skewrz
    @Skewrz 3 месяца назад +6

I find the "raising people in particular fields up as heroes" thing to be a bit odd. I understand they do VERY important work, but so do a lot of others who keep this giant balancing act we call society running. Yes, I'm sure IT guys are treated like shit, but so are MANY others. "Nurses save lives," but so do the people keeping the machines running, the power lines operating, the generators built to keep this modern era running, etc.
    "If everyone's a hero!.. No one will be."

    • @SenshiSunPower
      @SenshiSunPower 3 месяца назад +3

Most people are heroes when the time calls for it. Overpaid executives can never be.

    • @gracefool
      @gracefool 3 месяца назад

      @@SenshiSunPower Of course executives can be important too... regardless of whether they're overpaid.

  • @MattMcMatt
    @MattMcMatt 2 месяца назад

It's also entirely possible that CrowdStrike and Microsoft tested the latest update in isolation and it worked, but it broke when pushed to live end users. Really curious to see how this unfolds.

  • @o0shad0oo
    @o0shad0oo 3 месяца назад

In theory it's the fault of the IT departments of the companies affected. They have corporate versions of Windows where they set policies on updates, and they ought to be testing updates before allowing them to be applied. If Microsoft isn't letting them test properly before updates get pushed through, then that's another matter. It was different back in W7/W8, where you would manually apply updates or have corporate specifically push them.

  • @MatiasKiviniemi
    @MatiasKiviniemi 3 месяца назад

Hospitals etc. are also a potential guilty party. If your job is to run literal life-support systems, you need to prepare for computers crashing. Once you decide to install BitLocker, you are responsible for maintaining the keys.

  • @PedroOliveira-sl6nw
    @PedroOliveira-sl6nw 3 месяца назад

    I have heard a lot about this topic, especially from ThePrimeTime and Dave's Garage, which I recommend.
But as someone who has worked as a sysadmin at a university (400+ PCs) and someone who likes QA & testing, I just don't understand how this was deployed worldwide without anyone noticing. Let's take the two perspectives from this video:
[CrowdStrike's update to blame]: How did CrowdStrike not have a vanilla Windows PC to run this on live (production) and see it crashing?
    [Windows' Second Update to blame]: