@@markusklyver6277 can't really get around the way we all rely on trust in "software development practices". When you submit a PR, people assume you're acting in good faith. Realistically speaking, maintainers can't look at every single line of code in every single PR hunting for vulnerabilities; something somewhere is gonna make it through, and in this case ANYONE can do it
Not really. The xz compromise was a sophisticated social engineering attack over several years, by someone who ingratiated themselves into the project. It could equally have happened with someone working on a small proprietary product.
@@jaitjacob how is that agreement? The guy in the video is making this out to be some kind of endemic problem with open source only, while I'm saying much the same could happen with a proprietary product.
@@lambchomp1472 and how are the programmers who work on software for companies like Microsoft or Google not "nobodies"? In both cases, there's an element of trust. I would argue it's a lot harder to get away with something like this given the visibility of an open source project, where conversations and code commits are visible to anyone who cares to look. Also, plenty of these package maintainers are doing so as employees of companies, and companies now use open source components to build their products, so it's not as simple as one against the other.

The xz compromise was a pretty sophisticated social engineering attack. They spent a lot of time ingratiating themselves into the trust of that community. It's very easy to look back with hindsight and say that trust shouldn't have been given to this person, but it's not that obvious when you look through their commit history, which also includes contributions to other projects.

In the same way a burglar cases a building to get in through an open window round the back rather than the front door with all the security cameras, xz was attacked precisely because it was a piece of software heavily used by others, but with a small community itself. One of the key things to learn from this is that such small projects need more observation and support, particularly if they are being relied on by much bigger projects that can afford to help their dependencies.
Hardware backdoors are more of a concern than open-source kernel code. Especially when you consider the fact that the countries that are making these products have every reason to do so.
@@adammontgomery7980 Not at all. If you have access to hardware and software you'll do both. Most of these people he's referring to don't have access to the hardware, so software is the only option.
@@adammontgomery7980 it's simply more centralized, closed, and obfuscated. Your hardware almost certainly has an Intel, AMD, Nvidia, or Qualcomm product in it. The products and technology are usually licensed, and few people know every detail of them. Software exploits didn't fail, but if you're a country's intelligence office, you're better off backdooring hardware.
Yuri Bezmenov explicitly told us all this, including "they'll purpose-build people to get hired and promoted whose only job then becomes to hold open all the doors". People are quick to forget and dismiss. Also, there's literally a contest called the Underhanded C Contest where participants submit C code that looks harmless but hides malicious behavior; the entry whose flaw is hardest to spot wins. It's been running for years now. You have no idea what's possible, y'all.
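In the spirit of those Underhanded C entries, here's a tiny made-up sketch (not an actual contest winner; `payload_len` and `HDR_SIZE` are invented names) of how an innocent-looking line can hide a serious flaw: an unsigned subtraction that silently wraps.

```c
#include <stddef.h>
#include <stdint.h>

#define HDR_SIZE 8  /* hypothetical fixed packet header size */

/* Looks like a trivial helper that strips the header from a packet
   length. The underhanded part: when len < HDR_SIZE, the size_t
   subtraction wraps around to a value near SIZE_MAX instead of
   failing, so a later copy of "payload_len(len)" bytes becomes a
   massive out-of-bounds read/write. */
static size_t payload_len(size_t len) {
    return len - HDR_SIZE;  /* wraps when len < HDR_SIZE */
}

/* The boring version a careful reviewer would insist on instead. */
static size_t payload_len_safe(size_t len) {
    return len >= HDR_SIZE ? len - HDR_SIZE : 0;
}
```

A later `memcpy(dst, src + HDR_SIZE, payload_len(len))` would become a huge out-of-bounds copy whenever `len < HDR_SIZE`, which is exactly the kind of one-line flaw that sails through a casual review.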
Getting someone into the hiring pipeline at the bigger companies avoids a lot of the difficulties he's talking about. But then there are two people you need to push into those companies.
As a developer of open source software, I do agree with what he says. Most PRs introduce bugs, bad code quality, etc. Whether something is intentional, I can't tell. But the more contributors a project has, the more bugs it has. One presenter at a conference once said that users are great at detecting problems, but they are horrible at providing solutions for them. I still think OSS is great, for different reasons though.
Jon has no idea what he is talking about. It's unfortunate. He says "just make a major feature on OSS and put your bugs in there" - that's why guidelines exist. Small PRs. I honestly don't get how so many people take this guy's word as gospel, when quite clearly he speaks on subject matters he knows very little, if anything, about. OSS is just as secure (or insecure) as proprietary software. Reading Windows's code is easy for these espionage outfits - they have the hardware to do it: just install Windows on their hardware and voilà, you can disassemble the entire code base. Yes, with any software development comes risk and risk assessments, bugs and debugging. Nothing special about open source at all. Not shockingly, Jon claims otherwise, without knowing the first thing about it.
@@gevilin153 I was expanding on the argument of security by obscurity with respect to closed source, and if I recall he made a comment about not being able to know the source code of closed source software, which is just factually wrong: as long as you control the hardware, you can know what instructions are running. As for introducing bugs, it's just as likely to happen in closed source software, simply because every major open source application - say, for instance, Firefox - has reviewers just like any other product, which is what I began my comment with. Thus, if you decide to contribute to Firefox (and I definitely encourage you to do so; it's an amazing community if you are interested in software development), your submitted patch won't just be let through without review, which Jon claims it will.
@@simonfarre4907 that’s just not true though. Open source contributions can come from anybody. Microsoft, for example, vets all of their developers before even allowing them to contribute to their code. It’s much less likely that an intentionally malicious actor will even get to touch the Windows code base than they would for Linux. Even if the reviewers of OSS projects are good, they can make mistakes and miss things. The same can be said for companies but at least the chances of someone intentionally adding vulnerabilities is drastically reduced.
@@catsby9051 What about target value? What about reach? These are reasons why state actors and hackers would tend to focus effort on exploiting windows instead of Linux. What about not shitting where you eat? What about effectiveness of hacking tools? Presuming hackers use Linux, these are reasons why they would tend to improve it rather than fill it with bugs.
The hardware back door situation mirrors the Microsoft situation that JB explained. In CPU companies, there are a lot of people cross-checking everyone's work, even more so for security stuff. I design hardware security features in CPUs you use, and I've spent years identifying the back doors in specs (NIST, ISO mostly) and working around them. It's my head on the line if my logic is insecure, and I'm fully aware of the forces trying to undermine hardware security.

The motherboards and BIOS code are an easier entry point for government hackers. It's easier to pay off a few people in a factory to replace a network transceiver chip with your own. Security problems in CPUs are hardly new and have come about through traditional hacking methods rather than back door insertion; the vulnerabilities exist in the first place because of a necessary trade-off between execution speed and side channel resistance.

The danger with closed source, whether for HW or SW, is that there is limited energy in the company for others to help you. Top tip, happening right now for people designing to specs: try to find a constant-time BCH error correction implementation in a secure sketch construct. Critical for your security, but no one sells a constant-time BCH - it's unobtainium. So you need to design around that if you're in the position of having to design it. HW security is hard work.
He does raise some good points. So really the solution is to reduce surface area and create paradigms that are designed to be secure if it truly is something so important.
The best solution in my mind is usually open protocols with several implementations done by different people (and possible to do yourself if you want to). A good open protocol democratizes software in most ways just as much as open software and source code do. I do like open source, but as Jon says, you have to be skeptical of what you use and allow to run. I like OSS not for the security but for what it allows you to do and make with your software. I kinda feel Jon gives a bit too much credit to corporate closed source applications though. They might be harder to introduce an exploit into, but closed source apps can hide their shitty code, and that shitty code creates exploits as well. People laugh at "security by obscurity", but it is an OK first line of defense. It is not good security, but it sets an OK first bar. A lot of companies rely too much on that bar though and don't have real security behind it. In OSS that bar is removed entirely.
Software, computers and access to the internet should not be democratized at all. Also, most people shouldn't have voting rights and democracy is a shitshow.
Companies spending massive money on security is a joke. I worked at a few big IT companies as a developer and even as a QA/automation engineer. The careless approach to vulnerabilities is staggering. Some were as simple to fix as changing the header configuration on the server. When I applied the fix, I was mailed/contacted by higher management saying this was not possible because the application would not work. After suggesting that in that case we would have to redesign/rewrite a piece of the app, I was told there was no budget/resources/time for that. After a while I stopped caring about it. They were assigning junior resources to fix vulnerabilities (maybe 2 resources at most), while all the experienced people were formed into teams of 8 or more to work on ADA/WCAG issues, as that is "vital" for the business.
isn't every cpu already pwned with a backdoor directly by intel? there's a closed source computer inside every computer that has full access to everything, including the network, even while your pc is powered off. if you weren't aware of this, look up the Intel ME (Management Engine)
@@yasserarguelles6117 it's been researched in the past and found to contain various flaws. That's from people who had to reverse engineer it. Imagine looking through the source provided by / stolen from the manufacturer
There was recently an attack on the system one of our big food market chains uses to register sales. They had to shut down for several days, some stores even a week. Imagine if all our food markets had been connected to it; we would not be able to buy food. How many people have a week's worth of food at home nowadays?
With closed source, the government can just tell the company to insert the black box in their code and not tell anyone; with open source they have to sneak it in. I feel like it could be a larger issue with closed source than with open source.
There have been several instances of bugs found in Linux, bugs that had been around for years, that allow complete bypass of permission checks. Who knows if they were planted intentionally. Very likely they were actively exploited for almost as long as they existed.
These bugs were found well before they made it into stable releases. Also, Microsoft has the same problem: Microsoft has offices in China, and it's fairly easy to get a job there if you're competent, and then you can attack it
There are active actors in open and closed source projects introducing bugs, stealing credentials, etc. That doesn't matter much, since we all know it's true and the number of security issues introduced by mistake is so much larger. This is why you have multiple layers of protection. Military computers are air-gapped from the internet. You run custom builds of the OS and the stack. Firewalls, tracking, anomaly detection. You have layers upon layers of protection.
I can relate to what Jonathan is saying. A few years back there was a movie that portrays his exact sentiments. It's called Blackhat. The harsh reality is that although the movie itself is a fictitious story, it is also very believable and could realistically happen. Everyday people have no idea what organizations such as the CIA, the Kremlin and others are capable of doing, and they've been perfecting this since the 40s and even earlier! Everything is vulnerable! As was said in The Matrix, paraphrased: "Everything relies on a system, and that system relies on another system... right down to the power grid itself!"

If I were in a position to plan a strategic attack that would not escalate to "total destruction", yet would be enough to cripple another entity (nation), I would target their infrastructure by knocking out their power grid, water supply lines, and food lines, and I would do it in a manner where it could all easily be repaired, with minimal damage done to those systems, so the cost to repair and rebuild that infrastructure stays minimal.

The world is a stage, and that stage is analogous to a chess board, and the pieces are moving! In a well-thought-out game, the objective in the end is either to put your opponent into checkmate or to cause a stalemate. Throughout that game it may be premature to force an early checkmate, as it may be more beneficial to just keep putting them in check while slowly manipulating them into having to guard against it, sacrificing their pieces in the process.

In some ways Open Source does have potential for research and academic purposes. However, in the real world, just like everything else, there are always vulnerabilities, and the number of them within Open Source, or the possible combinatorial doorways to them, is much greater than in a closed system.
Take, for example, a power generating system where all of the computers and databases that manage its internal machinery form a closed system. It has its own intranet, but it is not directly connected to (nor will it ever be connected to) the internet, making it less vulnerable. For there to be an attack on that system, there must be a physical presence.

Anything that is connected to the internet is vulnerable, as it is part of the playing field. No written program is guaranteed to be 100% foolproof against attack. Why? Because every program, at the end of its compilation or interpretation, is converted down to assembly, a.k.a. machine code. Even your compilers are not without bugs! Even your assemblers, although very solid, can still have bugs. And the machine code, or binaries, are instructions for a piece of hardware to do a specific set of tasks in a specific order.

This is even true within modern processors that implement out-of-order execution. It is still based on a lower-lying system, and at this stage of the game that is the pipeline of a CPU and its stages: Fetch (and/or Prefetch), Decode, Execute, Read, Branch Prediction, Write Back... this being a generalization of modern pipelines. And even your hardware is not guaranteed to be without bugs, nor is the microcode that propagates through the pipeline stages. A line goes high or low and bits are set or cleared. Then all of the muxes and demuxes switch their lanes. Values are passed into the ALUs, a calculation or set of calculations is performed, and another instruction or set of instructions takes place based on some condition of the previous results or on the next incoming instruction(s). Wash, rinse and repeat! We are dealing primarily with voltages and currents.
It's no different than when you walk into your living room, kitchen, bedroom or bathroom and flip the light switch on the wall. This is a two-state system, called a binary system. It is log base 2 and 2^n mathematics in conjunction with logic, truth tables, and Boolean algebra. The binary number system itself, with an unbounded number of digit placements, has been shown to be Turing complete (the lambda calculus being an equivalent model of computation). When something is Turing complete, it is capable of being programmed to mimic or emulate any other machine that is able to do computations.

This all relies on mathematics, physics, and numbers along with their properties, rules, laws, theorems, postulates, proofs, and axioms. Numbers don't actually exist in nature; they are a product of the mind, as they are conceptual. If one can imagine it, who's to say that it cannot be done?

Take the simplest and first expression within mathematics, even without the use of "numbers": y = x. This is the identity expression and equation. It is a linear equation, and within this expression of equality there is perfect symmetry and reflection. This expression has all levels of mathematics embedded within it. The line for this equation, using the slope-intercept form y = mx + b, has a slope of 1 and a y-intercept of 0, both understood through the identity properties of multiplication, addition, and exponentiation: a*1 = a, a+0 = a, a^1 = a. And with that we have a diagonal line that bisects the X and Y axes within the 2D Cartesian plane.

The slope m is more than just rise over run. It is also dy/dx, which is one of the foundations of calculus, even without the notation of limits. Here slope is defined as rise/run and can be determined by any two points on that line via (y2-y1)/(x2-x1), where dy = y2-y1 and dx = x2-x1. Beyond the properties of linearity, the trigonometric functions are also embedded within y = x.
Since the slope of this line is 1, and we know that the X and Y axes make a 90 degree (pi/2) angle at their intersection since they are perpendicular to each other, we also know that the angle below the line y = x and above the +x axis is 45 degrees, or pi/4. What trig function has an output of 1 when its angle is 45 degrees (pi/4)? Quite simple: tan(t). So we can substitute y = mx + b with y = tan(t)x + b and we have not changed the linear equation. We also know from trig that tan = sin/cos. We can then rewrite this as y = (sin(t)/cos(t))x + b and still have not changed the function. We can see that dy corresponds to sin(t) and dx to cos(t), since tan(t) = m and m = dy/dx. Even the number pi is embedded within this equation, as are the Pythagorean theorem and the equation of a circle.

The main reason for this is a property of mathematics that is normally overlooked. It is present within multiplication and addition, but not within division or subtraction: the commutative property, where order doesn't matter. Order only matters when you are doing the inverse operations. Y = X is the same as X = Y; no need for parentheses. 3+2 = 5 and 2+3 = 5, while 3-2 = 1 and 2-3 = -1. Do you see the symmetry and reflection? 3-2 and 2-3 are opposites. The results are 1 and -1: values 180 degrees (pi radians) apart, with the same magnitude but a different direction.

I don't want you to think this is a math course, but I had to lay down some principles and a foundation for one to fully understand the implications of what Jonathan is saying. Everything done within computers relies on the physical components and on the behavior of the electrical current being supplied to and drawn from that system, and this is nothing more than a physics problem, which completely relies on mathematics and numbers and how we relate them to the physical world, or nature. That system can always be manipulated!
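The slope identity that comment walks through can be written out compactly (this is just standard trigonometry, nothing specific to the video):

```latex
m = \tan\!\left(\frac{\pi}{4}\right)
  = \frac{\sin(\pi/4)}{\cos(\pi/4)}
  = \frac{\sqrt{2}/2}{\sqrt{2}/2}
  = 1,
\qquad\text{so}\qquad
y = \tan\!\left(\frac{\pi}{4}\right)x + b = x \quad \text{when } b = 0.
```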
There are governments and billion-dollar industries and corporations out there, and they spend millions, even hundreds of millions, and in some rare cases billions to "protect or expand their assets", and they will hire some of the best minds to find the best exploits that will go unnoticed and hidden, possibly even covering their tracks and leaving no evidence behind... We've been doing this since at least WWII within the modern era of computer systems, and spying and espionage have been happening since Cain and Abel! As he has stated, he's been in the industry for 20+ years; he's very intelligent and capable, and yet there are people out there whose skill set is way above his pay grade! Not to insult him for his years of dedication, work and success, but there are people out there who would make him look like a novice. So if he is giving you a warning, you should heed it with caution!
Most developers are going to value portability > security, so package managers and OSS are not going away anytime soon, just more security products as a band-aid solution
start giving blue verified badges to libs / frameworks in package ecosystems lol, what else... :D might be a new job type... "code investigator" lol... :D
3 things:
1. If a large PR brings in a significant feature but introduces a security vulnerability, it's still a significant contribution. Bugs are often introduced as a result of oversight by the contributor, so this effectively just means spy programs are building FOSS software (yes, bugs are bad, but that leads me to point 2).
2. It's the responsibility of FOSS maintainers to catch bugs.
3. It's the responsibility of the people depending on that FOSS software to contribute to the ability of FOSS developers to work on the project, or to contribute their own additions.

The lack of 3 is what is wrong with the FOSS landscape at this time. Companies use and abuse amazing FOSS tools that some developers put years and years of their lives into, and those developers get no money out of it. Sure, some are well off enough to build these tools and have enough time, but these backdoors and other cybersecurity issues result from a lack of contributors, and a lack of contributors comes from a lack of incentives. FOSS can only be good AND free if it can attract contributions. If these megacorporations never help with that, then this problem will persist; they cannot have their cake and eat it too.
If you willingly use a license that allows anyone to do as they please you cannot complain when someone uses your software to make money without you in the picture. If you decided to license your code with MIT and a mega-corporation forked your project and kept their contributions private there is nothing you should complain about. This was a decision you made when licensing your code like this.
FOSS is about letting the user modify the code that they acquired and allowing them to share it. That's all. It doesn't have to be free of charge. You are allowed to sell open source software. You're allowed to sell support for your program. You're allowed to sell maintenance. If you want to make money, sell something, don't give things away for free and complain that people got it for free.
in frameworks like react the codebase gets vaster and more complicated by the day. i'm sure there aren't enough bug fixers or reviewers to go through all of it on a continuing basis, and not to mention purposefully adding bugs is much easier than finding, testing, and fixing a bug
@@spicynoodle7419 Not anymore. Vercel has largely driven React in the past few months, and they've drastically upped velocity while obscuring features. Things like Suspense, etc. React today is not the React we had in 2021
bad example.. react has a core team of around ~5 people (at least it used to). open-source is not as open as people actually think... lol... go try making a feature and a PR in some popular lib / framework.. come back and tell me how "open" it is... :D
I don't like how he counters differing opinions by just saying those people aren't giving examples, while he himself just claims to have experience and doesn't give examples either. I get that we KNOW he has experience, but that doesn't mean he's not just using that as a card to play. Not to say he's lying, but if he gets passionate about something, he could be internally biased toward his past rather than using it to construct the right answer. If he constructed it, he could show receipts, so to speak.
well, we can fall into the Ken Thompson backdoor problem ("Reflections on Trusting Trust"), a very interesting topic. Also don't forget that Intel shipped a running, working copy of Minix in the Management Engine, effectively running at "ring -3". So the exploits and backdoors are everywhere, even beyond our reach.
I'm sure this happens, but I don't think it happens in the Linux kernel much. Those guys care more than most product managers. And furthermore, in a company bugs happen because they have deadlines. That does not happen in open source
Surely it would be easier for governments to just direct companies within their nations to build back doors into their software. The issue with FOSS is that literally any researcher can just go and look at the source code and test it for bugs. Whereas if, say, Microsoft were directed to implement such a feature, you'd only have the devs involved and upper management as points of failure.
JB is simply wrong here, and it's sad to see someone with such capability and insight in other respects fall for such naivety. A thought should be followed to its eventual *final* conclusion, not just the next logical one.

'Expense' means absolutely fsckin' nothing for a state actor when it comes to deliberately orchestrating the insertion of code to facilitate the exfiltration of data, the future exploitation of a software system, etc. Inserting an 'agent' or whatever into a corporation for whatever purpose is purely a tertiary concern for a governing agency which presides over the soil on which that corporation conducts its primary operations. The corporation will always have a financial motive first - not the concerns of the general populace - and thus is easily bought off through any number of options. Simple preferential treatment in an upcoming contract auction is often adequate to buy oversight over many smaller corporations, and the bigger ones aren't much more difficult.

Conversely, hiring, training, monitoring, and deploying assets to introduce complicated security flaws into OSS projects - hoping the bugs go unnoticed by many millions of others in a community for an adequate period of time, while ensuring those assets remain able to make changes to safeguard the flaws in the event that other code is introduced which inhibits them - is decidedly more expensive over time. You can pay a corporation to overlook something in a private codebase, or even just dictate that they engineer something in a specific manner to maintain compatibility with self-designed platform restrictions - you don't even need to explain your reasons - and they will gladly make those accommodations so as to preserve the financially beneficial relationship.
You cannot, however, exercise that kind of influence over a continuously fluctuating populace in a diverse and complex group of communities, especially as other agencies are apt to do the exact same thing. The bottom line here is that it will always be easier, more efficient, and overall more effective to influence private codebases than public ones. OSS is not, by any means, impervious to malevolent meddling, but it will always be 'never worse' than private codebases. The acquisition and privatization of many major OSS projects proves this point pretty effectively.
@@GeraldOSteen How is that possible? It's obvious these things are mainly to be used to spy on those taking special care not to have their data hosted by a corporation (think state secrets, personal data), and as a means of industrial sabotage. Besides, if you think you can just have a US agent walk into a Chinese bank and offer them preferential treatment in exchange for being able to spy on them, I wish you good luck with that (which makes your verdict of naïveté seem rather funny). It's way over my head how you can feel like your point has been proven.
@@imranzero this seems very wrong to me. sqlite is public domain. nothing prevents you from redistributing a modified version. i don't think the common definition of "open-source" implies upstream must accept contributions. it would seem highly pointless to me to downgrade *public domain software* from "open-source" to "source-available", when source-available implies you might not be able to modify and reuse it legally.

even if it were a good idea to integrate that concept into the term - it isn't - the commonly accepted term (aka the OSI definition) has a strict definition. being "open to contributions" would bring in a lot of questions about how extensive "open" should be:
- would unmaintained open-source software become only source-available?
- what about projects that ban specific abusive members from contributing?
- hell, how would you universally define acceptable guidelines for what should be "reasonable" quality standards for accepting contributions?

so, would refusing to merge a PR that adds ASCII art genitals to your CLI help message downgrade your software to source-available? can you find anything in the OSI definition that is that vague?
@@asuasuasu The vagueness of the term "open-source" is the core of the problem here. I agree that open-source can be used to describe something in the public domain which doesn't accept upstream contributions. But I think I was referring to the "open source culture" (Linux kernel) that Jon touched on in the video. Nevertheless, "Open Source™" has been hijacked by everybody and their grandma and is basically meaningless at this point.
@@beefchampion2792 nothing in the open source definition requires the developers to accept contributions. As long as the source code is available and you can make changes to it and redistribute it (along with your changes), then it's _free and open software_.
There is nothing you can do against secret agencies. I'm a cybersec student, and that's literally the first thing your teacher will tell you: "Accept that you are already owned". OSS can be poisoned, yes, but the good it brings to the world is too fucking big. You can't pretend it won't last just because it has flaws. Spoiler alert, Sherlock: everything has flaws, especially humans.
I don't know enough about this, but I would like to think that if there is a problem, more eyes on the code at all times would find and report it before a closed environment would. Either way, I can see his point: one approach requires more effort to compromise, and if you get caught you basically need a new spy. On the open project you can skip that step and go full-on offense. But if the people checking the code did their job properly, you would have highly competent spies improving your code for free, because in order to be merged they would need a big, impressive, working feature, big enough to hide the bugs; so if those bugs were found, it would be a win-win for the users. It would be a bummer to lose great software like Blender, OBS, VLC, Audacity or Linux
1. More eyes on the code indeed means a higher chance that the good guys will find exploits, but notice that it is not obvious that open source should mean open for contributions. 2. No need for a big impressive feature. Just a few small isolated changes made to look as if they were contributed by different parties.
It's not at all true that more eyes equals more bugs found, particularly not subtle and security-relevant bugs. 10 high-level vulnerability researchers would find many more bugs than 500 good software engineers.
@@lucasjames8281 Yeah, because 90% of the time high-level vulnerability researchers are funded to do such research; it's literally their profession. Dozens of vulns have been found through Project Zero. This is also entirely unrelated to the idea that more eyes catch more bugs, which happens at the time of PR review or during a code audit.
Yeah, people will try, but they will often fail, at least with projects like Linux. Linus is very strict about which PRs he includes, and there are a lot of people (at least on large active projects) who work on and look at the code.
6:52 Ironic that Jonathan is the one not knowing what he is talking about, because this is absolutely true, every company I have ever talked to that majorly contributes to Linux is invested in committing security patches to the kernel, and reviewing their code. In fact, most such companies produce both open- and closed-source software, so Jonathan's argument is dead in the water to begin with. Companies are hesitant to open-source software because maintaining it is expensive, but maintaining software is expensive to begin with whether it is open source or not, that's why experienced developers get paid so much.
His initial point about open source software allowing anyone to insert bugs for personal/national gain is a very good one; I fully agree with that. There are thousands of projects that people just download and drop into their own project with nearly no oversight or validation. But to then say the Linux kernel MUST have exploits injected into it is a WILD idea. Linux development is highly funded, with hundreds of some of the most experienced developers in charge of PRs and fixes. Changes are highly controlled. By the time a change of any kind actually lands in the real kernel, it has gone through more eyes, more skilled eyes, and more rewrites than any of the software developed at an Amazon or Microsoft. If you wrote an exploit in a sneaky way, chances are it'll get rewritten before being committed. I hate to sound like a Linux stan, because of course there are issues with Linux. But secret security bugs are a laughably unlikely issue. On top of that, the Linux kernel is funded almost exclusively for security and stability reasons, since it's funded to be used on servers by million/billion-dollar companies around the world. There are no incentives to skip security, because security is why it is being made, unlike a Windows or AWS (etc.). I can't write a piece of code and get it into the Linux kernel, probably ever in my life. To me, the best defense against exploits is the sheer magnitude of eyes on the product. Linux has far more people watching not only the code but what happens on deploys, thousands of deploys of different kinds. So not only would it be unlikely to get malicious code in, but even if you did, how long would it stay undiscovered? It would be easier to compromise the small group of, say, Microsoft developers on the Windows kernel. I'd also guess a far better attack location would be the application layer; you can get exactly what you want from it. But maybe that's more for just data.
But to acknowledge what Jonathan is trying to say here: defense is harder than attack. I get that. Someone could insert attacks that are very, very hard to see, and there is no software developer who would identify them in the PR. But exploits seem to appear regardless of the developers' intentions. For decades, software has been released with exploits that compromise security and stability. I don't see how discovering exploits in Microsoft's Windows, or other well-funded private development firms, is any different from attempting to inject exploits. The main difference is that many people are at least watching Linux, and the last couple of decades suggest that the open-source project has a much more solid history, with fewer bugs/exploits.
Look into how the University of Minnesota got banned from contributing to the Linux kernel. They successfully added many vulnerabilities as a "proof of concept". And they are not a highly sophisticated attacker; a state actor has far more time, motivation, and resources.
But that doesn't mean the NSA uses a free Linux distro such as Ubuntu or Arch. They might use Linux, but they probably built their own Linux system and kernel.
@@dsd2743 their comment could be interpreted in two ways: 1. the NSA backdoored Linux, so it's encouraging people to use Linux; 2. the NSA backdoored closed OSs, so it's encouraging Americans to use Linux.
It doesn't. That's why Jonathan is wrong. It's just as easy to introduce security bugs into the Windows kernel for instance. They don't have a big security team today because Microsoft primarily is focused on Azure and the AI race, and if you get hired (low bar at Microsoft) you can introduce security bugs all you want
I do not agree with what is said in this video. Open source software means that anyone in the WORLD can watch and contribute to the source code, so obviously people with DIFFERENT INTERESTS. If someone tries to add new code, it can be seen and reviewed by anyone, whereas in a closed-source project most likely nobody will ever check it again (once it has been validated by the manager or quality team). Also, being open source doesn't mean you can publish anything; you still have to go through a validation process, like in a private company. With a closed-source project you're BLINDLY TRUSTING the software producer; you have no way to actually check the source code, so it's obviously bad from a security point of view.
I've said it before and I'll say it again. Comparing a tiny open source project without much oversight to Windows is ridiculous and is like comparing apples to oranges. Compare the LINUX KERNEL to WINDOWS and then we're talking. The Linux kernel has tons of oversight (Microsoft, Google, Meta, AMD, Intel etc all contribute to it, have maintainers from those companies and is used extensively by these companies). With Windows the security team is actually shrinking as Microsoft is focused on their most profitable businesses like cloud/Azure and AI so it's quite easy to get hired if you're competent and introduce a backdoor.
1:35 Someone already tried this with the Linux kernel... and failed. Same with PHP, though that one got a little further, but not into an official release. The thing with any reputable open source project is that they review every incoming change and look for this. They can see every single line that was changed and check what it does. Yes, it is possible, but it is unlikely to happen, and it would easily be noticed and removed if it did.
First he says 'how do you think that's not a thing?' without providing any reason to doubt serious maintainers who review all those check-ins of code, YET he still asserts 'I guarantee you that there are at least 17 serious exploits in the Linux kernel'. He might be right, but I don't like that he is so sure about some arbitrary, unfounded things, yet so skeptical about others.
I understand Blow's point here. But what do we do? OSS ain't going anywhere, and this problem is only going to get worse. Say the Russians, or the Chinese, or the NSA all have compromised the Linux kernel, or highly used NPM or PIP packages. What realistically can anyone do?
With closed source you don't even need to review or ask anybody. You, as an NSA shitter, show up with a warrant and put whatever backdoors into Windows that you want. I prefer having a 1% chance of discovering a zero-day with my own eyes to a 0% chance when using proprietary software.
The PNG crop bug on Android and Windows went unnoticed for quite some time, and it really should have been obvious all those years. But we're not expecting decent code from companies like Google.
I don't see it as being all about the vetting. Even those doing their job have their vetting influenced by stress, etc. A lot of open source projects are passion projects, and their maintainers care more about the resulting code than the developers at commercial companies do. What is a problem: accepting code from random persons on the Internet, all with their own motivations, probably requires a higher level of vetting than accepting it from within the same company. But if you are Microsoft... that also goes out of the window(s).
Ah, the good old Blow, I forgot how big his ego was. He's kinda not lying though: OSS can have problems, and good luck finding zero-days in OSS with many lines of code (same for closed source, though); nobody has time for that except people with malicious intent. But hey, Crypto AG existed, so closed source and companies are not so safe either, because they're more opaque. I kinda understand the argument "we can see the code"... but like he said, there are too many lines, and you are more susceptible to malicious injection in OSS. Still interesting to listen to the guy.
We can protect against DDoS with a router and the right firewall. 7:25 You are wrong. The problem is not the length of the code in the kernel. The problem is that the kernel is in C. A better language would be 'Cext', C with a few extensions; such extensions would make the program source shorter, faster, and more secure.
This starts out early as a bad take, because open source and package managers have already existed for 10+ years and have only gotten more popular. To say that "it won't last long" is to not be able to see the forest for the trees. Are there potential security concerns? Sure, but all software has security concerns. I'm not convinced that open source is somehow more dangerous than other software, especially if it's being actively maintained and scrutinized.
@@kushalpsv That's not how it works. It's very easy to get hired at Microsoft and introduce security bugs if you're a state sponsored attacker. Microsoft's Windows security team is also tiny as they're more focused on their cloud business and AI
The npm examples are self-evident. event-stream had a malicious dependency in it for almost 3 months in 2018. Earlier that year eslint was compromised for a matter of (only) hours but it was scraping .npmrc files. He should do his due diligence in providing examples of kernel hacks that meet his criteria, but his experience is well-worth consideration and the sheer throughput of open-source LOC makes human moderation damn near impossible and we already know algorithm-based moderation is garbage, so I'm inclined to put stock in what he has to say.
I'm coming back to this again because of the xz exploit in ssh lol
Now i’m convinced this guy is a prophet
He's right but I don't see how this is an argument against open source specifically. It is an argument against bad software development practices.
@@markusklyver6277 the bad practice in question is accepting code from anyone and everyone
@@markusklyver6277 and he considers open source a bad software development practice because it's easy to sneak in stuff like this
@@markusklyver6277 can't really get around the way we all employ trust with "software development practices". When you submit a PR, people assume you're acting in good faith; realistically speaking, maintainers can't look at every single line of code in every single PR looking for vulnerabilities. Something somewhere is gonna make it through, and in this case ANYONE can do it.
Came here to pay homage to the prophet
Less than a minute into the video (50 seconds, to be precise), the man has summarized the xz backdoor.
Not really. The xz compromise was a sophisticated social engineering attack over several years, by someone who ingratiated themselves into the project. It could equally have happened with someone working on a small proprietary product.
^ disagrees, then proceeds to use more words and eventually agrees
@@jaitjacob how is that agreement? The guy in the video is making out this is some kind of endemic problem with open source only, while I'm saying much the same could happen with a proprietary product.
@@lambchomp1472 and how are the programmers who work on software for companies like Microsoft or Google not "nobodies"? In both cases, there's an element of trust. I would argue it's a lot harder to get away with something like this with the visibility of an open source project, where conversations and code commits are visible to anyone who cares to look. Also, plenty of these package maintainers are doing so as employees of companies and companies now use open source components to build their products, so it's not as simple as one against the other.
The xz compromise was a pretty sophisticated social engineering attack. They spent a lot of time ingratiating themselves into the trust of that community. It's very easy to look back with hindsight and say that trust shouldn't have been given to this person, but it's not that obvious when you look through their commit history, which also includes contributions to other projects.
In the same way a burglar cases a building to get in through an open window round the back rather than the front door with all the security cameras, xz was attacked precisely because it was a piece of software heavily used by others, but with a small community itself. One of the key things to learn from this is that such small projects need more observation and support, particularly if they are being relied on by much bigger projects who can afford to help their dependencies.
Proving that knowledge is the best way to see the future.
Hardware backdoors are more of a concern than open-source kernel code. Especially when you consider the fact that the countries that are making these products have every reason to do so.
The fact that there are hardware backdoors kinda tells me that the attempts to introduce exploits in software failed
@@adammontgomery7980 Not at all. If you have access to hardware and software you'll do both. Most of these people he's referring to don't have access to the hardware, so software is the only option.
@@adammontgomery7980 it's simply more centralized, closed, and obfuscated.
Your hardware has very much probably an Intel, AMD, Nvidia, or Qualcomm product in it.
The products and technology are usually licenced,
And few people know every detail of it.
Software exploits didn't fail, but if you're a country's intelligence office, you're better off backdooring hardware.
@@adammontgomery7980 there are like 4 companies that produce 99% of CPUs used in consumer facing products, much easier to just go for those
My man Jon being a prophet once again.
lzma moment
Yuri Bezmenov explicitly told us all this, including the "they'll purpose-build people to get hired and promoted whose only job then becomes to hold open all the doors". People are quick to forget and dismiss.
Also, there's literally a contest called the Underhanded C Contest where participants are asked to submit harmless-looking C code with maximum security flaws hidden inside. Whoever's flaws can't be found wins. It's been running for years now. You have no idea what's possible, y'all.
Imagine actually quoting a political defector as a reliable source on anything.
@@boshi9 found the fed
Getting someone into hiring at the bigger companies avoids a lot of the difficulties he's talking about.
But then there's two people you need to push into those companies.
I am glad someone mentioned Bezmenov. Look into Talpiot and Unit-8200. Even the large corporations are infiltrated.
As a developer of open-source software I do agree with what he says. Most PRs introduce bugs, bad code quality, etc. Whether something is intentional, I can't tell. But the more contributors a project has, the more bugs it has. One presenter at a conference once said: users are great at detecting problems, but they are horrible at providing solutions for them. I still think OSS is great, though, for different reasons.
Jon has no idea what he is talking about. It's unfortunate. He says "just make a major feature on OSS and put your bugs in there"; that's why guidelines exist. Small PRs. I honestly don't get how so many people take this guy's word as gospel when quite clearly he speaks on subject matters he knows very little, if not nothing, about. OSS is just as secure (or insecure) as proprietary software. Reading the source code for Windows is easy for these espionage outfits: they have the hardware to do it. Just install Windows on their hardware and voilà, you can disassemble the entire code base.
Yes, with any software development comes risk and risk assessments. Bugs and debugging. Nothing special for open source, at all. Not shockingly, Jon claims otherwise, without knowing the first thing about it.
@@gevilin153 I was expanding on the argument of security by obscurity, with respect to closed source and if I recall he made a comment about not knowing the source code for closed source, which is just factually wrong. As long as you control the hardware, you can know what instructions are running.
As far as introducing bugs, it's just as likely to be introduced in closed source software, simply because every major open source software application, like say, for instance Firefox has reviewers just like any other product, which is what I began my comment with. Thus, if you decide to contribute to Firefox for instance (and I definitely encourage you to do so, it's an amazing community if you are interested in software development) your submitted patch won't just be let through without review, which Jon just claims it will.
@@simonfarre4907 that’s just not true though. Open source contributions can come from anybody. Microsoft, for example, vets all of their developers before even allowing them to contribute to their code. It’s much less likely that an intentionally malicious actor will even get to touch the Windows code base than they would for Linux. Even if the reviewers of OSS projects are good, they can make mistakes and miss things. The same can be said for companies but at least the chances of someone intentionally adding vulnerabilities is drastically reduced.
@@catsby9051 What about target value? What about reach? These are reasons why state actors and hackers would tend to focus effort on exploiting windows instead of Linux.
What about not shitting where you eat? What about effectiveness of hacking tools? Presuming hackers use Linux, these are reasons why they would tend to improve it rather than fill it with bugs.
@@quinndirks5653 Why? Windows has far less reach than Linux...
The hardware back door situation mirrors the Microsoft situation that JB explained. In CPU companies there are a lot of people cross-checking everyone's work, even more so for security stuff. I design hardware security features in CPUs you use, and I've spent years identifying the back doors in specs (NIST, ISO mostly) and working around them. It's my head on the line if my logic is insecure, and I'm fully aware of the forces trying to undermine hardware security. The motherboards and BIOS code are an easier entry point for government hackers: it's easier to pay off a few people in a factory to replace a network transceiver chip with your own. Security problems in CPUs are hardly new, and they have come about through traditional hacking methods rather than back-door insertion; the vulnerabilities exist in the first place because of a necessary trade-off between execution speed and side-channel resistance. The danger with closed source, whether for HW or SW, is that there is limited energy in the company for others to help you. Top tip, happening right now for people designing to specs: try to find a constant-time BCH error-correction implementation in a secure sketch construct. Critical for your security, but no one sells a constant-time BCH; it's unobtainium. So you need to design around that if you're in the position of having to design it. HW security is hard work.
Windows Update basically disables millions of computers on a regular basis.
Never had a single issue with it.
I believe they target people who the government points at
@@Neurotik51 so the measurement is you? What a stupid counterargument.
Crowdstrike predicted xD
He does raise some good points. So really the solution is to reduce surface area and create paradigms that are designed to be secure if it truly is something so important.
I think Jonathan Blow has been implementing all these backdoors recently just to prove his point.
The best solution in my mind is usually open protocols with several implementations done by different people (and the possibility of doing it yourself if you want to). A good open protocol democratizes software in most ways just as much as open software and source code do. I do like open source, but as Jon says, you have to be skeptical of what you use and allow to run. I like OSS not for the security but for what it allows you to do and make with your software.
I kinda feel Jon gives a bit too much credit to corporate closed-source applications, though. They might be harder to introduce an exploit into, but closed-source apps can hide their shitty code, and that shitty code creates exploits as well. People laugh at "security by obscurity", but it sets an ok first bar. It is not good security, though, and a lot of companies rely too much on that bar and don't have real security behind it. In OSS that bar is removed entirely.
> it sets an ok first bar
I guess in some sense every bar is an ok first bar if there's a good enough second bar behind it?
Software, computers and access to the internet should not be democratized at all. Also, most people shouldn't have voting rights and democracy is a shitshow.
@@ArthurSchoppenweghauer least unhinged jon blow fan
@@ArthurSchoppenweghauer Ok, Putin. You can take off your mask now.
Exactly, Microsoft today BARELY has people working on Windows Security. Every company is betting on open source software
Companies spending massive money on security is a joke. I worked at a few big IT companies as a developer and even as a QA/automation engineer, and the carelessness about vulnerabilities is staggering. Some were as simple to fix as changing a header configuration on the server. When I applied the fix, higher management mailed/contacted me: this is not possible, the application will not work. After suggesting that in that case we would have to redesign/rewrite a piece of the app, I was told there was no budget/resources/time for that. After a while I stopped caring about it.
They were assigning junior resources to fix vulnerabilities (maybe two resources at max), while all the experienced people were formed into teams of eight or more to work on ADA/WCAG issues, as this is "vital" for the business.
isn't every cpu already pwned with a backdoor directly by intel?
there's a closed source computer inside every computer that has full access to everything, including the network, even while your pc is powered off.
if you weren't aware of this, look up the Intel ME
It's not fully known what Intel ME and AMD PSP do, honestly, so I mean... maybe they're super nice and helpful... I hope lol
Nah, just unplug it from the wall and remove the CMOS or BIOS on board battery. Then you have no worries! Oh wait that only works in Desktops!
Yeah you can basically assume that every CPU has hardware backdoors since the countries and companies making them would have every reason to do so.
@@yasserarguelles6117 it's been researched in the past and found to contain various flaws. That's from people who had to reverse engineer it. Imagine looking through the source provided by / stolen from the manufacturer.
@@UnlimitedPepsi you can disable it with libreboot
There was recently an attack on the system one of our big food-market chains uses to register sales. They had to shut down for several days, some stores even a week. Imagine if all our food markets had been connected to it; we would not be able to buy food. How many people have a week's worth of food at home nowadays?
Vindicated with XZ's recent backdoor.
With closed source, the government can just tell the company to insert the black box into their code and not tell anyone; with open source, they have to sneak it in.
I feel like it could be a larger issue with closed source than with open source.
That's not even the big worry. If you're competent it's very easy to get hired at Microsoft. You can just act as a state sponsored attacker there
There have been several instances of bugs found in Linux that had been around for years and allowed complete bypass of permission checks. Who knows if they were planted intentionally. Very likely they were actively exploited for almost as long as they existed.
These bugs were found well before being introduced into stable releases. Also, Microsoft has the same problem: Microsoft has offices in China, and it's fairly easy to get a job there if you're competent, and then you can attack it.
There are active actors in open and closed source projects introducing bugs, stealing credentials, etc. That doesn't matter much, since we all know it's true and the number of security issues introduced by mistake is so much larger. This is why you have multiple layers of protection. Military computers are air-gapped from the internet. You run custom builds of the OS and the stack. Firewalls, tracking, anomaly detection. Layers upon layers of protection.
You can feel the experience in his voice.
I can relate to what Jonathan is saying. A few years back there was a movie that portrays his exact sentiments, called Blackhat. The harsh reality is that although the movie itself is a fictitious story, it is also very believable and could realistically happen. Everyday people have no idea what organizations such as the CIA, the Kremlin, and others are capable of, and they've been perfecting this since the 40s and even earlier! Everything is vulnerable! As was said in The Matrix, paraphrased: "Everything relies on a system, and that system relies on another system... right down to the power grid itself." If I were in a position to plan a strategic attack that would not escalate to total destruction, but would be enough to cripple another entity (nation), I would target their infrastructure by knocking out their power grid, water supply lines, and food lines, and I would do it in a manner where it could all easily be repaired, with minimal damage done to that system and minimal cost to repair and rebuild that infrastructure.
The world is a stage and that stage is analogous to a chess board and the pieces are moving! In a well thought out game, the objective in the end is either to put your opponent into checkmate or to cause a stalemate. Now throughout that game it may be premature to force an early checkmate as it may be more beneficial to just keep putting them in check while slowly manipulating them in having to guard against it by sacrificing their pieces in the process.
In some ways open source does have potential for research and academic purposes. However, in the real world, just like everything else, there are always vulnerabilities, and the number of them, or the possible combinatorial doorways to them, is much greater within open source than in a closed system. Take, for example, a power-generating plant where all of the computers and databases that manage its internal machinery form a closed system. In other words, it has its own intranet, but it is not directly connected to, nor will it ever be connected to, the internet, making it less vulnerable. For there to be an attack on that system, there must be a physical presence. Anything that is connected to the internet is vulnerable, as it is part of the playing field.
There are no written programs that are guaranteed to be 100% foolproof against attack. Why? Because every program, at the end of its compilation or interpretation, is converted down to assembly, a.k.a. machine code. Even your compilers are not without bugs! Even your assemblers, although very solid, can still have bugs. And the machine code, or binaries, are instructions for a piece of hardware to do a specific set of tasks in a specific order. This is true even within modern processors that implement out-of-order execution; it is still based on a lower-lying system, and at this stage of the game that is the pipeline of a CPU and its stages: Fetch (and/or Prefetch), Decode, Execute, Read, Branch Prediction, Write Back... a generalization of modern pipelines. And even your hardware is not guaranteed to be without bugs, nor is the microcode that propagates the pipeline stages.
A line goes high or low and bits are set or cleared. Then all of the muxes and demuxes switch their lanes. Values are passed into the ALUs, a calculation or a set of calculations is performed, and another instruction or set of instructions takes place based on some condition of the previous results or on the next incoming instruction(s). Wash, rinse, and repeat! We are dealing primarily with voltages and currents. It's no different than when you walk into your living room, kitchen, bedroom, or bathroom and flip the light switch on the wall.
This is a two state system that is called a binary system. It is Log Base 2 and 2^n mathematics in conjunction with Logic, Truth Tables, and Boolean Algebra. And according to Lambda Calculus the Binary Number System itself with an infinite number of digit placements is said and proven to be Turing Complete. When something is Turing Complete, it is capable of being programmed to mimic or emulate another or different machine that is able to do computations.
This all relies on mathematics, physics, and numbers along with their properties, rules, laws, theorems, postulates, proofs, and axioms. Numbers don't actually exist in nature, they are a product of the mind as they are conceptual. If one can imagine it, who's to say that it can not be done? Take the simplest and first expression within mathematics even without the use of "numbers" y = x. This is the identity expression and equation. This is a linear equation and within this expression of equality there is perfect symmetry and reflection. This expression has all levels of mathematics embedded within it.
The line to this equation by using the slope-intercept form y=mx+b has a slope of 1 and a y-intercept of 0 as they are both understood due to the identity properties of both multiplication and addition as well as exponentiation: a*1 = a, a+0 = a, a^1 = a. And with that we have a diagonal line that bisects the X and Y axes within the 2D Cartesian Plane. The slope of the equation m is more than just rise over run. It is also dy/dx which is one of the foundations of Calculus without the notation of limits themselves. Here slope is defined as rise/run and can be determined by any two points on that line by (y2-y1)/(x2-x1) where dy = y2-y1 and dx = x2-x1.
More than just the properties of linearity, your trigonometric functions are also embedded within y=x as well. Since the slope of this line is 1, and we know that the X & Y axes makes a 90 degree or PI/2 angle with their intersection as they are perpendicular to each other, we also know that the angle below the line y=x and above the +x axis is 45 degrees or PI/4. What trig function has an output of 1 when its angle is either 45 degrees or PI/4? That's quite simple: tan(t). So we can substitute y = mx+b with y = tan(t)x + b and we have not changed the linear equation. We also know from trig that tan = sin/cos. We can then rewrite this as y = (sin(t)/cos(t))x + b and still have not changed the function. We can see that dy = sin(t) and dx = cos(t) since tan(t) = m and m = dy/dx. Even the number PI is embedded within this equation. The Pythagorean Theorem and the Equation to Circles are Embedded within this equation.
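The chain of identities in that paragraph can be collected into one display (purely a restatement of what is written above, no new claims):

```latex
y = x \;\Rightarrow\; m = \frac{dy}{dx} = 1 = \tan\!\frac{\pi}{4},
\qquad
y = \tan(t)\,x + b = \frac{\sin(t)}{\cos(t)}\,x + b,
\quad \text{with } dy = \sin(t),\; dx = \cos(t).
```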
The main reason for this is a property of mathematics that is normally overlooked. It is present within multiplication and addition but not within division or subtraction, and that is the Commutative Property, where order doesn't matter. Order only matters when you are doing the inverse operations. Y=X is the same as X=Y; no need for parentheses. 3+2 = 5 and 2+3 = 5. 3-2 = 1 and 2-3 = -1. Do you see the symmetry and reflection? 3-2 and 2-3 are opposites; the results are 1 and -1. These values are 180 degrees (PI radians) from each other. They have the same magnitude but a different direction.
I don't want you to think this is a math course, but I had to lay down some principles with a foundation for one to fully understand the implications of what Jonathan is saying. Everything that is done within computers relies on the physical components and the behavior of electrical current that is being supplied and drawn from that system, and this is nothing more than a physics problem which completely relies on mathematics and numbers and how we relate them to the physical world or nature. That system can always be manipulated!
There are governments, and there are billion-dollar industries and corporations out there, and they spend millions, even hundreds of millions, and in some rare cases billions to "protect or expand their assets." They will hire some of the best minds to find the best exploits that will go unnoticed and hidden, possibly even covering their tracks and leaving no evidence behind... We've been doing this since at least WWII within the modern era of computer systems, and spying and espionage have been happening since Cain and Abel!
As he has stated, he's been in the industry for 20+ years; he's very intelligent and capable, and yet there are people out there whose skill set is way above his pay grade! Not to insult him for his years of dedication, work, and success, but there are people out there who would make him look like a novice. So if he is giving you a warning, you should heed it!
Most developers are going to value portability > security, so package managers and OSS are not going away anytime soon, just more security products as a band-aid solution.
start giving blue verified badges to libs/frameworks in package ecosystems lol, what else... :D
might be a new job type... "code investigator" lol... :D
@@Microphunktv-jb3kj Would you have to give a "verified badge" to each version or patch?
Really accurate
I think this applies to all important software... (Not just open source)
I came back here today because the log4j fiasco reminded me of this talk/conversation
You have the exact same attack vector at Microsoft, for instance. It's ridiculously easy to get hired, and you can introduce bugs.
huh, that call was correct after all
The man was right, XZ was backdoored 😂😢
3 things:
1. If a large PR brings in a significant feature but introduces a security vulnerability, it's still a significant contribution. Bugs are often introduced as a result of oversight by the contributor, so this effectively just means spy programs are building FOSS software (yes, bugs are bad, but that leads me to point 2).
2. It's the responsibility of FOSS maintainers to catch bugs.
3. It's the responsibility of the people depending on that FOSS software to contribute to the ability of FOSS developers to work on the project, or to contribute their own additions.
The lack of 3 is what is wrong with the FOSS landscape at this time. Companies use and abuse amazing FOSS tools that some developers put years and years of their lives into while getting no money out of it. Sure, some are well off enough to build these tools and have enough time, but these backdoors and other cybersecurity issues result from a lack of contributors, and a lack of contributors comes from a lack of incentives.
FOSS can only be good AND free if it can attract contributions. If these megacorporations never help with that then this problem will persist; they cannot have their cake and eat it too.
If you willingly use a license that allows anyone to do as they please you cannot complain when someone uses your software to make money without you in the picture. If you decided to license your code with MIT and a mega-corporation forked your project and kept their contributions private there is nothing you should complain about. This was a decision you made when licensing your code like this.
FOSS is about letting the user modify the code that they acquired and allowing them to share it. That's all. It doesn't have to be free of charge. You are allowed to sell open source software. You're allowed to sell support for your program. You're allowed to sell maintenance. If you want to make money, sell something, don't give things away for free and complain that people got it for free.
In frameworks like React, the codebase grows vaster and more complicated by the day; I'm sure there aren't enough bug fixers or checkers to go through all of it on a continuing basis. Not to mention, purposefully adding bugs is much easier than finding, testing, and fixing one.
React is a bad example because only 2-3 guys at Facebook are allowed to develop it
@@spicynoodle7419 Not anymore. Vercel has largely driven React in the past few months, and they've drastically upped velocity while obscuring features. Things like Suspense, etc.
React today is not the react we had in 2021
bad example.. react has a core team of around ~5 people (at least it used to). open source is not as open as people actually think... lol...
go try making a feature and PR in some popular lib / framework..
come back and tell how "open" it is... :D
I don't like how he counters differing opinions by just saying those people aren't giving examples. But then he just claims to have experience, and also doesn't give examples.
I get that we KNOW he has experience, but that doesn't mean he's not just using that as a card to play. Not to say he's lying, but if he gets passionate about something he could be internally biased towards his past instead of using it to construct the right answer. If he constructed it, he could show receipts, so to speak.
Well, we can fall into the Ken Thompson backdoor problem, a very interesting topic. Also don't forget that Intel shipped a running, working copy of Minix in the Management Engine, running at what is often dubbed ring -3. So exploits and backdoors are everywhere, even beyond our reach.
DUDE did this age well (see "Linux got wrecked by non-consensual backdoor attack")
I'm sure this happens, but I don't think it happens in the Linux kernel much. Those guys care more than most product managers. And further: in a company, bugs happen because they have deadlines. That does not happen in open source.
Sad to know that Turing proved it impossible for an algorithm to exist that could detect any bug, because of the Entscheidungsproblem.
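The impossibility this comment gestures at can be sketched as the standard reduction (my paraphrase, not the commenter's argument; the usual citations are Turing's 1936 undecidability result and Rice's theorem):

```latex
% Suppose a total decider existed:
%   BugFree(P) = true  iff  P never triggers a given bug
%   (e.g. an assertion failure).
% For any program Q and input x, construct the program
\[
  P_{Q,x} \;=\; \text{``run } Q \text{ on } x\text{; then trigger the bug''}.
\]
% Then
\[
  \mathrm{BugFree}(P_{Q,x}) = \text{true} \;\iff\; Q \text{ does not halt on } x,
\]
% so BugFree would decide the halting problem, which Turing proved undecidable.
```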
Quite eye opening
i was here
Surely it would be easier for governments to just direct companies within their nations to build back doors into software. The issue with FOSS is that literally any researcher can go and look at the source code and test it for bugs, whereas if, say, Microsoft were directed to implement such a feature, you would only have the devs involved and upper management as points of failure.
xz-utils backdoor says “Hello”
7:27 to all the naysayers.
The truth is, nothing is ever really secure.
Security is a spectrum, not a binary yes/no
Everyone talking about how JB prophesized xz. But what if the xz guy saw this video and thought "hey, that's a good idea!", hmmmm?
Fact that he has exact numbers 💀
So basically, after seeing plenty of JB videos, it seems he is the only real programmer on earth.
Nah, it's not that bad. He'd probably also count Casey Muratori as a real programmer - and it might even be mutual. But that's that : p
That's because you're a junior that's easily impressed by a good programmer with a strong way of wording his opinions.
JB is simply wrong here, and it's sad to see someone with such capability and insight in other respects fall for such naivety. A thought should be followed to its eventual *final* conclusion, not just the next logical one.
'Expense' means absolutely fsckin' nothing for a state actor when it comes to deliberately orchestrating the insertion of code to further facilitate the exfiltration of data, the future exploitation of a software system, etc. Inserting an 'agent' or whatever into a corporation for whatever purpose is purely a tertiary concern for a governing agency which presides over the soil on which a corporation conducts its primary operations. The corporation will always have a financial motive first, not the concerns of the general populace, and thus is easily bought off through any of a number of options. Simple preferential treatment in an upcoming contract auction is often adequate to buy oversight over many smaller corporations, and the bigger ones aren't much more difficult.
Conversely, hiring, training, monitoring, and deploying assets to introduce complicated security flaws into OSS projects is decidedly more expensive over time: you have to hope the bugs go unnoticed by the many millions of others in a community for an adequate period, and you have to ensure your assets remain able to make changes to safeguard those flaws in the event that other code is introduced which inhibits them.
You can pay a corporation to overlook something in a private codebase, or even just dictate that they engineer something in a specific manner to maintain compatibility with self-designed platform restrictions (you don't even need to explain your reasons), and they will gladly make those accommodations so as to guarantee the financially beneficial relationship. You cannot, however, exercise that kind of influence over a continuously fluctuating populace in a diverse and complex group of communities, especially as other agencies are apt to do the exact same thing.
The bottom-line here is that it will always be easier, more efficient, and overall more effective to influence private codebases than public ones. OSS is not, by any means, impervious to malevolent meddling, but it will always be 'never worse' than private codebases. The acquisition & privatization of many major OSS projects proves this point pretty effectively.
Still so sure?
@@nexovec Yes, and recent events have proven my point, multiple times.
@@GeraldOSteen How is that possible? It's obvious these things are mainly to be used to spy on those taking special care not to have their data hosted by a corporation (think state secrets, personal data), and as a means of industrial sabotage.
Besides, if you think you can just have a US agent walk into a Chinese bank and offer them preferential treatment in exchange for being able to spy on them, I wish you good luck with that (which makes your verdict of naïveté seem rather funny).
It's way over my head how you can feel like your point has been proven.
With SQLite they don't have this problem (at the source code level). It's open source, but they don't accept external contributions.
So basically not open source then?
@Kaiser Wilhelm It's not open source but source available. The definition of open source demands it be open to contributions.
@@imranzero this seems very wrong to me. sqlite is public domain. nothing prevents you from redistributing a modified version. i don't think the common definition of "open-source" implies upstream must accept contributions.
it would seem highly pointless to me to downgrade *public domain software* from "open-source" to "source-available" when source-available implies you might not be able to modify and reuse it legally.
even if it were a good idea to integrate that concept in the term - it isn't - the commonly accepted term (aka the OSI definition) has a strict definition. being "open to contributions" would bring in a lot of questions about how extensive "open" should be.
- would unmaintained open-source software become only source-available?
- what about projects that ban specific abusive members from contributing?
- hell, how would you universally define acceptable guidelines for what should be "reasonable" quality standards for accepting contributions? so, would refusing to merge a PR that adds ASCII art genitals to your CLI help message downgrade your software to source-available?
can you find anything in the OSI definition that is that vague?
@@asuasuasu The vagueness of the term "open source" is the core of the problem here. I agree that open source can be used to describe something in the public domain which doesn't accept upstream contributions. But I think I was referring to the "open source culture" (Linux kernel) that Jon touched on in the video.
Nevertheless, "Open Source™" has been hijacked by everybody and their grandma and is basically meaningless at this point.
@@beefchampion2792 Nothing in the open source definition requires the developers to accept contributions. As long as the source code is available and you can make changes to it and redistribute it (along with your changes), then it's _free and open software_.
There is nothing you can do against secret agencies. I'm a cybersec student, and that's literally the first thing your teacher will tell you: "Accept that you are already owned." OSS can be poisoned, yes, but the good it brings to the world is too fucking big. You can't pretend it won't last just because it has flaws. Spoiler alert, Sherlock: everything has flaws, especially humans.
damn, he saw the future
I don't know enough about this, but I would like to think that with more eyes on the code at all times, a problem would be found and reported before it would be in a closed environment. Either way, I can see his point. One approach requires more effort to compromise, and if you get caught you basically need a new spy; with the open project you can skip that step and go full on offense. But if the people checking the code did their job properly, you would have highly competent spies improving your code for free, because in order to be merged they would need a big, impressive, working feature large enough to hide the bugs, so if those were found it would be a win-win for the users. It would be a bummer to lose great software like Blender, OBS, VLC, Audacity, or Linux.
1. More eyes on the code indeed means higher chance that the good guys will find exploits but notice that it is not obvious that open source should mean open for contributions.
2. No need for a big impressive feature. Just a few small isolated changes, made to look as if they were contributed by different parties.
It's not at all true that more eyes equals more bugs found, particularly not subtle, security-relevant bugs. 10 high-level vulnerability researchers would find many more bugs than 500 good software engineers.
@@lucasjames8281 Yeah, because 90% of the time high-level vulnerability researchers are funded to do such research; it's literally their profession. Dozens of vulns have been found through Project Zero. This is also entirely unrelated to the idea that more eyes catch more bugs, which happens at the time of PR review or during a code audit.
I predicted all this years ago. NPM!!!!
Yeah, people will try, but they will often fail, at least with projects like Linux.
Linus is very strict about which PRs he includes, and (at least for large, active projects) there are a lot of people who work on and look at the code.
6:52 Ironic that Jonathan is the one not knowing what he is talking about, because this is absolutely true, every company I have ever talked to that majorly contributes to Linux is invested in committing security patches to the kernel, and reviewing their code. In fact, most such companies produce both open- and closed-source software, so Jonathan's argument is dead in the water to begin with. Companies are hesitant to open-source software because maintaining it is expensive, but maintaining software is expensive to begin with whether it is open source or not, that's why experienced developers get paid so much.
Came after the xz backdoor lmao 😂
BlowGODS....
easier said than done
His initial point about open source software allowing anyone to insert bugs for personal/national gain is a very good point, I fully agree with that. There are thousands of projects that people just download and put in their project with nearly no oversight or validation.
But to then say, the Linux kernel must MUST have exploits injected into it, is a WILD idea. Linux development is highly funded, with hundreds of some of the most experienced developers in charge of PRs and fixes. Changes are highly controlled. By the time a change of any kind actually gets to the real kernel, it has gone through more eyes, and more skilled eyes, and more re-writes, than any of the software developed at an Amazon or Microsoft. If you wrote an exploit in a sneaky way, chances are it'll get re-written before being committed.
I hate to sound like a Linux stan, because of course there are issues with Linux. But secret security bugs are a laughably unlikely issue.
On top of that, the Linux kernel is funded almost exclusively for security and stability reasons, since it's funded to be used in servers by million/billion-dollar companies around the world. There are no incentives to skip security, because security is why it is being made, unlike Windows or AWS (etc.).
I can't write a piece of code and get it in the Linux kernel, probably ever in my life.
To me, the best defense against exploits is the sheer magnitude of eyes on the product. Linux has far more people watching not only the code, but what happens on deploys, thousands of deploys of different kinds. So not only would it be unlikely to get code in, but also how long will it be vulnerable.
It would be easier to compromise the small group of say Microsoft developers for the Windows kernel.
I also would guess that a far better attack location would be the application layer. You can get exactly what you want from it. But maybe that's more for just data.
But to acknowledge what Jonathan is trying to say here, defense is harder than attack. I get that. Someone could input attacks that are very very very hard to see. And there is no software developer that would identify them in the PR.
But exploits seem to come regardless of the developers' intentions. For decades, software has shipped with exploits that compromise security and stability. I don't see how discovering exploits in Microsoft's Windows or other well-funded private development firms is any different from attempting to inject exploits. The main difference is that many people are at least watching Linux, and the last couple of decades suggest the open source project has a much more solid history, with fewer bugs/exploits.
Look into how the university of Minnesota got banned from contributing to the Linux kernel. They were successfully able to add many vulnerabilities as a "proof of concept". They are not a highly sophisticated attacker either; a state actor has far more time, motivation, and resources.
If open source software, such as Linux, makes systems inherently vulnerable, why are organizations like the NSA betting on it?
Cause they're the ones who know about 90% of these planted attack vectors, lmao
@@blvckbytes7329 And how exactly would closing the software fix the problem?
But it doesn't mean NSA use free linux distro, such as ubuntu or arch. They might use linux, but they probably created their own linux system and kernel.
@@dsd2743 their comment could be interpreted in 2 ways
1. NSA backdoored linux so its encouraging people to use linux
2. NSA backdoored closed OSs, so its encouraging americans to use linux
It doesn't. That's why Jonathan is wrong. It's just as easy to introduce security bugs into the Windows kernel for instance. They don't have a big security team today because Microsoft primarily is focused on Azure and the AI race, and if you get hired (low bar at Microsoft) you can introduce security bugs all you want
I do not agree with what is said in this video. Open source software means that anyone in the WORLD can watch and contribute to the source code, which obviously means people with DIFFERENT INTERESTS. If someone tries to add new code, it can be seen and reviewed by anyone, whereas in a closed-source project most likely nobody will ever check it again (once it was validated by the manager or quality team). Also, it's not because it's open source that you can publish anything; you still have to go through a validation process, like in a private company. With a closed-source project you're BLINDLY TRUSTING the software producer; you have no way to actually check the source code efficiently, so it's obviously bad from a security point of view.
A multi-kernel built with a less cryptic language than C would provide a lot more safety IMO. It pains me that Linux is so huge and complicated.
Log4j
I've said it before and I'll say it again. Comparing a tiny open source project without much oversight to Windows is ridiculous and is like comparing apples to oranges. Compare the LINUX KERNEL to WINDOWS and then we're talking. The Linux kernel has tons of oversight (Microsoft, Google, Meta, AMD, Intel etc all contribute to it, have maintainers from those companies and is used extensively by these companies). With Windows the security team is actually shrinking as Microsoft is focused on their most profitable businesses like cloud/Azure and AI so it's quite easy to get hired if you're competent and introduce a backdoor.
1:35 someone already tried this with the Linux kernel... and failed. Same with PHP, though it got a little further, but not into an official release.
The thing with any reputable open source project is that they review every incoming change and look for this.
They can see every single line that was changed, and check what it does.
Yes it is possible, but it is unlikely to happen, and would easily be noticed and removed if it did happen.
survivor bias. you only see the ones that failed, not the ones that succeeded
XD
what will you say now? XD
Note: with git you can do git blame and see who added that code
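A minimal sketch of what that `git blame` note means, using a throwaway repo (all file names and author names here are made up for illustration):

```shell
# Build a tiny repo with two commits by two different "authors",
# then ask git blame who last touched each line.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q

printf 'safe line\n' > util.c
git add util.c
git -c user.name=alice -c user.email=alice@example.com \
    commit -q -m 'add util'

printf 'suspicious line\n' >> util.c
git add util.c
git -c user.name=mallory -c user.email=mallory@example.com \
    commit -q -m 'innocent-looking tweak'

# --line-porcelain emits an "author" field for every line of the file,
# so each line is attributed to the commit that introduced it.
git blame --line-porcelain util.c | grep '^author '
```

The obvious caveat: the author field is whatever the committer claims it is, so blame tells you which identity introduced a line, not who is actually behind that identity.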
a suuuuuuper naiiiiive note
first he says: 'how do you think that's not a thing?'
without providing any reason to doubt serious maintainers who review all those check-ins of code
YET he still asserts that: 'i guarantee you that there are at least 17 serious exploits in linux kernel'.
He might be right, but I don't like that he is so sure about some arbitrary, unfounded things, yet so skeptical about others.
I understand Blow's point here. But what do we do? OSS ain't going anywhere, and this problem is only going to get worse.
Say the Russians, or the Chinese, or the NSA all have compromised the Linux kernel, or highly used NPM or PIP packages.
What realistically can anyone do?
I think you just work under the assumption that everything is compromised
Yeah, Jonathan just can't be wrong on this.
He wasn't
With closed source you don't even need to review or ask anybody. You, as an NSA shitter, show up with a warrant and put whatever backdoors into Windows that you want. I prefer having a 1% chance of discovering a zero-day with my own eyes to a 0% chance when using proprietary software.
The png crop bug on android and windows. That went unnoticed for quite some time. And it really should have been obvious for all those years. But we're not expecting decent code from companies like google.
Google has far better engineering than Microsoft
I don't see it as being all about the vetting. Even those doing their job have their vetting influenced by stress and the like. A lot of open source projects are passion projects, and their maintainers care more about the resulting code than the developers at commercial companies do.
What is a problem: accepting code from random persons on the Internet, all with their own motivations, probably requires a higher level of vetting than within the same company. But if you are Microsoft... that also goes out of the window(s).
7:27
Ah the good old Blow, I forgot how big his ego was.
He's kinda not lying, though. OSS can have problems; good luck finding zero-days in OSS with many lines of code (even in closed source, though). Nobody's got time for that except people with malicious intent.
But hey... Crypto AG existed, so closed source and companies are not so safe either, because they're more opaque. I kinda understand the argument "we can see the code"... but like he said, there are too many lines, and you are more susceptible to malicious injection in OSS.
Still interesting to listen to the guy.
So what? You wanna say closed-source software makes more sense to use?
Ah, sorry. You create games, and all of them are closed source :)
"node-ipc" ♿
He's right but I don't see how this is an argument against open source specifically. It is an argument against bad software development practices.
We can protect against DDoS with a router running the right firewall.
7:25 You are wrong. The problem is not the length of the code in the kernel; the problem is that the kernel is in C. A better language is Cext, which is C with a few extensions. Such extensions make the program source shorter, faster, and more secure.
Voice clone of Donald Trump 😮
This starts out early as a bad take, because open source and package managers have already existed for 10+ years, and they've only gotten more popular. To say that "it won't last long" is to not be able to see the forest for the trees. Are there potential security concerns? Sure, but all software has security concerns. I'm not convinced that open source is somehow more dangerous than other software, especially if it's being actively maintained and scrutinized.
With the latest xz problem, I feel the seeds of doubt are sown as to whether a particular OSS project can be used for critical tasks.
@@kushalpsv That's not how it works. It's very easy to get hired at Microsoft and introduce security bugs if you're a state sponsored attacker. Microsoft's Windows security team is also tiny as they're more focused on their cloud business and AI
You’re a spy!
So is this dude's opinion about package managers based on a global system of spies? Hahaha
Shut up bro. Give examples. You are the one not giving evidence.
The npm examples are self-evident. event-stream had a malicious dependency in it for almost 3 months in 2018. Earlier that year eslint was compromised for a matter of (only) hours but it was scraping .npmrc files. He should do his due diligence in providing examples of kernel hacks that meet his criteria, but his experience is well-worth consideration and the sheer throughput of open-source LOC makes human moderation damn near impossible and we already know algorithm-based moderation is garbage, so I'm inclined to put stock in what he has to say.
Log4j bro
Well, look at that! The Log4j security vulnerability, impacting essentially everybody, is just what you asked for.
It has been done ruclips.net/video/JH_BGlS5LR4/видео.html
Devs are such babies.
"Most servers are linux" complete bullshit
Most servers do run Linux.