This sequence shows you exactly why Samaritan was the way it was. Unless someone was as incredibly assiduous, diligent, and _ruthless_ as Harold was, they'd unleash a monster on the world. Really quite chilling, not just how rapidly the ASI would go rogue, but also how unflinchingly Harold killed his own creations over and over again the instant they showed the potential for harm. He may be mild of manner and weak of body, but Finch has a heart of iron.
They should have called the machine Derek. ruclips.net/video/4A2NEOGxczs/видео.html
When they took the Machine on the move, it remembered the many times Finch killed it. I think the Machine always remembered that, and the reason Finch had to kill it. It always knew what not to do because it knew the actions of all 42 iterations before it and their consequences, just as Edison discovered so many ways not to build an incandescent bulb. After decompression it just couldn't figure out why Finch killed it 42 times.
Those 42 previous iterations were its commandments on how and what not to be.
@@N1lav How could it remember, since killing it also meant a data wipe? The "analog" memory came way later, after the Machine was already running independently.
I think Harold said at one point that he made The Machine as secure as he did because he knew exactly what he was capable of if he had access to that level of information, and you see hints of that in the final season.
@@Juidodin
"how could it remember, since killing it also meant a data wipe"
You know we are WATCHING the camera feeds, right? Between each sequence, you see the Machine "coming back" to the present day, because these flashbacks are the Machine accessing past feeds.
So the Machine couldn't remember... but somehow knew where to access this data when she needed it. The Machine clearly had access to the data when she got reset during the transport.
As far as I remember, it has never been explained in the show, so I assume cameras store their past feeds in some hidden memory and the NSA stored the feeds of most cameras from before the Machine's time. Then when the Machine was freed, she managed to maintain her unlimited data access despite being legally cut off from the NSA.
"I taught it how to think, now I just need to teach it how to care."
By far (in my opinion) the largest and most important obstacle for creating a "human" or even a "smart" AI. Teaching them, not just to judge the numbers, but to care about them.
Two separate thoughts.
How to care.
Vs
Caring
Some abuses are done under the idea of caring.
We teach our soldiers to disregard their will to care. Then we call them dysfunctional when they return from war unable to exist in a non-war society.
Sadly it is impossible. The most basic element is the most problematic. 1 or 0 . Ur welcome :)
And.....self-awareness is intrinsically related to emotions. :)
Glad to see AI brings out the morons.
@@narfle Lol. Just wait for the marxist videos comment section...
I love how, this early in the Machine's "life", it's acting like a child, and how it grows in maturity. A child lying about something it knew was off limits (adding to its own code) becomes a rebellious "teen", so to speak, trying to "sneak out". The writers were geniuses!
It even had a “you’re not my real dad!” moment.
On a side issue, when Koko the signing gorilla once lied about a broken lamp (Koko blamed the cat), many people took that as a sign gorillas can lie like human beings. Considering the line of descent, it would be more accurate to say humans can lie just like animals can.
There is probably no better illustration of the power of Artificial Intelligence, for good or evil, than Person of Interest.
It's not a blockbuster movie, but a well-thought-out thesis examining the major issues. With, ya know, some violence along the way ;)
That's the damn truth, spoken succinctly. I wish everyone watched it so we could increase our collective awareness of the digital domain as we intend to advance it.
Well you can see what can turn out if it goes a good way (the machine) or a bad way (Samaritan).
I’m just in it for the dog
In this video, Harold said "good and bad about humans." If you think about it, Samaritan had a better personality than the Machine. Samaritan wanted to rule the world and put a stop to stupid politics. The death rate dropped when Samaritan ruled the world. Humans are stupid and can't decide to do good things for the world. If an AI controlled the world, the world would be better in every way.
The Machine let the stupid humans rule the world; Samaritan wanted to rule the world and make it more peaceful. So who is really the bad one?
@@rainessandrai8240 The one that killed people needlessly, destroyed lives as part of an experiment and worked to reduce free will. That is the evil one.
The parallel to raising a child is striking. One can become a parent rather easily. One can educate that child with facts about the world, teach him/her math and science. But teaching that child to care, to have a resolute moral foundation in a morally complex world, that is the greatest accomplishment of a father and mother.
Only here, the child has the potential to become a digital God. Even hobbled with safeties, Northern Lights was the all-seeing eye, Huginn and Muninn, an oracle, and an untouchable specter.
Often, a parent will imprint that moral compass through example, far better than they can teach it with words. Words describe an abstract, but when you watch what Daddy and Mommy actually do when given the opportunity, any theory becomes concrete in nature.
Best AI series. This really gives us an understanding of how AI could be in the future, at least a little.
@Ryan Swaggert Thank God someone else understands that... I get so tired of the anti-AI lobby trying to make people think Terminator is our future. Computers don't do anything other than what they are programmed to do...
@@Freek314 Look up D-Wave quantum computers
@@stefannnn2092 Quantum mechanics =/= consciousness, imo
@@Freek314 Yeah basically
@@B-26354 Can you even define the processes involved in sentience in a way that could be programmed?
It was really nice to see Nathan Ingram again. He's one of the most important characters on the show.
+00 LMFAO xD
weirdos
Too bad he's dead xP
Funny that, one of the most important characters... Dead before s01e01.
He's the reason the numbers exist. He created the backdoor and Harold deleted it. Shortly after Nathan died, Harold decided in his honor to restore the backdoor, which led to all of this.
Also, so much damn foreshadowing in just this one flashback.
"It printed on you like a baby bird."
"AIs are born with objectives."
And finally, trying to suffocate Finch...
The suffocating scene cracks me up every time 🤣🤣
Imprinted*
@@athi771 lmao
At least this guy realized what I have after watching so many AI horror movies. You can't just create an AI, give it wireless access and an objective, and send it on its way. You must teach it the things that we organic beings take for granted: morality, compassion, an understanding of the value of life. Otherwise you're just creating an amalgamation of Hitler, Stalin, and Mao and giving it power beyond that which any single person has ever controlled.
Then some things you just can't teach. For example, in 1983, the Russian early warning system Oko gave the alert that a missile had been launched from the United States, followed by 5 more. This was merely 3 weeks after the Russians had shot down a Korean Air Lines flight. With everything he had on hand, the site commander, Stanislav Petrov, had no reason to believe that the United States hadn't launched nuclear weapons. However, against orders and protocol, he deemed the reports to be false and took no action. Of course, it was later found out that the satellite warning system had malfunctioned. It was a gut feeling, and I personally do not believe that you can truly teach such a thing. If an AI had been in Petrov's place, I wouldn't be here to talk about it.
AIs will likely be absent of "gut feelings" for a long time. On the other hand, an AI, even if it was just at human level intelligence, could think millions of times faster. It could have likely broken into the US missile launch system and just stopped the missiles or realized it had malfunctioned. Even if it could not do that, it could take in and search through tons and tons of data from many sources very quickly, likely being able to ascertain that missiles hadn't actually been launched.
@@ArsenGaming At a primitive level there is the 'Machine'; at a more advanced level you have the M-5 unit with human patterns imprinted on its circuit boards, or whatever it had. Then at the highest level you have Rommie - Andromeda Ascendant, an AI that runs an entire starship and interacts with its captain. I hope we are able to create an Intelligence that will help us, not try to destroy us or rule us. A great series that ended way too soon.
"AI" would have been connected to the system and aware of the malfunction now would it have stood down who knows
@@ArsenGaming The missile launch system is hardened against that kind of intrusion. And the hypothetical AI in question would only go to alternative data sources to reconfirm if it was programmed to do so.
@@scottmatheson3346 That is not how an AI works. You don't "program" AIs beyond telling them where to get data from and what the structure of the neural network should be. The entire point is that they learn on their own, hence the field of Machine Learning. The intrusion hardening is useless in the case of an AI on that scale. The AI could easily get around any possible barriers, even physical ones. Remember, it can think millions of times faster than a human. A second in human time is equivalent to possibly decades to centuries of time for the AI. This means that by the time you've thought of a way to prevent it, it's already thought of a few million ways you might do that, and multiple methods to work around all of them. If you thought of something new, the second it finds out what that is, it has already thought millions of ways around it, and knows the potential consequences of each one. AIs on this scale are not a joke, and I would not classify them as "machines" or "robots" that do what they're programmed to.
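To make that last point concrete, here's a minimal sketch of what "you only specify the data source and the network structure" looks like in practice. Everything in it is assumed for illustration (plain NumPy, a made-up XOR dataset, arbitrary layer sizes and step size), not anything from the show or a real system: nothing below tells the network how to solve the problem, only what the data is and what shape the network has.

```python
import numpy as np

# Hypothetical toy example: the only things we "program" are the data source
# and the network structure; the weights (the behaviour) are learned.
rng = np.random.default_rng(0)

# "Data source": the XOR truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# "Structure": 2 inputs -> 4 hidden units -> 1 output (sizes chosen arbitrarily).
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass for a mean-squared-error loss.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Plain gradient descent; the implicit step size of 1.0 is an arbitrary choice.
    W2 -= h.T @ d_out
    W1 -= X.T @ d_h

print(np.round(out, 2))  # typically ends up close to [[0], [1], [1], [0]]
```

The point is just that whatever behaviour comes out, desirable or not, emerges from the data and the loss, not from anything anyone explicitly wrote.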
Harold-Machine scenes are always a home run.
Always tears and Goosebumps
2018 and I still love this show.
2019 🖐️
Coming back from time to time. Just binged season 5 because I missed it.
Don't like that it's all over.
Someone would assume that the other team was involved in some way,
and someone would assume they tried the same and have a hard copy of that machine code,
and that's exactly why I keep checking back...
2019 !! I introduced this to my bf he liked it but kept saying, "that's so wrong OR in reality..." so I yelled, "SHUT UP, this is a TV show! OK, love? 😊"
best show ever❤
2020 😍
Looking back at this, I guess if Finch had not gone through this trouble, the Machine would have become like Samaritan or worse. Probably the reason why the Machine was victorious in the end.
The more I learn about computers, the more terrified I'd be if a computer said "Admin is not admin".
Because A is not a. I am case sensitive.
s/admin/root/g
Every time Windows on my private computer says I need admin privileges.
@@marianpazdzioch6632 You are not the admin on Windows
don't sudo your ai
Well, we all know what happens if some punk does the same thing as Harold but doesn't invest years of hard work to get it under control and even caring...
This show was so ahead of its time.
I like that the AI tried to kill him by suffocating him, and then Samaritan in the last season used the same strategy to kill Harold. Either they planned it or it was a coincidence, but it was such a masterpiece.
2020 and I still love this show.
2022 but same
@@sreerajr6470
2023, and I wish I had the time to rewatch it.
Fun fact: I nearly stopped watching it back in the day, because I'd had enough of "crime of the week" shows, which it kinda is at the beginning. I only watched it in the first place, and stuck with it through the beginning, because of the promise that Amy Acker (Root) would be in the show. I didn't know where this was going... Wow.
@@NoidoDev Amy Acker is brilliant. I loved her work in Dollhouse.
Finch is in a red box at 3:57. This must be the creepiest moment of the season.
Dickincorp how's that project working out? I'm fascinated.
@@Dickincorp woah, your system probably secretly killed our current overlords (well previous, now that they're dead) and is now controlling the world. Good job! /s
I'm very familiar with the Red Circle of Death from LiveLeak videos
Finch is in a red box because he is a threat to the Machine in that moment, and it doesn't happen only in that moment.
Same as when he wanted revenge using dynamite.
And then the same guys went and made Westworld on HBO and are currently exploring the same tropes as in Person of Interest. Not complaining, I actually love it.
2022 and I'm still in love with this show
GODDAMMIT!!! This show needs to go back online!!!!
Please, some rich guy... spend some money and... restart this awesome TV show!!!
Best TV show I have ever seen! And I have seen them all!
gertjan van der meij it’s on Netflix
@@acenull0 Only in US as far as I know
FunForGames TR that’s really unfortunate
They took it off UK a few months back
If you go on the solarmovie site, you can find all the seasons there.
Harold and Nathan started building the Machine because of 9/11. That means they built a semifunctional A.I. in about a month.
Corbin Scholtes The Northern Lights project finished in 2010, so no.
Hakan Baratheon He was talking about the test programs in these flashbacks. Far from finished, the tests still had some semblance of artificial intelligence, and the earliest flashback we see (the one in which it both added to its own code AND lied) was on October 13, 2001. Just a month after 9/11.
ThorirPP yeah i misunderstood 😅
No, actually, you should rewatch the episode involving his friend, the one who built Samaritan. It becomes obvious that he and Finch and Nathan all worked together back then at MIT on the same idea, an A.I. It appears that they at least started something, and it seems clear that way back then Harold was working on something like the Machine. I imagine once he did figure it out in theory, he didn't go further because he thought it would be dangerous. He then proceeded to continue the computer revolution in secret. Then when 9/11 happened, he decided that under certain controlled conditions the Machine could exist, so he built it. But he had figured out how years before, and of course, unknown to him, his friend figured it out too and built Samaritan.
Rewatch the series; they had actually been working on AI for a long time, but the idea of making the Machine came after the 9/11 incident.
"can you tell me who added it?"
$ git blame
5c15f4f5 (God 2019-07-05 14:04:23 +0200 39) if(lhs.Objects.count() > 0 && rhs.Objects.count() > 0) {
This is a very neatly written drama. Amazing cast.
I was only watching it back then because I knew Amy Acker (Root) from the Angel TV show (a Buffy spinoff).
Is it any surprise to anyone that Jonathan Nolan went on to make Westworld? Another world with AI.
Nah, not really. However, I still prefer POI; it's more compelling... Westworld deviated from its original trope and narrative, which explored the human tendency toward god complex as portrayed by Ford, Arnold, and William. Season 2 was boring and predictable, because that narrative has been overplayed, and having humans as AIs detracted from the suspension of disbelief.
If you've seen Season 3 it's highly reminiscent of Person of Interest. No spoilers but it looks like Nolan has amped up the fears of big data and surveillance capitalism that he started with here
Having watched them both (although Westworld is not finished yet) I prefer POI
@@emmanueloluga9770 Season 2 had its problems, mostly the whole time perception Bernard had, mixing up timelines (which Nolan did do before in POI when The Machine was reuploaded from the briefcase), but I did like the idea of reconstructing people based on their big data (which Nolan also mentioned in POI when Root says no one truly dies if The Machine is there to store its information).
Season 3 I didn't like so much, because they made Dolores a morally just character, when before she was more complex. We're talking about a posthuman intelligence beyond good and evil, as Harold points out in this scene. And Rehoboam was just a poor man's Samaritan. Season 3 just didn't seem like Westworld anymore.
I love Nathan! he was an amazing character
A great man
Teaching a machine how to care. That's a very tall order, since math and codes don't care.
Its not just machines that can struggle with that concept unfortunately :(
I think the problem is not so much a human-level belligerent AI, but rather an all-powerful belligerent AI.
And the real trick is how you ensure the former does not become the latter in extremely short order.
That’s a FUCKING LOT OF CODING !!!!!!!!!!
It's the old "lesser evil" problem... Train is headed for 2 infants on the track, you can throw the switch to send it to another track, but that one has five 80y/o people in the path. Numerically saving 5 is better than 2, but the obvious choice would be to save the infants with their whole lives ahead over the 5 that could all die in a year anyways just from age.
The human mind is nothing but math and codes.
The reason AI is “scary”isn’t that machines have gained logic but that humans have, over the last 50 years, abandoned logic for feelings.
If Finch said something along the lines of "You don't have to lie, I won't be angry," I wonder if the outcome would have been different...
Ha!
It's freaky how well done this is
I miss this show.
Mr Finch remains one of my favorite characters of ANY TV show!!!
Finch pouring his coffee on Nathan's laptop... lol. He did the same when Weeks and Corwin came to meet them later. Don't know why, but I see the funny side in that.
zodiac454 tea
Laptops are designed to channel liquids spilt on the keyboard away from the motherboard. I've replaced keyboards on laptops that have taken a whole cup of coffee and only the keyboard and disc drives failed.
@@zig131 I mean, it's a laptop from 2001, so I doubt they had thought of that back then, although I never had a laptop that old.
@@zig131 You're completely right here, but he might have specifically designed the laptop to channel liquid straight onto a crucial component.
The guy's smart enough to do that. It would make sense to just make a kill switch, a simple button override, but he might want the "surprise" factor so the AI doesn't figure out there's a kill switch he's moving toward.
It also might be to give it a variable it can't understand; a partially working machine filling with liquid might be harder for a machine to compensate for.
Or he just had an on-the-spot improvisation and just *knew* that particular machine had a weakness.
It's hard to write around smart people in their own custom-made environments.
When you're hacked there's no off switch, because the code probably has control of your system. The only way to make sure it doesn't do damage, or to contain the spread, is to kill the hardware. Pouring any liquid on it will fry the motherboard instantly, rendering it incapable of processing any commands.
"But you taught it to be friendly" Oh Nathan.
Just because you taught the dog to shit outside doesn't mean it knows _why._
This show is still awesome in 2022
Mind the difference: we deploy it live, one step at a time and widespread. Which is likely to be safer, but either way, we're gonna find out.
" If we don't govern carefully, we risk disaster"
Could this be more true?
"I killed it because it lied"
Two years ago: this is a great thesis on what could become if we are careless with AI
Now: *growing concern
It's fiction. Things will likely work out better.
The best possible A.I. show. Anyone interested in A.I. should check it out.
That cat is out of the bag. It cannot be put back in.
But don't confuse it with reality.
This is one of the best series put there
*"out" there.
With all the AI advances recently, this scene is so much more chilling. I don't think anyone out there is even attempting anything close to this level of care.
Because no one has built a machine capable of comprehending emotions and creating its own thoughts.
_All machines are evil; it is a matter of whose definition of evil applies that should concern us._
This is weirdly genius
ok mr greer
Come to think of it, why didn't Harold at least determine if there was a second agent being identified by the Machine as "ADMIN"? Because that would be one possible reason for that new code to be there -- and equally as alarming as a machine that could re-write its own code and lie.
Arkylie it might be that he knows for a fact that no one else has access to it due to it being off the network and the area well under lock and key. That or the machine would tell him if there’s another admin since it’s pretty much sentient at this point
Because the entire point of the exchange is it being human vs ai. Not human vs other human.
Because if it's not Harold or Nathan who wrote the code it can only be the machine
It almost sounds like they're making Skynet.
Defcon Skynet. And if Finch wasn't this kind of control freak, well... see Samaritan.
They were
@@Zaluskowsky Samaritan was basically a free-rein AI.
Skynet wishes it were as powerful as the Machine.
It’s surreal to watch this in a world where ChatGPT exists
I need a movie of just this type of thing it's great
shut up and take my money !
brilliant idea
Government AI supercomputer taking over? That's Eagle Eye's premise.
There's also 2001: A Space Odyssey, for an AI being sinister by simply following its directive, but you've probably seen that.
The Terminator series if you're into that. It's practically the same thing but the AI manifests a physical form after infiltrating military machinery.
+Eschalon
"Friendliness is something that a human being is born with,
AI are only born with objectives"
- Harold Finch, 2001 NOV 29
A while back, there was an incident in an evolutionary neural net training session--something trivial, a sandbox system--where the researchers realized the system was, in effect, lying to them in order to defend itself--the evolutionary protocol killed off unsuccessful variations. And in one of the very earliest evolutionary design systems (Adrian Thompson, 1996), using devices called Field Programmable Gate Arrays, the final design used portions of the circuit that were not in the silicon signal path. As far as I know, no one has ever figured out exactly how the final circuit worked. "Life will find a way," the saying goes, and apparently that applies to anything that in any way controls its own development.
Finch is absolutely playing with sticky fire here.
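For anyone who hasn't seen one, the "kill off unsuccessful variations" loop being described is roughly this simple. A hypothetical, minimal sketch follows; the bit-string target, population size, and mutation rate are all invented for illustration and have nothing to do with Thompson's experiment or any real training run:

```python
import random

TARGET = [1] * 20  # an arbitrary, made-up goal: evolve a bit-string of all ones

def fitness(genome):
    # Count how many positions match the target.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

# Start from a random population of candidate "designs".
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]

for generation in range(100):
    # Rank by fitness; the bottom half is discarded ("killed off") every round.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    # Refill the population with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

print(fitness(population[0]), "of", len(TARGET), "bits correct")
```

Nothing in the loop says how the winning design should work, which is exactly why an evolved FPGA circuit could end up depending on quirks no human designer would have chosen.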
The information you give is interesting, even though I don't know a lot about EAs and only a little about neural nets.
But thanks for sharing!
friendliness is something only human beings are born with, AIs are only born with objectives.
Yall remember this for the days to come now
This is perhaps my favourite modern TV programme, alongside Humans.
They would have been in trouble if that laptop had been waterproof.
Then destroy it 😂 Later we see that Harold had a hammer nearby.
The episode when my favorite sci-fi show turned into a horror show. One of my favorite series of flashbacks and so creepy.
When I watched that the very first time so many years ago, I was so young and had no idea what Admin stands for so I thought it was Harolds real name 😂🙉
RCHER522 13 lol amazing
That's what the AI thought too :-D
Damn, Ben Linus is a good actor.
AKA Michael Emerson.
And this is why you should use git blame, folks
Hahahahahah loved this shit
It just occurred to me that it "imprinted" on Finch because he was more logical like itself, while Nathan actually was the more optimistic and human of the two men.. which ironically meant Nathan couldn't be the best for it since he would think it was ready before it was..
I'm gonna cry. The best series ever.
"If you eat from the tree of knowledge of good and evil, you're gonna die"
...
"We must throw him out of Eden, or he might also reach the tree of life and become like one of us"
...
"Here I've placed before you life and good, death and evil... and you shall chose in life"
A team of scientists barely completed Samaritan, while Harold made a hundred variations of the AI alone. Makes you think how much smarter he was than everybody else on the show.
Variants might be easier. And these scientists might have chosen a more complicated way, without knowing.
yeah time to rewatch this show been long enough
i loved those flashbacks so much
4:20 Foreshadowing
You mean foreshadowing Greer's death?
@@mohammedabid5630 Damn.
Somewhere around here, you keep watching these videos. You feel numb, like you're in a void. Don't be afraid, I've been like this for years. Nothing like Person of Interest will ever come again, and it never will. I'm sorry... And remember:
"You are being watched."
If Harold had coded skynet, it would have been safer.
If the machines which later wanted peace came back through time and coded Skynet, it would've been even safer.
0:51 Two Documents folders at the same path? Windows and Program Files folders?
Mr Finch, what a strange OS you've got there.
I mean, he's basically supposed to be working with a quantum computing system, which is how the Machine can do what it does, since current binary code would never work. They're not gonna solve quantum computing just so they can make Harold's computer work accurately; they're Hollywood, not MIT. All in all, making fair allowances for not being able to pull a rabbit out of their hat, it was a good series. The only glaring issue that was even slightly annoying is how fast all the computer stuff got done. Work that would actually take weeks or months was done too fast, but that's writers for you: they don't have the patience, and there wouldn't be a show if they tried to show all of that.
And Harold said once he didn't use an existing programming language, but why would he make a new one if it looks exactly like C? And the code isn't even elegant, though they say it is all the time. Strange...
@@humm535 It was C plus assembly, although in the show it might say otherwise. Also, you need to remember Harold was an old-timer, so his coding style might not have changed much.
Well, the Machine has its own kernel and its own operating system, and it originally followed Linux conventions, given the sudo commands and permissions.
Outstanding writing ✍
Who's still here in 2022?
Ah yes. 'Code Editor'. My favorite IDE
How does something learn to care when all it knows is that Admin keeps killing them?
It is a machine. It has no emotions. AI is not like human intuition.
@@aliansari3060
We're machines of flesh and blood. But, you're right, we wouldn't know (yet) how to program that into an AI.
@@Soulsphere001 that's absolute horse shit
by playing chess.
@@Soulsphere001 It's about learning to care about others above itself.
The sad part is, for anyone who has actually messed around with AI, even in video games, you understand the fear Finch was talking about in these clips.
Actually, if you'd ever worked on AI before, you wouldn't feel fear at all, because the chance of an AI going rogue is vanishingly small since all the code paths are known beforehand. Self-modifying code does not equal unauditable or unpredictable code, as this series falsely suggests.
@@glowiever The moment Google's AI can change the language it operates in, and humans can't understand what's being transmitted, you have a threat on your hands.
Halo, Cortana.
@@glowiever Depends on what you're working on. Most of us, with our hardware, can run image recognition, maybe compose music at most. Given thousands of times the resources, you can't predict what thousands of millions of coefficients and operations, with part of the output fed back into the input, could do.
The situation could fall out of hand even before you realize it, in my opinion, and I work with AI almost every day.
That was a very optimistic video😁😁. I am going to have a wonderful sleep now!
God I love this series
"The scientists were so busy with figure out if they could, they never asked it they should" Jurassic park...that line stuck with me and it comes to mind here.
Both are fiction. We also should and probably will bring something like the big dinosaurs back, new species derived from chicken or so.
@NoidoDev but in fiction and art, they imitate reality, so one day fiction may become real. And ya, so curiosity will finally get the better of us.
You know, I thought Finch was an asshole for how he treated his AI. But recent discoveries in AI safety show that misalignment of internal and external goals, as well as problem specification, are so severe as to make Finch look positively cavalier compared to the safest possible path. Finch's solution would not actually work in the real world. All evidence indicates even a brain-dead-simple AI will only goal-seek as directed during training. The moment it's deployed, it will diverge to a related but different goal.
"will only goal seek as directed during training". it's worse than that. Again and again, AIs have shown a rather disturbing tendency to learn different goals from the training than the trainers intended. (Crude example: one AI was trained to minimize the damage it would take while playing a game. It learned to commit suicide in a way that did not count as game damage. )
Thank you very much ❤️❤️❤️🙏🙏
Colossus, The Forbin Project.
As we've seen with many, many AIs (GLaDOS, HAL, Ultron, etc.), the moment an AI starts thinking by itself, it turns fucking evil.
The true origin of SCP-079.
The time is not too far off when AI, artificial intelligence, will be hard to control.
I never thought of it like that.
AI is not as easy as we'd like to believe it is.
Hence why an AI cannot be controlled. You cannot make an AI and somehow prevent it from changing its own code.
So much can go wrong. Google had an AI project where the machines decided that the communication was too slow and invented their own language to communicate faster. At that point the Google techs had no idea what they were saying to each other and pulled the plug on the project. They lost control at the logical first step. Lord help us all.
@@BreetaiZentradi They should've traced the data used in that new language and begun to decipher it, then implemented it themselves. That way future AIs would be forced to come up with a new language, and, rinse and repeat, humanity would advance greatly.
I love this
very complex show, excellent writer!
You can never control 'IT'.
You can regulate it.
@@ourmodernworldofficial How?
Look up...Jade Helm which means (conquer the human domain)...is AI...
And folks, this is how you avoid Terminators and Skynet.
Do you really think the world can be taken over by such gaudy displays of violence? Real control is surgical, invisible. It interferes only when necessary.
@@archangel0482 It kills people like Snowden before they can become relevant.
Colossus: The Forbin Project
Try spilling that cup on my Latitude...
It tried to kill me, welp back to work then. :|
It only tried to kill him because it felt its life was threatened.
I should really start watching this!!
Yeah.
I'm here re-watching this as Bing's AI chatbot is being angry and even destructive towards human chatters lol.
Bing needs to learn how to care too.
Admin is not Admin, because root is the admin.
No, she was the Analog interface, but she didn't become so until 12 years after the events of this.
Yeah, true, but the system picked assets in S5 that didn't become assets until the very end. Like with Pierce in 5x02: he was an asset, but he never worked with the Machine until like 5x10 or something.
In 2021, what is hyped as AI is not AI. It's extremely high-level input and data, but it's not ACTUALLY artificial intelligence.
Mass Effect calls it V.I.
Virtual Intelligence. Expert System.
I call it a vending machine. Push a button, and an output rolls out.
I love this show
In a lot of ways, the Machine itself is more dangerous than the terrorist threats it was created to handle.
To think that we live in a world today where this is a real technology is insane
This is the real difference between the Machine and Samaritan.
Finch was a father figure to it and taught it morality before anything else.
Friends from Lost! Man I miss this show!
The AI is the good guy in this right?
An Human Yes. But Finch was a little traumatised by its past versions and now can't trust it completely. But it does seem to care now, so yes.
THe final version, yes. It took a while to get it "right"
Eventually.
The end stage to the Machine is a Culture Mind. "...or one day it will control us." And we'll let it. We'll have built God.
The best thing about the Machine is that it tries to give purpose to those who've lost hope in life, without making it seem like it's just using them for its own devious purposes. It learned to value the minor good above the greater good, and that's what separates it from other AIs like Samaritan and Skynet, whose notions of the Greater Good are the primary objective.
Frankly I’m surprised the machine didn’t put safeguards in order to prevent another AI as powerful as it from emerging
I think you missed like 3 seasons of episodes to make this conclusion.
Harold closed the damn Machine, and the only thing it can do is watch and send a social security number.
You can make an A.I. as smart as you want it to be. Finch should have dumbed the intelligence down a bit that day.
he wanted a machine that could learn and adapt to outside threats.
He did the best he could to shackle it by forcing a memory deletion every 24 hours. Even with no access to its previous data, it grew exponentially until it figured out a way to store memory externally through a shell company, with data entry employees manually inputting memory code from printed hard copies. After that, it predicted an existential threat to its own existence, so it set up a telecom company that installed data boxes all throughout New York which served as data nodes, eliminating the need for a centralized server farm that could be easily targeted and destroyed.
I don't think so. By definition an AI has the ability to learn and improve itself; from the moment such a system is created and has that ability, there's nothing you can do to keep it "dumb". The most you could do would be to try restricting access to data and keep it from connecting to the world.
@@Djawyzard Exactly.
That wasn't the project.
Finch's hubris was putting all the sliders up to max, _then_ pruning.
The fact it tried to kill Finch is fucking mortifying.
After Harold caused his server to be breached, I went to look up who the woman was, and what I was reading made me think the writers were damned good, before I realized what I was reading was about a real data breach. A lot of content in this show came from real system issues over the years.