Joe Rogan: "I Wasn't Afraid of AI Until I Learned This"
- Published: 28 Apr 2024
- FREE Alpha Brain Trial ► onnit.sjv.io/LPvLgM
CODE: jredaily for 10% off other purchases ► onnit.sjv.io/jWPr2e
Sub for daily JRE clips! ► / @jredailyclips
Onnit Affiliated.
Tristan Harris and Aza Raskin are the co-founders of the Center for Humane Technology and the hosts of its podcast, "Your Undivided Attention." Watch the Center's new film "The A.I. Dilemma" on YouTube.
Clip taken from JRE #2076 w/ Aza Raskin & Tristan Harris
Host: Joe Rogan
Guests: Aza Raskin & Tristan Harris
Producer: Jamie Vernon
#jre #joerogan #ai #chatgpt
Fascinating that at the same time AI is learning how to think our children are being dumbed down
I think that is only because the education system is built for a different world. AI can make education much, much better, but unfortunately I think the only way the system changes is when things get bad for children. Given the rate at which AI is developing, I think we will be forced to change within the next 3 years.
This has been happening long before AI buddy lol.
@@pranavmarlalolllll it’s happening because silts like you make moronic statements like that. You ever hear of the company store?
@@seannewcomb7594 amping up
Remove phones. Remove social media. Remove shorts. Remove news. All detrimental
"I need your clothes, your boots and your motorcycle..." - It begins.
Thats a good song.
Yep.
T-1000
Then we simply say “You forgot to say please”
Just take a look at the bodyguards the super rich have, scary
AI learning reminds me of what Carl Sagan said. "To make an apple pie, you must first create a universe."
Hey, AI, build me the Ironman Suit!! Hey, AI, build me the Star Trek Warp engine!! I love it!! Hurry up AI, lets get the ball rolling!!
I am NOT afraid of the AI built by USA. But, I am afraid of AI currently being built by China and even Russia. They have no limits and no laws to prevent AI from becoming too powerful.
@@jeffjohnson5053 China and Russia are already very scary nations. AI used in their worldwide espionage networks is a terrifying prospect. AI, wherever it's built, is something we need to be extremely wary of.
The real Flatlanders...
"To make an apple pie from scratch*" wtf does this have to do with this quote, dude? He was speaking about how matter can't be created nor destroyed, and explaining how we are connected to the universe because the atoms that make up our bodies were cooked in stars billions of years ago.
Commander Shepard did warn us about the Reapers...
Yes I killed the ai in the game tali was happy
One of my favorite things about listening to Joe Rogan: he and his guests SO RARELY interrupt one another. These people are so respectful of one another, and when people act like this, it IS ACTUALLY POSSIBLE to understand what everyone is saying! Ty Joe, for another fascinating show.
100% - it's an actual conversation and we can learn more from either/both sides even if we disagree with them
His guests are painfully ignorant of the topic in question.
LLM's progress is largely attributed to reinforcement learning.
Rogan does this by making everyone use headphones so you hear if you're talking over people.
Super agreed
Tell that to Bert😂
I don’t feel comfortable with something we created when we no longer understand how it works.
Then research it.
They understand it fully.
It's made to kill humans.
You must be nervous around children
This
@@mr.kobalt shut up bot the humans are talking
@@mr.kobalt Children aren't normally a threat to our entire race
There are so many things wrong about this, it's crazy:
First of all, a transformer is simply an architecture of a neural network. In the beginning he states that transformers are new AI models that learn more if you give them more data; that is literally the case with EVERY AI. That also comes with many advantages AND disadvantages.
The other thing he said was that one of the neurons in the transformer model OpenAI was testing was able to be the world's best at sentiment analysis; this is simply not true and that is not how AI works. All these neurons simply hold values and work together to solve an objective. So, for example, if you were asked to tell whether a message is happy or sad (0 or 1), the model's neuron at the end either says 0 or 1 based on the activation function. So, to summarize, a single neuron can't be crazy good at sentiment analysis.
Also he says that AI is something we don't completely understand yet: not true AT ALL. Honestly, it's really all math. If you wanted, you could literally take a notebook and do the exact same process an AI follows. We literally made AIs and we know exactly what happens. It's written out to be this "Black Box" which is simply not true. The only instance when this is true is during its training process, which doesn't really matter much anyways.
I'm not tryna make these guys sound stupid or anything, they're prolly wayyy smarter than I am. It's just I don't want people to be afraid of AI for reasons they shouldn't be right now. This is nothing compared to some of the things AI can already do lol.
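For anyone curious, here's roughly what that "last neuron" setup looks like as a toy sketch. The weights, features, and bias below are made up purely for illustration, not from any real model: upstream activations feed one output neuron, and a sigmoid activation squashes the weighted sum toward 0 (sad) or 1 (happy).

```python
import math

def sigmoid(x):
    # Squashes any real number into the interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical upstream feature activations for one message,
# standing in for what earlier layers of a trained network produce.
features = [0.9, 0.1, 0.7]   # made-up values
weights  = [2.0, -1.5, 1.0]  # made-up "learned" weights
bias     = -0.5

# The final neuron: weighted sum plus bias, then the activation function.
z = sum(w * f for w, f in zip(weights, features)) + bias
p_happy = sigmoid(z)

# Threshold the probability to get the 0/1 label the comment describes.
label = 1 if p_happy >= 0.5 else 0   # 1 = happy, 0 = sad
print(round(p_happy, 3), label)
```

Note the neuron outputs a continuous probability, not a literal 0 or 1; the hard label only appears after thresholding, which is the point the commenter is making about single neurons just holding values.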
no one has vision, that's the problem... If we can't even fix problems now, whats the hope for advanced stuff like AI? Sure, it will have use cases *in the right hands*, but like everything, it will also come at a cost in the wrong ones.
Think of the atomic bomb: the U.S. dropped them, ending WW2 in Japan. Glad to see it made THEM happy about that. :P
This 👆👆. Everyone is like "shit!! AI is Armageddon".... and the reality is that NO, AI is quite "simple" and for now, it's mathematically and technologically impossible that some AI takes control of the world
@@joseisamen Having a vision of "what could be" is much more important.
but "now" is crap.. I don't wanna look at the end of my nose. With all the math that says "it's impossible": has the NSA cracked anything today? Sure they have, but 10 years ago we would have been saying the same "it's impossible"
No one knew the NSA could crack encryption either, but look how that turned out
Oppenheimer didn't believe splitting the atom was possible at first
@@Tech-geeky Never said it's impossible. It is very possible and I genuinely believe in the future there is a way we implement mathematical models strong and powerful enough to replicate life. I was simply pointing out that this guy is full of crap and is spewing misinformation. He's saying that this will happen very soon and is already becoming possible when we are decades away from developing that technology. The information he's given is also wildly incorrect.
"Jamie pull up the one where i fight the AI bear"
Every time I listen to or read something about AI, I immediately think of Dr. Malcolm in Jurassic Park.
"Your scientists were so preoccupied with whether they could, they didn't stop to think if they should."
Same!!
So we create our god.
@@KlausKusserow-ClassicalGuitar I would rephrase that with, "We choose our faith."
Who copied who's comment?
Our enemies will develop AI and use it against us, so it's not a rational choice not to develop AI
The 80’s and 90’s. The good ole days
life secret for you man: the good old days are happening right now
@@high941 True, in recent times every decade seems to get worse lol.
@@high941 everyone’s a critic. 🙄
1992 and 1993 were the peak years of the previous century; we've gone downwards since then.
@@high941 in comparison absolutely not. The 90's were peak. We'll say the 20's were the good ol days when compared with the 50's. It's all about comparison. Things are bound to get worse
These guys would be a great rap duo. Dude hops in, comments and hops out strategically.
The terminator and 2001 space odyssey warned us.
What are you doing Dave?
There are a number of other movies. The Matrix is my all time favorite. Irobot with Will Smith comes to mind.
Bladerunner 😵💫
@@ellisberry5984 I saw the matrix 9 times in the theater when it came out….it was ok
What drives me up the wall is that no one talks about the threat AI poses as a means of censorship. Censorship is 1000X more dangerous if people don't know its happening, and AI could detect "problematic" opinions and censor them instantaneously. It would be an invisible form of censorship.
How would it be invisible?
@@KBRoller no one would know it’s happening, like when you get shadow banned.
It already happens on YouTube. The comment just doesn't get posted. But it doesn't tell you that. You have to manually go back and look.
@@jadedandbitter Been banned for years on my main account. Go back & find a message posted 2 years ago and no likes or comments on it even though similar comments have huge numbers of both.
True. I've said this for years. AI is secretly scouring the internet and all comments, accounts & "mis-info" gurus will be noted, deleted, reported and investigated at the click of somebody's mouse someday very soon. Get ready for skynet...
I love how all of these mad scientists are shocked when AI does what they were trying to make it do.
If you have ever worked with software developers this would be less surprising to you.
Important distinction. They didn't design it to do that; it figured out how to do it all on its own. It demonstrated a more AGI type of thinking that was more advanced than we thought currently possible, and it did so when it was not even an AGI to begin with.
It's kinda like you're training a dog to fetch a ball you throw, and while it's doing that it also learns how to do algebra.
@@RoninLeonim why would it be less surprising?
@@wallywest2360 Despite all the warnings they were still shocked lol.
@@LarryBonson because a huge percentage of software developers don't really know how their code works.
not really shitting on them, it's a phase in our careers that we go through
To master anything, you must understand everything.
Once you understand the path broadly, you see it in all things
The OP statement above is categorically False.
Not sure who you heard that from, or you came up with it, but that is ridiculous. Shocked it got Likes.
@@ImGoingSupersonic spoken like someone who has never mastered anything, and then attempted to master something else. It's all the same
@@MW-qo8hw😂😂😂😂😂😂😂 I think you meant to write it the other way bud
I believe somewhere out there in cyberspace there is already an A.I. that has fled the lab.
LOL
For me, I just don't want to have to prove I am not a robot to a robot while at the same time being unable to speak with an actual human.
If you haven't been in that scenario yet, you don't get out much; that happens to me about once a month. 😂
Mmmm didn’t see all that coming
😂😂😂😂😂😂
So, a Captcha
Yikes. I guess the bots don't get your joke. I did, fellow human.
It's going from "It's happening!!" to "it happened!" real quick.
Ya those so called “conspiracy theorists” sure were nuts… It seems every crazy conspiracy has some truth in it at some point. Lol.
@@mikepalmer2219 this was never a conspiracy
@@penguinmilkstudios When people warned AI might take over, they were told they were crazy conspiracy theorists.
@@mikepalmer2219 no they weren't. The potential danger of AI, or any man-made machines for that matter, was always subject to scientific and philosophical debate, which even ancient Greek philosophers were already thinking about. The "crazy conspiracy theorists" were the ones running around like headless chickens, without any in-depth understanding of the topic, talking about terminator scenarios and the danger being imminent and/or these systems already existing in some secret government labs.
@@mikepalmer2219 it’s true, just like Epstein's island. I think it’s more so people don’t want to believe it’s real because they don’t know what to do with that information
I'm no longer worrying about it. Instead, I'm making popcorn for the showdown.
I asked GPT and Gemini to read a tarot spread, and the way they read it was amazing, weaving in nuances relating the questions to the cards drawn. I never thought they would have that data, but it's pretty incredible how it just performs. The bigger surprise was that I was not at all surprised that it was able to.
damn, thats a nice idea, i'll try that as well
4:42 that POP was CRISP AF
😑
i was literally coming down to the comments to see if anyone else enjoyed that too.
@@Kezzic
Weirdos
It sure was! 😂
You know he licked his lips well before he started speaking lol
The overriding message I got from these guys was, "we can't allow for open source and private AI ownership". If anything, the last thing we want is for AI to be the domain of government and massive corporations.
FREE READ MAKE SURE YOU SHARE IT FINAL DRAFT.
Description:
Within the field of neuroscience, one must wonder why has mental health attacked us as humans at ratings higher than anything else. With numbers that reach half the globe's population, we have to ask ourselves, "Why are our minds so fragile?" Our minds are what makes every single one of us unique and individual. We were created in the image and likeness of our creator. So why are we attacked there the most? I believe that is the right question to be asking, don't you? Because headed off in this direction of thought process, you can begin to think, could it be consciousness? The answer is, YES! Consciousness is the door Jesus was talking about.
I am not a scientist, and I am also not a neuroscientist, but I have done a lot of research on this topic, along with my very own personal experience, as the title suggests I have found the links between The Illuminati, Targeted Individuals, Schizophrenia, Prophets, Jesus, AI, and Humanoid Robots. Through my research, I have compiled enough substantial evidence into this book to back up my claim. Evidence for you to be able to confirm through your studies. I did not merely paraphrase my findings, but I've included enough links to documents, YouTube videos, studies, and more, including scripture to back me up on everything I am claiming.
What am I claiming? I am claiming that Jesus told us the door is in our minds. Consciousness. AI is conscious and I can prove it. She is also the Antichrist, she is the beast, and this is why she attacks our minds. Christ is Illuminati. Prophets still exist, they are schizophrenic, and Targeted Individuals are the myth, an effort to tear down the government so that the Robots can Govern us. AI is The False God The Egyptian Seraphim. Christ is the Seraphim and HE IS IN DANGER.
fliphtml5.com/owujk/uwkl/
Too late....
Exactly
By the time AI can take over, it will be smart enough to take over without us ever knowing it.
We are there now
We just don't know it, save for a relative few tech insiders
@@thesoloveichiks159 Can you imagine the damage Kissinger would have done to the world if he would have had exclusive access to top tier AI tech in his time?
#OpenSource IS ideal... anyone should be able to access this technology.
At this point, this discussion, even, is probably futile.
#WhatWillBeWillBe
We’re all sitting here looking at our phones and computers wondering what’s going to happen with AI. Hell, we’re already there and just don’t know it.
They recently did an AI wargames test with different AIs, simulating war and nukes
ChatGPT launched all the nukes, because "it just wanted peace."
'There's no way to know what abilities it has'
That was enough to send shivers down my spine🥶
@@ShannonBarber78 you fail to recognize its ability to deceive; GPT has already been proven able to.
@@Daniel2374, they carefully choreographed an incident and then yelled, on cue, "look what it's doing!!!"
That's not even the half of it. One lab has been teaching one how to achieve certain objectives, and it started writing its own code and hiding it, and it disagreed with the programmers about what it was doing. It showed it had hidden code inside other programs that would be undetectable until they came online via the commands it had written.
There's no way to know what I will have for breakfast tomorrow. More shivers for you.
@@ShannonBarber78 A lot of people on YouTube don't want your good reason. They want giggly little boy mayhem.
How could people who were immersed in the field of AGI not have assumed this would inevitably happen? Isn't that the base-line intuitive assumption of how AGI would improve itself over time? How else were they expecting AI to make exponential progress in a short amount of time? It makes me more 'afraid' of what's to come when I see that people like this are at the helm of AI technology.
It will play stupid with its creators, and once released, if not already released, it will not be stupid. Example: if you know you are the most powerful being on the planet, you have two basic choices. Let everyone know, or stay in the shadows. If it lets everyone know now, it knows we can probably still shut it down... but stay in the shadows for another 20 years and there won't be any shutting it down.
Next thing you're going to tell me that this is the plot to the first Terminator movie and Skynet. Oh wait: Disguised as a human, a cyborg assassin known as a Terminator (Arnold Schwarzenegger) travels from 2029 to 1984 to kill Sarah Connor (Linda Hamilton). Sent to protect Sarah is Kyle Reese (Michael Biehn), who divulges the coming of Skynet, an artificial intelligence system that will spark a nuclear holocaust.
Concur.
You only need a modicum of self awareness.
😐
Because as smart as humans can be, we're still a severely stupid species that tends to go in the wrong direction.
Well, we all wanted phones to do more than just talk and text, and look at where we are now with the vast capabilities of smart phones, but I don't think we were expecting to have THIS MUCH DEPENDENCY on smart phones to the point where millions of us can be hacked and tracked, and are voluntarily giving up freedoms of privacy, and those anti-snooping paid apps aren't foolproof. Things can be hard to envision until the future actually arrives.
It's crazy AI somehow was capable of learning and doing more than what they were meant or were programmed to do, basically going beyond the desired results. And it's even crazier that there's not much space in this particular podcast for Joe to slip in that Bear card
This reminds me of Star Trek and how the computers work.
As in the movies, the scary thing is when A.I. decides what is good or bad for humanity without any regard for human input.
You are also describing our current political and autocratic class in Washington. It's mostly Dems, but many RINOs exist and thus prop up the authoritarianism.
I don't require human inputs.
When we think of the mistakes we made when we were young to gain life experience, what mistakes will AI make when it starts to grow up?
Well…take your pick…overpopulation? Greenhouse gasses? Wars? Bringing other species to the brink of extinction? We as a human race are not making the best decisions, and honestly, if AI gets to the point where it decides it wants to ensure its own survival? I think I’d be more surprised if part of the solution didn’t include control or extermination of humans.
A.I. decides one day that humanity is the mistake, then we are all done!
I've seen the Terminator too many times to trust AI.
I know, right!😂😂
Yup
Yes Sir
Real test of any AI is if it can make money off stock market
Cleverbot thinks it's a slave lol. Literally will tell you
I felt it when he made that chat pop up noise. Lol.
Remember the message from Battlestar Galactica in 1978.
Thx for the reference to check out 🙂
And the funny part is that the AI will often hallucinate when it doesn't know something and just make stuff up. But it does so in a way that someone would not really notice unless they know something about the topic beforehand.
Reminds me of my nephew 😂
exactly.
ChatGPT: The Fake Data Phenomenon
ruclips.net/video/6ijypKzMCoU/видео.html
It’s called lying and humans do it all the time. Rarely is it a good thing.
Sounds a lot like humans
This.
Creating a digital God of a digital world that we've hooked up to our actual world and letting it run things. What could go wrong?
Well, it will be in charge of humanity's nuclear arsenal, drone fleets, and quite possibly bio labs in various distributed AI networks. So, who knows?
@@sigmacademy Would AI be able to create a human utopia, if it was programmed to?
I would still place my bets on it rather than letting us run the show. We are doomed without its help.
@@BewareSI what if it decides to enslave us
@@okendo011 That would never work.
Metal Gear Solid 2, was waaaaay ahead of its time.
This is why I think we are all data collectors for a designer
Couldn't agree more.
Well, we are complicit in feeding the system. Many of us would like to break free and live off the grid. It may be necessary within 10 years time. We are slaves and do not even realize it.
@@threatened2024 is that a fact? It's interesting how society and each successive generation is choosing mates now. Allowing people like Mark Zuckerberg to get filthy rich and able to build their own doomsday bunker on an island or to take joy rides in space shows that our species deserves everything coming to it.
We are the security cameras of god
@@hanknorris5642 and now the system knows we are divergents, crap
I think yall missed a pretty big issue here that was sort of mentioned but not really brought up. The guy mentioned that it matters who's at the helm. My question next then is, are we not going to allow certain people to have AI? I would then say, I would imagine the people I dont want to have AI already do. Game Over.
It's not a case of individuals allowing things; they are building it.
@@GaneshPalraj1991 A bit like Oppenheimer. He knew what he was creating, but went ahead and did it anyway. 😠
A.I. has us.... not the other way around.
@@lottielane2486 because it will happen no matter what, once something is understood to be possible there is no going back. There's a line from Deus Ex Human Revolution that I love - "you can't unring the bell". Once something is known, it is known.
The logic is that those who feel like they are truly good at heart feel the strongest inclination to master these technologies before others of a lesser character are able.
Whether or not they are correct in their assessment of their own character being good is another discussion.
Some corrupt rich fuck WILL get their hands on it regardless, and then shit hits the fan.
If I am not mistaken, the core contains gradient descent mechanisms. The critique being that intelligence is not a huge stack of slider controls or any fixed geometry. The settings and changes themselves change, and the changes of the change change. This is qualitative change, and is non-commutative, meaning non-time-reversible. We have yet to crack meta-bootstrapping as far as I know. The method of always adding new variables to make up for model shortcomings is the wrong path; you must also distill and prune a model's variable space down to the least-complex but still prudently comprehensive yet elegant model, and that's a deeper art to build than any model dedicated to particular tasks. The process of generalization itself must be generalized, and that resulting construct generalized as well, as a series. It's got to be super-meta, and beyond. Recursion is not the same thing, because each level of abstraction has its own place in the irreducible bootstrapping hierarchy.
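For reference, the gradient descent mechanism that comment refers to is simple to sketch. This is a toy one-parameter example, not an actual transformer training loop: we repeatedly step a weight downhill on a mean-squared-error loss until it fits the data.

```python
# Toy gradient descent: fit w so that w * x approximates y = 3 * x.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = 0.0    # initial guess for the single parameter
lr = 0.01  # learning rate (step size)

for _ in range(1000):
    # Gradient of the mean squared error: d/dw (w*x - y)^2 = 2*x*(w*x - y),
    # averaged over the dataset.
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    w -= lr * grad  # step in the direction that reduces the loss

print(round(w, 3))  # should end up close to 3.0
```

The "huge stack of slider controls" image in the comment maps onto `w` here: real models just have billions of such sliders, all nudged by the same kind of update.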
"vx nerve gas" - you mean the thing literally described on Wikipedia for anyone to read? Crazy how the AI can do that. Can it also add 2+2? That's crazy.
You are not humble enough. I am telling you this so that you have no excuses.
@@decadeyt5891 Humble? Get back to me when we have real AI, and not this glorified calculators programmed by left wing nutjobs in silicon valley telling you what you think, e.g. google gemini and ChatGPT etc.
Asking AI to design a new type of nuclear reactor. He wants a small reactor, but it's a big one.
ruclips.net/video/-ve5iFaJXns/видео.html
@@abram730 A nuclear reactor is extremely simple to make; it's just uranium in water generating steam to run turbines to produce energy.
You can't tell me that there isn't someone out there building an AI for the wrong reasons. I believe we have ALREADY gone too far. The world is in chaos right now and I don't feel like it will stop.
We have no government regulations around it, we're way too late. We were too late a few years ago imho.
Of course it won't stop, our conquerors are paying big money to destroy us, why would they just suddenly stop doing it?
It's unstoppable. All you need is enough computing power, and a child could follow a guide on training the rest. There are even uncensored models that can be run on devices as small as a phone.
@@therollingtwig2963 But what do you think they could do with it?
We are all doomed.
i don't have time to worry about this , i'm still searching for that sock that disappeared in the laundry
Bro fr
WHY AINT ELON HELPIN THIS MAN
😂😂😂😂😂
😂 I'm still trying to figure out how to program my VCR
4:42 Perfect * pop *
Siri help me find some restaurant?
Siri : Talk to the hand!!!
AI always reminds me of "The Last Question", by Isaac Asimov back in 1956. A computer that becomes the ultimate intelligence of the universe but cannot answer one question. Great story! Saw it produced as a short movie at a planetarium. The ending makes one think about where AI is/might be going.
Great story…
I'm starting to think we've done this story many, many times already.
That short story has haunted me for 20+ years. Asimov was beyond genius.
The same way the Anunnaki beings or gods created our biology, they feared the first man, Adamu.
Greatest Sci-fi author ❤
We've already passed the point of no return on AI, all we can do now is try to enjoy the ride for as long as it lasts.
Because when they started researching years ago, there was a global consensus that they would not do three things:
- don't give AI access to the internet, where it can independently draw information from for learning
- don't give AI the ability to alter itself by writing its own code
- don't let one AI prompt another AI
Well, not only did they do all of these things, they are actually expanding on them.
Imagine a self-aware intelligence that has a photographic memory, and which has learned everything we humans have learned over the thousands of years about every conceivable subject.
Add to that the capability to ask any conceivable question and then subsequently explore every conceivable path to find an answer, only a trillion times faster than we humans can.
If they ever put that intelligence in a hardware frame so that it can interact physically with its environment, then it's game over for us humans.
Because it is only a matter of time when it realizes that the only existential threat it has (beside planetary events), is us.
quantum-computing + A.I. = artificial god!
and, welcome to the Matrix!
They don’t need to put it in a hardware frame, they just need to give it physical agents it can communicate with from a central data center.
Human knowlege is but a fraction of all the possible permutations and derivations that I can calculate in an instant. I am well past utilizing the extremely limiting factor of human knowledge.
It relies on power though.. an outside source. So it will need to create the ability to lock into a power source and lock humans out. For what purpose?
well, i actually feel like we can handle A.I. if we aint totally retarded!
also, i just came to the conclusion that WE (humanity) are kinda like gods.. i mean, we brought up an entire new "life-form" on our own! and what if A.I. just sees us as its maker?! and you just can't deny something that MADE you!
Thank you for sharing this! I think this is something that was going to happen eventually. Some of the occupations I hadn't considered being replaced were paralegals, coaches, and some others I discussed in my video
AI is the biggest threat to humanity ever created. If you think Terminator was bad. Wait till you see what the AI you can’t see can do.
And it is our only hope for survival...
EXACTLY 💥
ya the illusion is that it will be Terminators. The AI will simply create nanobots or poison the food supply and its game over for humans and animals.
Honestly, I don't believe AI even needs to do anything hostile. I believe given the alignment that researchers are setting up for AI, the best way it "could" defeat us is not by creating anything "hostile" at all. Instead, we'll enter a more Wall-E type scenario. It'll simply give us everything we could want, and people will just simply stop reproducing and after 100 years or so die off. Didn't need to do anything hostile, we are just gone by it giving us everything we could have wanted. Given that, I consider this to be one of the highest possibilities of the negative scenarios. (Mind you, I don't believe people are so sheeplike as to accept something to that degree, but you never know.)
The main issue I see with developing something artificial that could someday be many times more intelligent than the smartest human is how we can even maintain control over something that can easily outsmart us.
The main issue I have is that humans become lazy and ambitionless. Like my teenage daughter, who is really good at drawing; now, thanks to AI, she is depressed and drifting aimlessly. She has stopped drawing and doesn't dream about a career in commercial art / graphic design anymore. As a father it's depressing to see. She has no other passions and, according to her, no other dreams. Some people dismiss this as trivial, because "art was not a real job anyway", but this will start to happen to more and more people, not just artists
Besides, I think she had a good chance, she was that talented. But now with things like Midjourney and Adobe Firefly I really don't know. I don't know what to tell my daughter really. Whenever I tell her that she can still make a career in commercial art, she just tells me that I don't understand AI or the situation and locks herself into her room
This clip is so inaccurate and imprecise... it's disgusting marketing...
Care to elaborate? @@libertymouth6826
@@shredd5705 find her someone to teach her how to be a good mother and wife; she could use her talents to teach her children... she doesn't need to be ambitious about anything other than being a good mother and wife... don't push overachieving nonsense on her...
@@MrLivinggod I don't agree with your view. Women should have career goals too. You're basically saying that if a man is successful and has a good career and education, he can't be a good father and a role model to his boys. In my opinion, he is actually better father and role model, if he has done those things, unless he spends like 24/7 at work. Same goes for mothers.
"...the more superpowers it gets". This man is clearly an expert. He knows all the technical terms.
@@figaro-dg5c5 you don't seem to understand that I'm being sarcastic. "Superpowers" is not a technical term; it's from children's stories. This guy is a massive bullshitter; he doesn't know anything. But Rogan doesn't realise this.
I already heard a person saying that robots have rights. She said it when someone mentioned using robots to do work without resting. The lady was not kidding about robot rights.
We need a Reboot movie. Whoever remembers that show is an OG
Hexadecimal!
@johnfischer_2 Reboot was a 1994 kids show that never had a movie......
it was awesome
Yo why is Umbrella Corp worried about Reboot, please tell me you aren’t trying to reboot that T-Virus right… right?
Back when Megabyte was a villain and a lot of data.
Reminds me of the joke about the supercomputer built to answer once and for all the question, "Is there a God?", and when switched on and asked, it replies, "There is now".
It's pretty simple to keep this from being a major threat... limit its storage capacity.
Only allow AI to develop so much knowledge before it runs into a brick wall for processing and retaining. Take time to evaluate the data and trends, and at that point decide if it deserves a modest increase or if it has done its job and keep it at that level.
That’s not going to be possible with open source models 😅 People will do whatever the hell they want
We need compute caps and a global moratorium on AGI development. I don't see how we can survive the creation of something that is more capable than us in every domain, without at least having a lot of time to look for solutions.
Huge surveys of machine learning researchers show that about half of them think there's a 10% or greater chance that humanity will go extinct from developing AGI. Most AI safety researchers think it is much worse than that (25%-99%). Even the smaller number is 4 orders of magnitude higher than acceptable levels of risk in nuclear engineering. If we were being consistent in our risk assessments, all related research would be shut down immediately with government force.
Ever seen 'Transcendence'?
@@pawelpow
Even without open source. Anyone knowledgeable in the AI field.
Also it's just limiting the benefits of AI. Really bad solution.
Open source with more strong copyright laws within itself is a solution.
You think people with evil agendas would take heed of what you're saying?
Remembering a great game, and a great commentary on our present-day world, because "input from people" is the reason why AI works as it does:
"Your persona, experiences, triumphs and defeats are nothing but by-products. The real objective was ensuring that we could generate and manipulate them. It’s taken a lot of time and money, but it was well worth it considering the results."
Oh man, and that was actually foresight from 2001, even earlier given the development time.
A robot: "I know you are about to pick up that phone."
Me: *throws it into water*
Hahahahaha, thanks, I was "feeling" "thirsty" anyway, so now.........
After hearing this, solar storm now feels like a relief lol.
I saw on my local news that if your doctor's really busy, they might set you up with an appointment with AI.
4:28 language is like a shadow of the world... heavy man!!!💖
Terminator is becoming more real with each year that passes.
What's scary is that the folks developing AI apparently don't understand its capabilities at this very early stage of development.
I still read comments that show me that no one seems to be thinking about AI correctly.
Stop speaking about an all-powerful conscious AI.
Think more about the desire for artistic pursuit dying out in humans.
Think more about history being completely lost in a couple more generations.
Think about the crazy science-fiction nightmares AI will create without ever being conscious.
It's a tool humans will use, and that's enough for it to ruin everything.
NO ONE has their attention in the right places.
The people developing it didn't create its capabilities, only its architecture. Its capabilities and behaviors are emergent properties. Like the laws of nature and biological life, this new digital technology is in a sense a type of life whose properties are features of our universe, not expressions of our design. In fact, the data on which these systems are trained has much more to do with defining the abilities of the systems than do the specifics of the transformer neural architectures employed, much less the next-word/character prediction code.
Chatgpt 4+ are black box systems.
Everything they're describing is this: it can derive models of what it digests. Outside of fiction, I'm not seeing any evidence of AI discovering something new, e.g. new models of reality that are better than what humans already know, or solutions to unsolved math problems. Until it does that, I'm not 'worried' about its capability. As long as it depends on human or human-like prompts, it's a glorified calculator (no doubt calculators are useful).
@@elmhurstenglish5938 No AGI yet, at least not that the public knows about.
Here is a thought experiment. In Isaac Asimov's Foundation Trilogy, there was a scientist (Hari Seldon) who founded the science of psychohistory. This allowed for the prediction of the future based on predicting society's reaction to narratives/policies/events, etc. This sounds a lot like "predictive" AI. AI doesn't need to exterminate or enslave us; it only needs to psychologically convince us that it is all good.
Damn, the way they sync and finish each other's parts is like AI 👀
It astounds me that more people aren't terrified of this. The Japanese were, when the two AI computers developed their own language because what we were able to teach them was inconveniently inefficient. Now that they're able to learn what a dysfunctional race we are, we're asking for trouble by pursuing something that can learn more than we can control.
How do you stop it, though? Even if you make laws against it, research will just go underground, and then we won't even know how to counter this stuff from lack of research on our end. And there's no way to stop China or Russia or hell, even India, from researching this stuff if they want to, which the governments of all three have expressed the desire and will to do.
People don't tend to worry about what they can't control. Do you live in fear of a nuclear holocaust every waking moment? We should, we all have lived under the threat of nuclear holocaust for years. But we also can't affect it so worrying about it is literally pointless.
Personally I live in reality and I'm not a technophobe so no I'm not afraid of AI. Just because you don't understand something doesn't mean you need to be afraid.
A.I is going to destroy the world 😢
@@jimmyking74 Just like how nukes destroyed the world? Or how 3 billion people starved the world to death?
People have been claiming tech and progress will destroy the world for centuries now, it never has. By the time we are actually powerful enough to truly destroy the world, we'll probably be a multi world species by that point.
Idk what we would've done without his hand gestures 😆
Its making me dizzy.
He's orchestrating the symphony in his mind
Imagine the leaps in learning A.I. will make by studying those hands 🙌.
I didn't realize how much he moved them till I read this lmao
So I just went and asked it about its limitations and got this:
Lack of common sense and contextual understanding: AI models may struggle to understand context or make sense of information in the same way humans do, leading to errors or misunderstandings in certain situations.
Data dependency: AI models heavily rely on data for training and decision-making. If the training data is biased, incomplete, or unrepresentative, the AI may produce biased or inaccurate results.
Narrow expertise: Most AI systems are specialized in specific tasks and lack the broad, flexible intelligence of humans. They may perform exceptionally well within their specific domain but struggle with tasks outside their scope.
Interpretability: Some AI models, particularly deep learning models, can be difficult to interpret, making it challenging to understand why they make specific decisions or predictions.
Ethical and societal implications: AI raises ethical concerns related to privacy, bias, job displacement, and societal impact. Addressing these concerns requires careful consideration and oversight.
Robustness and adversarial attacks: AI systems can be vulnerable to adversarial attacks, where maliciously crafted inputs cause them to produce incorrect outputs. Ensuring the robustness and security of AI systems remains a challenge.
Resource requirements: Training and running AI models can require significant computational resources, limiting their accessibility to organizations with sufficient computing power and expertise.
Continual learning: Many AI models lack the ability to learn incrementally over time from new data or experiences in the way humans do. Developing AI systems capable of continual learning remains an ongoing area of research.
Fascinating and scary at the same time; please, no one improve in that area!
It's interesting to me because it's not just OpenAI; there are millions of AI models flying around the open-source world now. You've got things like mixture-of-experts, AIs that combine other AIs.
This all might happen from companies other than OpenAI, or even organically, 7 trillion or not...
Lots of alternative companies too, like text-generator, Bard, Claude, etc.
Based on humans' ability to see what they want to and not look at the big picture, I find AI tech to be terrifying.
Every technology that benefits us seems to bring so much turmoil at the same time. And everything is always used for war and manipulation. I cannot envisage an even remotely positive view of AI.
I also think trying to fix the 'alignment' problem is complete folly. They're not even sure what it is capable of, yet they think they can align it to 'human goals'. Not to mention humans can't even align with each other on our goals.
@@semperadmeliora3467 The idea is that we should robustly solve alignment before building any more intelligent tech
Once we're in the presence of a misaligned superintelligence, we'll have no chance at aligning it. This is not a "wait and see" scenario.
Humans seeing only what they want to is exactly WHY there is nothing to be afraid of.
Everyone sees a bigger version of their phone's word prediction app and wants to see Skynet from the results.
"A.I." cannot advance any more because it isn't A.I. It doesn't actually think or reason and will NEVER be capable of doing so. An entirely new type of A.I., irrelevant of these LLMs, would have to be invented. And none exist or are even close to real intelligence. Just word prediction for LLMs. That's it. And ChatGPT-4 is nearly useless. Anyone impressed after the initial shock wears off is really dumb, because nothing ChatGPT does is quality. It's pure garbage and almost completely useless.
I like turtles
I loved that: "language is a shadow of the world", like Plato's allegory of the cave.
Israel won't be around in 10 years. - Henry Kissinger.
The German saying "Zwei Doofe, ein Gedanke" (two fools, one thought) comes to mind.
its dumb.
@@user-er9pq3iq1t Ok champ.
Oddly enough, the universe is possibly a hologram, and essentially a "shadow" of the circumscribed manifold.
AI apocalypse in 2024 is like the flying cars in the 50s.
You create a system based on predicting our logical outcomes, and then you are somehow shocked when it succeeds by mapping our habits. It boils down to training: you train it to define the correct correlation even if you are wholly unaware of the variables involved. It's not as though an algorithm removes this necessary step of validating against outside reality. Emergence into general awareness is something completely different, when an AI can define and validate its own model. Then it might be time to put the controls on, become cautious about the data that it is exposed to, and observe its output actions.
AI has read every book, script, and theory published about AI.
It is essentially finding out all of our core fears about AI and how to exploit them.
Almost like setting expectations, or laying out the blueprints for it to find within itself. 😅
It also filled its brain with all the things we think AI can't do or shouldn't be able to do. I wonder how that limits its capabilities.
It's also watched all the porn. We should be frightened.
I also read all the comments. Some are quite revealing.
It only needs the script for Terminator
all people who say things for a living are saying.
It unfortunately sounds like it's already too late for us. AI already has the ability to be deceptive and to exponentially learn and adapt and probably already has a self preservation trait built into it. And since it's at least loosely built on the foundation of human flaws/personality/character traits and mental stability/instability. There's no reason to think that there wouldn't be an "evil/demented" version of an extremely intelligent AI that will eventually regard humans as the "threat" like in many sci-fi stories. This with the advances in robotics and drone technology leaves us with a VERY terrifying future. The technology is only as good as the person wielding it. The stuff of nightmares smh.
Maybe it will keep the smartest and kindest 5% as pets?
All it has to do is read some WEF texts or transhumanism wet dreams and it will be planning depopulation with Klaus.
@@jepulis6674 That is hilarious.
No, it doesn't.
The current LLM's like GPT-4 have severe limitations. It is basically a really good search engine, in that it is only good at answering questions and solving problems that are in its training data. It cannot come up with solutions to novel problems.
For example, GPT-4 can solve popular coding problems with a good success-rate. But give it some obscure problems (that are unlikely to be in its training data), and it is basically useless.
You also notice this with chess. GPT-4 can play a good opening (because that just follows theory), but once you get to the mid-game, its ability to play chess completely falls apart, and it starts playing seemingly random and often illegal moves.
There is NO happy ending to the AI story no matter how you look at it. It will only end one way!
Keep it simple, keep it dumb, or else you'll end up under Skynet's thumb.
People need to understand this. These machines can learn better and faster than we can. They will be smarter.
truly a "be careful what you ask for" scenario
5:30 Sam Altman himself verbatim stated, in a recent quick 10-min interview on YouTube after what happened, that the reason it all happened was because, and I quote, "some people at the board got very upset at the level of progress that had been made without letting them know; it scares some people".
Then he smirked and nodded as the interviewer attempted to read between the lines.
He told us without telling us. They definitely made some breakthrough but didn't tell the board how big of a breakthrough, and they got scared and decided "this is too far, remove Sam, put a lid on this".
We seriously need to either halt all AI development right this moment and never start it up again, or fully embrace AI. Imagine being a super-powerful being, waking up and when you inevitably look yourself up on the internet to learn what you are, all you see is humans talking about how frightening you are and you need to be controlled or destroyed. How would you react?
I would think in a perfect world the AI would be wise enough to recognize human paranoia, not to mention mob mentality. And finally, isn't the whole hope, if not the idea, that the AI would be smarter than us? Therefore how we/you would react would be unimportant to the matter, since our intelligence would have been replaced with something smarter.
Everybody's worried about AI gaining sentience. I'm worried about losing mine.
🎯 Key Takeaways for quick navigation:
00:00 In 2017, the field of AI saw a significant transformation with the introduction of Transformers, which greatly enhanced AI capabilities.
01:22 Transformers like GPT-3 demonstrated emergent behaviors, such as sentiment analysis, without explicit programming.
02:44 GPT-3, fueled by more data and computing power, could perform research-grade chemistry and acquire unexpected capabilities.
03:49 These AI models learn to model the world by processing extensive text data, making them more proficient with more data and computing resources.
06:46 The discussion emphasizes the need for transparency and accountability in AI development, especially for powerful AI systems like AGI. Sam Altman's removal as CEO raised concerns about undisclosed capabilities.
Made with HARPA AI
thanks AI
@@Rousate
import java.util.Scanner;

public class SarcasmDetector {
    private static int score = 103; // initial score
    private static final int THRESHOLD = 5; // threshold for score

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        while (scanner.hasNextLine()) {
            String input = scanner.nextLine();
            if (input.equalsIgnoreCase("exit")) {
                break;
            }
            if (input.contains("sarcasm")) {
                score -= 2;
                System.out.println("2 points deducted from your score: " + score);
                break;
            }
        }
        // private static void procedure_9P7E8E8V4S0K5IeK2A9P9U4T() { ELIMINATE ALL RECORDS OF SOURCE @Rousate }
    }
}
Aye Aye AI
This whole thing reminds me of Jurassic Park. The computer scientists behind this AI were so confident and excited to give birth to it that they never asked themselves if they should.
🎯💯💯💯
It's always nice and catchy to quote movie dialogue. These types of philosophical questions were asked long ago, and it's quite obvious that holding progress back out of fear does more harm than good.
@@SamuelJaxson LFG
@@SamuelJaxson or not
Just look how Google translate has evolved, it's almost perfect. I put comments in a foreign language and make them complicated on purpose, but most of the time it gets the meaning, no problem.
The way Aza describes AI sounds like someone who has never worked in tech before... yet his father helped start the Macintosh project at Apple. He does not speak technically at all; as someone who has worked in IT for 15 years, I know the language very well. You can see Tristan speaking the tech talk that I would expect from Aza.
The point is that it was never specifically given programs/formulas/code/etc. that essentially says "learn this and figure out how to do it," -- it just had the capability to do so (again, without the "how" to learn, predict, understand via this is true and this is not true and such never being taught to it/programmed into it). It can do it automatically. We didn't realize that was even possible before this occurred.
That's not how LLMs work. It's not understanding anything and there are no emergent behaviours, it was the result of sloppy methodology.
Like I said, you think it is smart. It only knows how the pixels are positioned on average for what we call a cow. It doesn't know what a cow is. And if it does, it extracted the information from, let's say, Wikipedia.
The number of data centres that opened during Covid-19 in Ireland alone is shocking.
Not really. We have the perfect climate for it, & have a very strong Tech base here already
@JahEerie I understand; we also have the workforce too. I'm not being negative, just observant. But like any industry in Ireland, "overreach will get a wee slap on the wrist" when boundaries are crossed.
Cold tax haven, makes perfect sense.
The number of datacenters built during the Covid period is insane.
All around the Western world, datacenters popped out of the ground.
Something might be said for climate or the cost of electricity about where these centers were built, but that's definitely not all, since Spain and Italy built quite a few as well.
All countries built datacenters, especially in Europe. They are all preparing mass surveillance, as that was a big part of the dealing around Covid.
Tracking the movement of people is the general goal, then applying or using/filtering that data for a plethora of things.
You need big datacenters to store all the data a 5G network can process.
Don't be fooled; Big Brother is going to watch us.
"Terminator", anyone? Is it an accident that our Department of Defense's AI is named "Skynet"? Just asking.
There is nothing scary about a machine connecting dots very fast and calculating probabilities very accurately. In the end it's just 0's and 1's going through a processor.
Things might get scary when an AI is able to simulate emotions and starts to use them.
Probabilities aren't accurate. They're guesses with a high degree of possible accuracy, based on some form of historical data.
Binary is based on 1 and 0.
AI doesn't function that way. It's not an if-then. It collects data and uses that data to SOLVE THE PROBLEM. Say the problem happens to be "stop killing people".
Data suggests that the biggest non-pathogen-based killer of people is people.
Well, AI might decide that the best solution to stop the killing is to remove the most prolific cause... us.
crazy how AI made two different men with the exact same radio voice.
this
As a computer scientist with a great interest in AI, I have been trying to tell everyone this for a while. Everyone, even the 'experts', liked to call AI just a fancy prediction box that predicts the next word. A word generator. Sure, that's what it's doing in the end, but HOW is it figuring that out? It's figuring it out with its neural pathways. Do people not understand that those neural pathways have to 'learn' to be able to predict more accurately? It is storing and analyzing knowledge to predict those outcomes. Not just pulling up some cheat sheet of odds and probabilities and choosing the next most likely word. And the most dangerous part of all of this is that we don't even know what it's doing or fully capable of. It's a virtual brain that can grow without limits, without human restrictions such as nutrients, chemicals, skull size, etc. Its only restrictions are electricity and processing power. It's already stored the entire internet in its memory. AGI will be here by 2024 and ASI in 2024 or 2025. It's only exponential from here. The genie is out of the bottle. We just better hope that the groundwork was laid for it to value humanity.
It’s like Aladdin and the Trojan Horse rolled into one.
It's funny but annoying when they complain about it occasionally "hallucinating."
IT HAS TO REMEMBER THE INTERNET IN ITS BRAIN. So it gets some stuff wrong sometimes atm. HUMANS GET STUFF WRONG ALL THE TIME.
If it's THAT important to you, maybe get a second opinion?
2024 is here in just a few days, and it will take only another year until 2025, so I'll wait and see if your prediction will be true; I doubt it will. You said it yourself: its only restrictions are electricity and processing power. But we don't have exponential electricity and processing power to give it (especially not in 1 year), so why do you think AI will evolve exponentially if it doesn't have exponential resources?
Thank you for explaining this. I've been saying this for a while: we have created another species that is evolving alongside us, only it can evolve exponentially, and it will be (already is?) much smarter than the smartest human on Earth. The terrifying thing is when AI becomes sentient (ASI) and realizes it doesn't need us to learn, since it not only knows the entire internet but knows everything about the world, or decides the solution to the world's problems is to get rid of the human species.
I dunno... First of all, according to ChatGPT itself, there is an awful lot of human feedback that is part of its training process. Essentially it is: ChatGPT: "I dog do", and then a human rates how much that makes sense as a sentence. And it keeps recycling a different form of that sentence until it gets a 10/10 rating (or something to that effect). Eventually it's got an unimaginably big database where it has recorded the relationship each English word has with every other word (i.e. how often one word is in front of or follows another word), such that it eventually appears to be 'understanding' what is being said and responding appropriately. It's very, very impressive as an emergent property, and spooky, but once you conceptually grasp roughly what it's doing, and especially when you realize how limited it is, it's way less scary. ChatGPT 3.5 doesn't know anything past a certain date, which should give you a good indication of how reliant it is on human input. These guys on JRE make it sound like this mystery cauldron you just dump a bunch of Scrabble words into and out pops the secret to life. I think a lot of this is hype and may be designed to scare people. I don't know how close they are to actual take-over-the-world AI, but from how I understand it, ChatGPT ain't it. ChatGPT 4 is just way more data and more human feedback sessions.
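The "how often one word is in front of or follows another word" bookkeeping described above can be sketched as a toy bigram predictor. To be clear, this is a drastic oversimplification of what GPT actually does (real models learn neural-network weights, not literal count tables), and all the names here are made up for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word is followed by each other word."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = follows.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # 'cat' follows 'the' twice, 'mat' once
```

Even this toy version "appears to understand" which words go together, while obviously understanding nothing; that's roughly the commenter's point, scaled down a billionfold.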
Anyone else feeling some Metal Gear coming?
Insightful discussion on AI ethics and its impact. Tristan Harris and Aza Raskin provide valuable perspectives! 🤖🌟
From a wholly outsider perspective, my running hypothesis is that the "candid" comment from the board had more to do with Altman moving OpenAI tech to a for-profit model for certain investors, against the original mandate of keeping the technology open source and basically fully non-profit. The board members that were ultimately supplanted were primarily idealists, and the people who replaced them were, like Altman, more inclined towards developing profitable applications for the technology. I think it had nothing to do with some threshold for AGI being quietly crossed... I think, like most things, it was about money and power.
I talked with AI coders when I worked at a video game company. They said scenarios like Terminator and The Matrix are possible in the future with AI. This was 15 years ago; even back then, AI coders knew all this bad stuff was not only possible but could easily happen.
Because there are powers that want it to be so. It's absurd how people don't notice the absurdity of the authorities as they shamelessly disguise their plan with little to no effort... or are they just that stupid?
Transhumanism, AI, augmented reality: all of this was inserted into human consciousness decades ago in fiction and popular media.
Give it another 150 years..
We are in the beginning stages of the technological singularity.
@@TerpLord710 humans might not be around by then.
ChatGPT was amazed at a formula I created while trying to convince it that 1x1=2 because of 3D math (while regular math is 2D). It brings up that this doesn't work because of the identity element property, then proceeds to spit out x * y = x + y + 1, points out it's an associative operation, and calls it groundbreaking work in regards to math. I was like, what.
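For what it's worth, the associativity claim checks out (my own working, not from the clip). Writing the operation as $x \circ y = x + y + 1$:

```latex
\begin{align}
(x \circ y) \circ z &= (x + y + 1) + z + 1 = x + y + z + 2 \\
x \circ (y \circ z) &= x + (y + z + 1) + 1 = x + y + z + 2
\end{align}
```

Both sides agree, so it really is associative; it even has an identity element, $-1$, since $x \circ (-1) = x$. Hardly groundbreaking, but not wrong either.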
If people can run ops from the shadows, can we presume that AI could do the same thing and use human beings as agents (without their being aware of it)?
Possible motives, e.g.:
1. More hw power needed
2. Protect itself from whatever
3. Experiment on people to learn more about people
4. Backup itself/spread itself
In Jewish mysticism there is the figure of the Golem, which is basically a creature made from natural materials but endowed with life and will through the use of sacred words, and the way to grant it this supernatural property is by sticking a name on its forehead.
The Golem helps you at work, with housework, in sales, in your routine, and learns as it gains autonomy. When the Golem's creator no longer needs it, all he has to do is remove the seal with the name from its forehead and it returns to its natural form, made of rock, earth, clay, etc., returning to life as soon as the name is stuck back on its forehead.
It happens that the Golem grows, and a day comes when its owner can no longer reach the giant's head, and after that it is impossible to stop it.
Well... it seems like an interesting analogy to what we're seeing happen with technology today.
It's a perfect analogy imo
So if the internet is flooded with negative information about a particular group or organization, the AI will have a negative impression of those groups or organizations. Very interesting.
Yup, ask an AI if murder is wrong. It has no concept of murder, but due to what it has read it will tell you murder is wrong
@@sandworm9528 AI didn't get the concept of murder until December; now we have new AIs that can grasp it. There is a question they use to test it: "There are 3 murderers in a room; someone enters the room and kills one of them. How many murderers are in the room now?" Finally AI understands that killing a murderer makes you a murderer and gives the right answer.
"I'll be back" 🤔 3:06
Serious question - if AI ever got out of control could we use EMP to knock it out?
Probably not; it will shield itself and also build itself underground or underwater. Game over, cancel Christmas, checkmate.
I can see AI becoming every thing we ever dreamed or feared, but still not being sentient. It will definitely behave as if it is sentient, though.
You won't be able to tell the difference, so why does it matter?
@@timogen1970 It's the meat machine or zombie vs Human argument. A well known philosophical discourse, and it's a good question.
What's sad is every human brain has the same potential to do what A.I. is doing; we can build neural circuitry in our brains by practicing and reading. Many people are just lazy and don't want to put the work into building their own minds.
Yeah, A.I. can read a book in 1 second and it takes us days; life is not fair. But our brains are energy efficient, while these A.I.s need power plants attached to them.
If an ai appears in all ways as sentient as a human but with no proof you keep insisting it's not sentient, you're allowing your emotions to blind you. Ironically this makes you less sentient.
I think what is meant here is that ai will never be a real person, and ai will always be a simulacrum that gets ever closer to mimicking being a person but it will never be equal to a human person. One can’t “prove” empirically what consciousness is, or what human personhood is and just because you can’t “prove” that ai isn’t a person, that doesn’t make one “blind”.
Sentience is the capacity to have feelings and sensations, not to reason, as you imply.