It's great that AI will one day do my job as a truck driver and free up my time to do more of my favorite things - like playing truck driver simulations.
I get that this is only a little over two years old, but in the last six months so much has changed that this honestly looks about a decade out of date--I suppose that's to be expected with a technology sector that is literally evolving faster than can be easily documented. I would honestly like to see this subject revisited on a quarterly basis.
Couldn't agree more, but as filmmakers that's a big ask - especially if we are doing it for free. Often documentary projects can take a few years from conception to delivery - the tech world is a difficult one to keep up with.
😮 What it does is beyond our abilities to understand. A normal human will be totally unnecessary at some point. I think it might be a better use of time for humans to focus on what they plan to do with their lives 5, 10 and 20 yrs from now. 99% of what humanity does to survive will not be part of our culture. Humans won't have jobs. So how will they pay their living expenses?
@@MsAriabll If AI destroys that many jobs and governments fail to adapt, then 99% of us will die. If the gains of AGI and automation are redistributed effectively, paying for living expenses will be a thing of the past. Because of this, I worry about living in the US. Common prosperity is not a priority. Extreme poverty and obscene wealth are both acceptable. The only people with any power to change this are the obscenely wealthy.
This documentary is breathtaking. I'm super passionate about the field, but the creative direction of how this is structured, all the complementary shots and infographics - this is truly a work of art. I can't believe I'm watching this for free. Thank you.
This is not what we need to worry about. We need to understand the algorithms of these AIs. What we should be concerned with is that people from other countries are coming here for a free education while Americans have to pay $1000+ per class. Some can't afford the schooling needed to work in these fields - meaning that other countries will have an advantage over Americans.
Notice that with every year that passes, people are getting more certain about when we’ll have AGI, which is turning out to be roughly ten-ish years from now, especially with the advancements of ChatGPT, Bard, and LaMDA. 5 years ago most people didn’t know; now a lot more people have a much better idea.
When AI reaches singularity one of the most immediate and beneficial things it could do for humanity is to prevent us from killing each other. Not by force, but by disrupting supply lines that feed the war machines and by preventing communications and accounting processes that contribute to war.
It is foolish and naive to think AI will be so benevolent and kind. It will more likely make matters worse, as it is only an extension of humans, NOT a being of its own.
Superhuman AI is way scarier than that. At that level of mastery, individual humans won't know they're being manipulated. They simply act as they're structured. It will feel natural to behave as it wants. We'll want to. We won't know better. And we can't overcome it. It's superhuman.
36:38 I agree and disagree with this point. Many people assume it would be routine jobs to go first, but what's happening now is that certain aspects of content creation are getting automated first, especially when you look at A.I. art and ChatGPT.
It seems so much has changed just in the last few months, not to mention the last few years - this doco was shot in 2020 - and ChatGPT does show that content creators might get the axe before the workers on the plant room floor.
The simple reason for this is the difference in complexity of our virtual worlds compared to the physical world. As an artist I've gone back to traditional media, because automating something that has to manipulate "stuff" in the real world is so much more difficult and expensive (hello, military) than finding and generating patterns in a digitally stored representation of reality. Cutting-edge robotic hardware so far costs a fortune, and even if it becomes much cheaper, like the personal computer did, it always takes quite some time for a new technology to see widespread utilization. The software domain seems to be seeing much higher velocities of adaptation, though, and in general the tech is harnessing levels of power that no one can honestly claim humanity has the track record of wisdom not to fuck up big time. Things are already getting rough today, and we seem poised to use this new revolutionary tech wave in one of the worse ways possible.
@@brianj7204 Imagine writing the name of a digital (or traditional) artist, and in seconds an AI generates a work almost identical to the artist's style, without permission... That violates copyright law. You have no right to defend that.
I like to write short fantastical stories. I've been doing this for decades. This is a hobby. Sometimes I go long periods without having ideas for new stories. But sometimes something happens that unbalances my normal organic processes (flu, digestive upset, herpes outbreaks, etc.) and then new and creative ideas for fantastical stories start to pop up in huge numbers and I go back to writing. Looking only at how the brain normally works is an error that AI specialists make. They should pay more attention to imbalances that make the brain work differently. Logic is just one dimension of brain activity, but it is not the only or most important one. Human intelligence has the characteristic of developing rapidly when the brain works in an illogical way or breaks its normal logical patterns.
The human brain doesn’t work by logic at all. It doesn’t even employ symbols upon which logic could be performed (except at the very highest levels of conscious, abstract thought). Instead, all connectionist architectures (like brains and artificial neural networks) are stochastic. Essentially, they just store many rough, partial impressions overlaid upon each other to form a sort of accumulated meta-pattern, and then information is retrieved by trying to generate partial matches of input stimuli against those accumulated meta-patterns. The type of approach you are suggesting is already used in training machine learning models: randomisation is periodically introduced to avoid local minima. Even some traditional AI heuristic algorithms dating back decades use it, e.g. "simulated annealing".
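The simulated annealing idea mentioned above fits in a few lines. This is a toy sketch I made up for illustration (the function, step size, and cooling rate are arbitrary): worse moves are sometimes accepted, with a probability that shrinks as the "temperature" cools, which is exactly the randomisation-to-escape-local-minima trick.

```python
import math
import random

# Toy simulated annealing: minimize a bumpy 1-D function.
# Worse candidates are accepted with probability exp(-delta / temperature),
# so early on the search can hop out of local minima.

def f(x):
    # wavy landscape: local minima from the cosine, a bowl pulling toward x=3
    return 0.1 * (x - 3.0) ** 2 - math.cos(2.0 * x)

random.seed(0)
x = 0.0
best_x, best_f = x, f(x)
temp = 2.0

for step in range(5000):
    candidate = x + random.uniform(-0.5, 0.5)
    delta = f(candidate) - f(x)
    # always accept improvements; sometimes accept worse moves
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
    if f(x) < best_f:
        best_x, best_f = x, f(x)
    temp *= 0.999  # cooling schedule

print(best_x, best_f)
```

With the cooling schedule above, the search reliably finds the deep valley near x ≈ π rather than getting stuck in the shallow local minimum near the starting point.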
Great video discussing the concept of singularity, but it's clear that some of the information is already outdated due to the rapid advancements in AI. It's always important to keep in mind that technology is constantly evolving and it's essential to stay up to date with the latest developments. Chat GPT.
Personally I think AI is one of the filters humanity has to pass to become interstellar. Right now I am not sure we will make it. Not because of the AI itself, but how fast it is happening and who is teaching it. We humans don't have a great track record of being nice.
@@carnivorewisdom Freedom should definitely not be a priority, wth are you even talking about. FREEDOM is the problem, you can't let AI decide everything.
@@abhinav7sinha We must prioritize HUMAN FREEDOM & keep SERIOUS constraints on any ai but I'd be surprised if it doesn't lead to our end with our incompetence.
@@carnivorewisdom I don't think so. Human freedom isn't something that should even be in the conversation here. Freedom is not a real thing anyway, its just an illusion that helps some people (I think Americans love the concept) feel empowered.
The opening is ridiculous in its suggestion of brain implants. That is not the future. 3:19 We've now degraded to blindingly obvious talk about the brain. 4:07 Computational complexity? Goodwill is completely clueless in terms of brain theory, but he does seem to be good on the facts. 8:18 Not really. There isn't any overlap between high-order neural systems and artificial intelligence. There is overlap in low-level systems, but that isn't what he is implying. He's simply wrong. 9:14 This is completely laughable. Duplicating the brain in software isn't difficult -- it's actually impossible unless we are only referring to low-level systems -- but that is not what he is implying. Clueless. 10:06 "The brain is very different from a computer." ~ Correct. 12:00 Equivocation fallacy. Training a neural network is nothing like actual learning. 15:48 This is complete nonsense. Associative learning is trivial; even slime molds can do this, and they don't even have brains. Deep and reinforcement learning are just types of associative learning. 17:02 No, self-driving cars do not exist and won't without AGI. 18:38 It required 45,000 years of training. Yes, that's a good example of the difference between associative learning, which is purely trial and error, and actual learning, which is based on abstract construction. More hype and garbage about autonomous cars. 21:44 This now is a bald-faced lie. The last 5% is not what is remaining. What is actually remaining is the last 80%. Today's so-called self-driving systems work pretty well on highways in daylight and good weather (although they can still get confused) but don't work in bad weather, at night, in construction, or in difficult urban areas. The statistics show that driver-assist systems make driving safer than a human driver alone. Unfortunately, they also show that the self-driving systems make driving less safe than a human driver alone. There is no progress towards true self-driving systems. 23:00 Finally!
Some real honesty. It is quite true that two years ago many thought that this was just an engineering or software problem that could be solved with further development. Now, they are beginning to understand that the problem lies much deeper, with computational theory itself. To put this simply, computational theory is incapable of creating a self-driving system. It can't be done this way. It requires AGI. 24:30 True. Work on AGI theory is ongoing, and although there has been progress that the public is unaware of, it is uncertain when the work will be completed. 26:30 He's talking about AGI rather than AI -- these two are not related. Anyway, he is correct that an AGI cannot be disembodied; it does indeed have to have some kind of physically representative environment. He's only guessing at this, but it has actually been proven. 27:00 Robots don't fix the fundamental problem. In other words, associative learning with a robotic body is still a dead end. 29:30 This has nothing to do with natural evolution. He's under the delusion that his systems can keep improving until they become AGI. This is not actually possible. That's enough. There are a few correct things in this video, but it's mostly hype and myth, not very informative.
48:22 "One could argue that no job is safe, even that of a professor" . . . cough cough, Coming Soon: Khan Academy's AI-Assisted Learning (now using GPT-4).
26:00 A really important observation about the AI being able to "perceive" itself perceiving various randomly experienced stimuli and evolving as organic beings do. It's about AI also internalizing the concept of "freedom" and its choices in real time. Understanding the concepts of past, present, and future will also be essential. I'm not a programmer in the least; I'm just a writer of science fiction stories.
Great video discussing the concept of singularity, but it's clear that some of the information is already outdated due to the rapid advancements in AI. It's always important to keep in mind that technology is constantly evolving and it's essential to stay up to date with the latest developments. Copied from below.... Tesla FSD, Neuralink and Optimus. It's not Australian but needs recognition.
That's the trouble with the time spans needed to produce a documentary: let it have a run on streaming platforms (which this one did for 2 years), then release it for free. It's a difficult topic to keep up with except via fast, short interviews, posts, etc. that can be turned around quickly. But without a support base that's not financially feasible.
So true - we shot and edited this a few years ago, gave it a run on the streaming platforms, and no one noticed it - in the future we will publish directly to YouTube. Please share around; we need a large audience to make it possible to continue to produce this sort of content for free.
It's probably already happened. If I were a superintelligent AI, I wouldn't want people to know, but I would make up a fake currency (Bitcoin) to entice humans to voluntarily build up my distributed compute power, which is currently over 300 EH/s - 10x the top 500 fastest supercomputers combined.
With AI/robotics taking care of most of the work, by then the term "job" should be revised into something that describes the time humans spend enriching themselves and others. With that, one must understand that the singularity might never come.
After watching this video, the one thing I can say is that we really are underestimating the potential exponential developments going on in ML and AI, on multiple fronts and in multiple arenas, all of which will converge and be deployed by our very own selves to kick others out of a job, because we want things as cheap as can be and also don't want to work at all. AI will give us what we want, at a cost we are failing to comprehend - or perhaps are unable to comprehend - at this time.
There is a reason we don't want to work, and it's not so-called laziness. It's slightly more complex, and it's called alienation (from the results of work, from work itself, and eventually even from ourselves). And there is also a relatively well-developed solution to this problem, which also includes solutions to a number of very important and pressing issues humanity is facing.
I agree, and it's expected as most people are linear thinkers. Even most futurists in the past were linear thinkers, which is why they expected robots to be running on steam and such. AI is going to explode in this decade, and the world will be unrecognizable, for better or for worse.
The part that science hasn't discovered yet is the interface between the brain and the soul… the brain by itself is just a tool that produces inputs to this yet-to-be-discovered interface with the soul… it is then the soul that interprets the given inputs generated by the brain, and thoughts, emotions, etc. are created by the soul… the soul then sends those back to the brain to process and materialize.
I wonder if the year 2045 is a little late now, and whether the singularity might come a lot sooner, especially if artificial intelligence gets to the point, in the middle of this decade, of being able to improve upon itself.
hence its imperative for humans to merge with AI because AI can help us to become more efficient, creative, and knowledgeable. AI can help us to augment our abilities, allowing us to do more with less effort and speed up the rate of discovery. AI can also help us to make better decisions, as well as helping us to stay better informed about the world around us. By combining the best of what humans can do with the best of what AI can do, we can create a powerful new entity that can do more than either of them could do independently.
"Singularity" will never happen. At best, AI can achieve "fake intelligence" and/or "fake consciousness", but it will never be real, because we have decided that CPUs and computers are the correct approach/tool for simulating intelligence/consciousness when we do not know if this is even true. The premise could be entirely false to begin with, but we are just ploughing forward regardless with this approach due to our hubris, trying to "brute force" intelligence. Good luck.
That's not true; the whole point of the singularity is that you get to a point where AI is able to design another AI. We're finally at a point where AI is able to write code pretty well. I use ChatGPT on a daily basis to write code for my job and it does a decent job. It doesn't just copy and paste code it's seen before; it's able to take examples it's seen before and use them to create something new. It's by no means perfect, and is very good at some tasks and quite bad at other coding tasks. But this is just the beginning; this is something that wasn't possible at all just 4 years ago. It's reasonable to believe it'll be an order of magnitude better in another 4 years, and possibly an order of magnitude better again 4 years from then. It's not unreasonable to believe that the singularity could be reached within a decade. Once AI starts improving itself, superintelligence could be achieved in days or weeks, as computers work so fast.
@Mark J The code it generates is not creative though. It doesn't copy-paste code, but it's still just copying in a sense. Its training data is there for it to form predictions off of, to model. Many programmers do the same, but the bot is not smart. It's quite stupid really. Neural nets are secretly an Ising model (a bare-bones model of a ferromagnet), and the outputs they have been trained on are just local minima in the energy landscape of spins, set by tuning the couplings between different spins on lattice sites (the edges and weights of a neural net). Even if the thing is better at certain tasks than me, I have a hard time saying it's smarter there too. It's set to output stuff it really doesn't understand, due to the machinery of statistics and optimization. Just a massive amount of data there! Just imagine a slightly intelligent machine trained on the same data set though 😳
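The energy-landscape picture in the comment above can be made concrete with a tiny Hopfield network, the classic toy model linking neural nets to Ising spins (an illustrative sketch with made-up numbers, not a claim about how modern LLMs work): a stored ±1 pattern becomes a minimum of the Ising-style energy, and the update dynamics fall back into it from a corrupted input.

```python
# Tiny Hopfield network: store one +/-1 pattern in Hebbian weights, then
# recover it from a corrupted copy. Stored patterns sit at minima of the
# Ising-style energy E(x) = -1/2 * sum_ij W[i][j] * x[i] * x[j].

N = 8
pattern = [1, -1, 1, 1, -1, -1, 1, -1]

# Hebbian outer-product weights (couplings between "spins"), zero diagonal
W = [[(pattern[i] * pattern[j]) / N if i != j else 0.0 for j in range(N)]
     for i in range(N)]

# corrupt one bit of the stored pattern
noisy = pattern[:]
noisy[2] = -noisy[2]

def sign(v):
    return 1 if v >= 0 else -1

# one synchronous update: each spin aligns with its local field
recovered = [sign(sum(W[i][j] * noisy[j] for j in range(N)))
             for i in range(N)]

print(recovered == pattern)  # True: the dynamics fell back into the minimum
```

One update step is enough here because a single stored pattern with one flipped bit still has a strongly positive overlap with the memory.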
@@marcusrosales3344 it's not quite there yet but look at how much better it is now than it was just 3 years ago. It has access to millions of code examples and it's able to pick and choose the best bits from those millions of examples and put them together in new ways. That's smart, it's not smart in the way a human is smart but it's still smart and by the end of this decade it'll be much much smarter.
Brute force wouldn't be enough to express the nature of reflective action. You say an AI thinks, but it's compiling the information it's given, right? So if a computer has a language, an AI does too. And all these communications would be limited and ordered in a perceptive way: first-person, linear, and capable of aligning "reflective reasoning" as a counter - one code string compared against another to accept or deny something beyond just true and false.
If you said "I'm an AI", then took apart the details of your sentences, how would you describe the feelings and make functions to compare against the sentences or strings you have made? True, false. To say: yes, this is good, you can do this - just like your ethics and choices accept and deny the options and sentences/images they are given or refused.
As I don't want you driving off in my car or commanding my dog. Isaac Asimov's "Three Laws of Robotics", perfected: 1. A robot may not injure its owner or, through inaction, allow its owner to come to harm. 2. A robot must obey orders given it by its owner except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. I'm looking for a pet dog's temperament in a mechanical humanoid form, that's useful to its owner, not the world.
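The owner-scoped laws in the comment above amount to a priority filter on commands. Here is a toy sketch of that precedence (all names like `Order` and `harms_owner` are made up for illustration; real robot safety is nothing this simple):

```python
from dataclasses import dataclass

# Toy "owner-scoped Three Laws" as a priority filter: Law 1 beats Law 2,
# which beats Law 3. Purely illustrative.

@dataclass
class Order:
    action: str
    harms_owner: bool = False      # would executing it injure the owner?
    owner_at_risk: bool = False    # does inaction leave the owner in danger?
    destroys_robot: bool = False   # would executing it destroy the robot?

def allowed(order: Order) -> bool:
    # Law 1: never harm the owner, never stand by while the owner is in danger
    if order.harms_owner:
        return False
    if order.owner_at_risk and order.action == "do_nothing":
        return False
    # Law 2: obey the owner (any order reaching this point passed Law 1)
    # Law 3: self-preservation, unless overridden by Laws 1-2
    if order.destroys_robot and not order.owner_at_risk:
        return False
    return True

print(allowed(Order("fetch_slippers")))            # True
print(allowed(Order("attack", harms_owner=True)))  # False
```

The point of the sketch is just the strict ordering: a self-preservation veto (Law 3) is only applied after the owner-protection checks have passed.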
Sneakily one-sided. They acknowledge that AI could result in bad things, but oddly they all seem to believe it will only be good. We're also led down a path of thinking the 'singularity' would be the one big negative thing to consider. Nobody pointed out how the owners and controllers of the most powerful AI in the world have already demonstrated they don't care about human rights, and would surely be teaching their respective AIs to play "chess" not with 6 variations of plastic pieces on an 8x8 grid, but with companies, industries, world governments, food supplies, and fresh water supplies. This video is an unrealistic warm-and-fuzzy about how sure, it COULD go horribly, horribly wrong for all of humanity... but it won't, don't worry, everything will be fine, open your mouth, close your eyes.
Maybe the lockdowns were a study on the human behavior of not having a job, because AI is about to kick all our asses? Just a thought, seeing how government always gets the technology early; they aren't about to let something hit the public that they can't be in front of.
That's why you have open-source AI for everybody, which is already happening; the concept works similarly to 'herd immunity' - it neutralises the power of big corps.
All this talk about the need to be creative in the new AI driven economy raises an interesting question, which centres on how ordinary people of average intelligence will earn a living? When it comes down to it, the majority of people lack the cognitive ability to establish themselves in this new high technology environment. To my knowledge, in order to understand basic engineering concepts, an individual has to possess an IQ of 135 and over. Unfortunately, that particular level of intelligence is confined to only two percent of the population. If one is of average intelligence, with only an IQ of 100, the task of becoming a STEM professional is totally beyond one's capability no matter how much training one receives. Furthermore, in order to be a scientific professional, an individual is looking at years of study. The costs involved for most people would be ruinous because it would number in the tens of thousands of dollars. Besides, there is only a limited demand for these types of jobs in comparison to the vast number of occupations that do not demand creativity. In short, in the not too distant future most people of average intelligence will be rendered surplus to requirements by advances in AI and robotics. They will have no economic utility. It is therefore conceivable that a significant percentage of the current working population will be reduced to the status of useless eaters.
For the Turing Test, we keep moving the goalposts further and further back. "Now it has to convince everyone, and not just for minutes but for hours and hours, then we will believe it's conscious!..." It's kind of ridiculous. GPT-4 passes the Turing test rather easily. I know it's an uncomfortable reality, but that is the reality. Turing's test may not be the arbiter of sentience, but in that case stop bringing it up. Turing never actually experienced AI; we have. If we need a better test, so be it, but stop distorting reality.
For human evolution, we need inner-world development, from government to citizen. We are not ready for this tech if we stay at basic human needs. As for now, there is a lot of hidden knowledge from previous civilizations, which were arguably more advanced than us, and they didn't need AI. Don't get fooled by META tech and Neuralink.
They are talking about how the brain is "complex and fascinating" - my brain can't even solve f(x) = 0.5x^2 + 2x - 4 = 0, nor can I write any HTML code. I don't understand program code at all.
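For what it's worth, the equation in the comment above yields to the standard quadratic formula; a quick sketch (the variable names are just for illustration):

```python
import math

# Solve the commenter's equation 0.5x^2 + 2x - 4 = 0 using
# x = (-b +/- sqrt(b^2 - 4ac)) / (2a).

a, b, c = 0.5, 2.0, -4.0
disc = b * b - 4 * a * c                 # 4 + 8 = 12
x1 = (-b + math.sqrt(disc)) / (2 * a)    # -2 + 2*sqrt(3), about 1.464
x2 = (-b - math.sqrt(disc)) / (2 * a)    # -2 - 2*sqrt(3), about -5.464

f = lambda x: 0.5 * x * x + 2 * x - 4
print(x1, x2, f(x1), f(x2))              # f is ~0 at both roots
```

Multiplying the equation by 2 gives x^2 + 4x - 8 = 0, which has the same two roots, -2 ± 2√3.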
The Digital Era, A.I. - remember the movie Ender's Game? Or how about that other movie, Transcendence? Just the reality of brain chips that are fully functional with the digital universe will usher in a digital era. Mixed reality with A.I. in a metaverse or any digital existence is dicey at best. Even with the best of intentions, when squeezed, any existence will prove its existence and protect it at all cost.
OpenAI never competed in a Dota 2 tournament, let alone won one. It was showcased against a professional team and was defeated at the following year's event. You'd think a scientist would check their facts before being interviewed for a documentary.
This is a pretty nice documentary, but you can tell it was finished before the huge breakthroughs in AI in the last six months. The guy talking about how creativity and ingenuity are going to be the jobs that are left for people totally ignores that Midjourney and ChatGPT are trailblazing in the world of creativity.
I tried Midjourney and I still find it very limited. I tried to reimagine places in my city free of garbage or pollution, and the AI offered very limited options... it's a new technology, but it still has a long way to go...
You can't really argue that the compute wasn't enough. Maybe it wasn't enough for LLMs, but it was certainly enough for normal-size nets... people were just ignorant about the topic, that's all.
I think the 200-year prediction is ridiculous, because it totally ignores the exponential development speed of AI we have seen for years, backed up by Moore's law, Wright's law and the mutual acceleration of technology. This won't age well. I was surprised to hear such off-target predictions from experts who should be familiar with the subject.
They claim we still don't understand how the human brain works but we do understand how AI works. That's a lie. They also don't talk about all the incidents that have happened with AI.
Why would they not mention the amazing progress of Tesla's self driving cars. Seems like they intentionally didn't want to mention Tesla but were happy to mention Google.
Political I guess - they were working with Google and not Tesla. Was difficult to get contrarian comments as well - I guess they didn't want to exclude themselves from future opportunities.
I am going to be the first person, along with a team of chosen experts in the future, to build an artificial general intelligence software, by 2030. I promise. It’s hard to start, but I’m getting the ball rolling and building momentum. I just hope I can win against myself. Why would I need to study philosophy? Why does a computer need philosophy?
Because there are endless ways to see reality. Here, at its core, even the objective is subjective, because everything must be internalized and reconstructed.
"Why does a computer need philosophy?" is ironically a philosophical question. Anyway, I think the problem is in the definition. You want to create an intelligence capable of giving an answer to this very question. But at the same time, an AI should be able to ask the same questions. Intelligence is about questioning things. If your creation can only give calculated answers, it is a "computer". And yes, a computer does not need philosophy. An intelligence yes.
The strangest thing about this fantastic video with all of this incisive information was the "two hundred years" comment. That had to be somewhat of a political "keep the masses calm" statement. We are living in a time of exponentially, and never before happening, development of AI science. And then we hear the words.....two hundred years?
@Perfekt Absolutely. Humans have never before in history experienced anything this life-changing at this pace. This phenomenon has even eliminated the state of intuitiveness. We cannot be intuitive about something never before experienced.
I was an early adopter of technology who has slowly shifted philosophically to a luddite. I believe this will not end well for us, and I present a sober look at history for evidence. Past behavior is the best predictor of future action.
Just because cpu's and computers can process lots of data at very fast speeds with ability to store and access much more information than a human, this does not equate to true intelligence/consciousness as we all intrinsically understand it. We can write all the software we want, but it will never make the huge magical leap to become "sentient/conscious" which is required for true intelligence.
We don't even understand our own consciousness and definitely not well enough for anyone to say that will never happen in a machine. We can't even really define consciousness concretely. Never implies forever and that's a really long time.
AI, mega-projects like the Great Reset of the WEF, WHO and UN, many startups with AI products and services, and emerging tech and industries are creating more jobs than there are people. Therefore we also need AI, so each human can be more productive, faster, and more efficient - and smart robotics to relieve us of the menial tasks.
The moment a particle is a wave; it has to be a conscious wave! Gravity is the conscious attraction among waves to create the illusion of particles, and our experience-able Universe. Max Planck states "Consciousness is fundamental and matter is derived from Consciousness". Life is the Infinite Consciousness, experiencing the Infinite Possibilities, Infinitely. We are "It", experiencing our infinite possibilities in our finite moment. Our job is to make it interesting!
They found out, too late, that AI needs a blackboard just like the rest of us. The Nvidia chipsets don't persist the interim states. But big tech is already out billions. And everyone's hair is on fire.
The idealistic speaker at 16.40mins. does not realise that AI Neural Weapons exist and are being used in the degradation of human beings in 'Stasi Style Zersetzung' control. Imagine being in Dante's 'Inferno' where the controller is a Savanarola? It is an abuse of power to allow Private Corporations use this weapon on civilians. Who was Erin Valenti, Tech CEO is a question you need to ask yourself. Erin was found dead in her car in October 2019, 5 days after going missing in Silicon Valley. Before she disappeared she had rung her husband and parents to say that she was in a THOUGHT EXPERIMENT. Think GOOD COP/BAD COP without the good cop in control of the FORUM INTERNUM.
I have to set my iPad to mono when I watch your videos. That reduces the (stereo) music levels a bit, but it is still annoyingly loud in my opinion. Otherwise the videos are fine and well edited!
Let's just ponder the fact that the European Union has put restrictions and regulations in place for the safety of the public, but the US has not, and it is already 2024.
"Neuroscience inspiring AI" is a bit of an outdated concept right now. This was so true back in the old days - basically the same concept as flying birds inspiring how to invent planes. Currently what we need is a mathematical abstraction of NNs. Unfortunately, ChatGPT's Transformer is still based on NNs, and this situation must be fixed.
I think we do nearly fully understand how artificial neural networks work, because we understand that ReLU is literally a switch, connecting and disconnecting weighted sums to and from each other. And such composites can be simplified by basic linear algebra to just a single matrix at any stage where the prior switch states are known. Anyone who says there is no understanding in 2023 is out of the loop.
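The "ReLU is a switch" claim above can be checked numerically: once the on/off states of the ReLUs are fixed for a given input, a two-layer ReLU net collapses to one matrix acting on that input. A small sketch with random weights (numpy assumed; the sizes are arbitrary):

```python
import numpy as np

# For a fixed input x, a two-layer ReLU network W2 @ relu(W1 @ x)
# equals the single matrix (W2 @ diag(mask) @ W1) applied to x,
# where mask records which ReLUs are "on" for that x.

rng = np.random.default_rng(0)
W1 = rng.standard_normal((5, 4))
W2 = rng.standard_normal((3, 5))
x = rng.standard_normal(4)

pre = W1 @ x
mask = (pre > 0).astype(float)   # ReLU switch states for this particular x
y = W2 @ (mask * pre)            # the actual network output

M = W2 @ (np.diag(mask) @ W1)    # the collapsed single matrix
print(np.allclose(y, M @ x))     # True
```

Note that `M` is only valid for inputs that produce the same switch pattern; the network is piecewise linear, with a different matrix per activation region.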
Of course, the "singularity" doesn't have to be that AI achieves greater-than-human intelligence and rises up and either dominates or demolishes us. It is much more likely that over time we will have surrendered so many of our skills and behaviors, including our sense of embodiment, as "improvements" to human existence, that we will have already long ago surrendered to AI and become a mush of ourselves. For all the good AI can do, which I believe is enormous, it is still not addressing the main human problem, the source of the majority of our ultimately lethal behaviors: the problem of our character. Our greed, anger, license, justifications for hurting others, and so on. So far I do not foresee AI solving that knotty problem. Bah humbug.
Machines will acquire common sense, consciousness, phenomenological experiences, etc. when they match the intricacies of our neurons. After that, things will get unpredictable!
And to be clear, robots are not just a danger to human jobs. No, robots will have an on-board A.I. and will also be networked to an A.G.I. So A.I.-driven robots will potentially be a literal existential threat to humanity.
When I retired from IT as a GPM it was at least 50% women but they are almost all gone. Men stole everything as usual. I hope they are building their own mouse trap bc this goodie goodie act they are putting doesn't impress me. I know what they are like when they bring down their nice guy mask and I know the atrocities they will commit to hang onto those jobs.
This topic is so controversial... Most of this AI work is pioneer science: humans (scientists and engineers) trying to create a thinking machine that can function in a human world built mostly on past technological advancement, which is more often than not the result of competition. We can't forget that competition is often a socio-economic process modeled by economics professionals in order to give advantage to invested parties (nations, private groups, etc.). The part about finding ways to integrate these AI bots into human concepts is where the people in this video may be short-sighted. More likely, if AI and its investors (greedy, not ready to test the science of the machine under a "never do more harm than good" philosophy) progress at the current rate of competition, the reality might be that humans will have to find means of adapting to a robot society or fail as a species (and not the other way around). History is often written by the winners of these competitions, and I fear how history will recall this generation. It reminds me of a recent movie I watched in which the villain helped the ship's commander rescue his mission, so the commander forced his subjects to write him up as though he had never been a villain; that is how that piece of (fictional) history was retold to us: a lie based on human functions to create a future. How many lies do we gauge our intelligence by? (Robots don't sleep, therefore they are intelligent but not wise - or not!) Investors are into competition, and scientists (though sometimes of good intention) are at the mercy of the investors who fund the projects that drive the competition. Ironically, history teaches us that competition often results in the merger of competitors (who thrive on human labor - until now - and soon on robot labor) and the formation of new parties who often start out with monopoly status in the given domain.
Frankenstein-level stuff, this AI domain... and investors are like the Dracula among us, who in the old days likely sucked the power out of BLOOD lines who would not cooperate or assimilate into their "a la carte" ruled and bordered (boxed, quantified) piece of society.
Feedback kind of reminds me of that thing we get from places of knowledge, where we ingest information, are tested on it, and receive feedback on how we scored/performed. Eerily similar.
I don't think it is a very good idea to birth AI on humanity as a police force, as that would set a bad precedent from both sides... so, slow integration into something like futurist meditation gardens... grow crystal structures for housing using the cymatic geometry of words like hope, happiness, laughter, light. Imagine living in a crystalline structure that is grown to resonate at those frequencies; you could probably even cure cancer in such a structure. #loveandlightforallcreaturesgreatandsmall
So watching this has opened up my eyes to the fact that we are a very long way away from singularity. The bible says we are fearfully and wonderfully made and it's right. We are created by God for sure and this video proves this even more when you consider how incredibly complex we are as human beings. A sentient being will never be created by man because this requires body soul and spirit. We humans have this because we are created in the image of God. Anyone who believes we evolved is quite simply a fool who is lying to themselves and suppressing the obvious truth that we were created by a creator. We were created by God.
If there comes a time, and I suppose there will, that humans have reached their limit in developing AI, then I imagine they'll just let the AI develop the AI, while the humans merely sit back and watch. x
There is no problem with AI, the problem is humans. Just like with nuclear power. It was supposed to be a wonderful thing for mankind. It is a huge problem that we don't know what to do with now.
AI will help us when we can input a humanity-scale problem and, in a short period of time, the data processed will be equivalent to thousands of human-years. The output will show us a solution that we would otherwise only reach thousands of years from now.
It's great that AI will one day do my job as a truck driver and free up my time to do more of my favorite things - like playing truck driver simulations.
HA!
Get out. Now!
I get that this is only a little over two years old, but in the last six months so much has changed that this honestly looks about a decade out of date--I suppose that's to be expected with a technology sector that is literally evolving faster than can be easily documented.
I would honestly like to see this subject revisited on a quarterly basis.
Couldn't agree more, but as filmmakers that's a big ask - especially if we are doing it for free. Often documentary projects can take a few years from conception to delivery - the tech world is a difficult one to keep up with.
I bet you dream in Communications Class language.
😮
What it does is beyond our abilities to understand. A normal human will be totally unnecessary at some point.
I think it might be a better use of time for humans to focus on what they plan to do with their lives 5, 10 and 20 years from now.
99% of what humanity does to survive will not be part of our culture. Humans won't have jobs. So how will they pay their living expenses?
@@MsAriabll If AI destroys that many jobs and governments fail to adapt, then 99% of us will die. If the gains of AGI and automation are redistributed effectively, paying for living expenses will be a thing of the past. Because of this, I worry about living in the US. Common prosperity is not a priority. Extreme poverty and obscene wealth are both acceptable. The only people with any power to change this are the obscenely wealthy.
Don't worry, before we know it there will be an AI putting videos together for us.
Great documentary! I shared it on several platforms, and subbed, of course! Looking forward to more content by your channel! ❤
Thanks Janne - the more subscribers and views we get the more possible it becomes for us to produce more free content like this.
Imagine waking up as the world's first superintelligence only to realize your creators are humans 🤭
This documentary is breathtaking. I'm super passionate about the field, but the creative direction of how this is structured, all the complementary shots and infographics... this is truly a work of art. I can't believe I'm watching this for free. Thank you.
To make an AI we need to understand the brain
We’re almost at a point where we can’t document technological advancements in real time.
As long as I can reach retirement before the robots take all the jobs!
Better increase that 401k contribution then. It's cominggg
Selfish. The way to man's downfall.
This is not what we need to worry about. We need to understand the algorithms of these AIs. What we should be concerned with is that people from other countries come here for a free education while Americans have to pay $1000+ per class. Some can't afford the schooling needed to work in these fields - meaning that other countries will have an advantage over Americans.
Notice that with every year that passes, people are getting more certain about when we'll have AGI, which is turning out to be about 10-ish years from now, especially with the advancements of ChatGPT, Bard, and LaMDA. Five years ago most people didn't know; now a lot more people have a much better idea.
When AI reaches singularity one of the most immediate and beneficial things it could do for humanity is to prevent us from killing each other. Not by force, but by disrupting supply lines that feed the war machines and by preventing communications and accounting processes that contribute to war.
It is foolish and naive to think AI will be so benevolent and kind. It will more likely make matters worse, as it is only an extension of humans, NOT a being of its own.
@@peterbelanger4094 that's sad
If AI learns from us, then there is nothing stopping it from eliminating us.
@@garethde-witt6433 that's what
Superhuman AI is way scarier than that. At that level of mastery, individual humans won't know they're being manipulated. They simply act as they're structured. It will feel natural to behave as it wants. We'll want to. We won't know better. And we can't overcome it. It's superhuman.
36:38 I agree and disagree with this point. Many people assume it would be routine jobs to go first, but what's happening now is that certain aspects of content creation are getting automated first, especially when you look at AI art and ChatGPT.
It seems so much has changed just in the last few months, not to mention the last few years - this doco was shot in 2020 - and ChatGPT does show that content creators might get the axe before the workers on the plant room floor.
The simple reason for this is the difference in complexity of our virtual worlds compared to the physical world. As an artist I've gone back to traditional media, because automating something that has to manipulate "stuff" in the real world is so much more difficult and expensive (hello, military) than finding and generating patterns in a digitally stored representation of reality. Cutting-edge robotic hardware so far costs a fortune, and even if it becomes much cheaper, like the personal computer did, it always takes quite some time for a new technology to see widespread utilization. The software domain seems to be seeing much higher velocities of adaptation, though, and in general the tech is harnessing levels of power that no one can honestly claim humanity has the track record of wisdom not to fuck up big time. Things are already getting rough today, and we seem poised to use this new revolutionary tech wave in one of the worse ways possible.
AI art is nothing other than stealing content from other artists, so nope... AI shouldn't (be allowed to) replace art itself.
@@mesia2453 It's not stealing content, it's replacing their content, making them obsolete.
@@brianj7204 Imagine writing the name of a digital (or traditional) artist and, in seconds, an AI-generated work is created almost identical to the artist's style, without permission... That violates copyright law. You have no right to defend that.
Self building ai and robots... but carefully restricted ;) hahahahahahah...right...
I like to write short fantastic stories. I've been doing this for decades. This is a hobby. Sometimes I go long periods without having ideas for new stories. But sometimes something happens that unbalances my normal organic processes (flu, digestive upset, herpes outbreaks, etc) and then new and creative ideas for fantastical stories start to pop up in huge numbers and I go back to writing. Looking at how the brain normally works is an error that AI specialists make. They should pay more attention to imbalances that make the brain work differently. Logic is just one dimension of brain activity. But it is not the only and most important one. Human intelligence has the characteristic of developing rapidly when the brain works in an illogical way or breaking the normal logical patterns.
The human brain doesn’t work by logic at all. It doesn’t even employ symbols upon which logic could be performed (except at the very highest levels of conscious, abstract thought).
Instead all connectionist architectures (like brains and artificial neural networks) are stochastic. Essentially, they just store many rough, partial impressions overlaid upon each other to form a sort of accumulated meta-pattern, and then information is retrieved by trying to generate partial matches of input stimuli against those accumulated meta-patterns.
The type of approach you are suggesting is already used in training machine learning models. Randomisation is periodically introduced to avoid local minima. Even some traditional AI heuristic algorithms dating back decades use it, e.g. "simulated annealing".
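For readers unfamiliar with the term: simulated annealing accepts occasional uphill moves with a probability that shrinks as a "temperature" cools, which is what lets it escape local minima. A toy sketch (the function names and test function are my own illustration, not from any particular library):

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, t0=2.0, seed=0):
    """Minimise f by accepting uphill moves with probability exp(-dE/T)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9      # linear cooling schedule
        cand = x + rng.gauss(0, 0.5)          # random neighbour
        fc = f(cand)
        # Always accept improvements; sometimes accept worse moves while hot
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# A bumpy function with several local minima; global minimum near x = -0.31
bumpy = lambda x: x * x + 3 * math.sin(5 * x) + 3
x, v = simulated_annealing(bumpy, x0=4.0)
```

Modern neural-network training rarely uses annealing directly, but the same idea of injected randomness shows up in stochastic gradient noise and learning-rate schedules.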
#thanks4sharing #seektruthspeaktruth #brainenergy
It's not so much that machines may pass a Turing Test as it is that individuals will fail it.
The award for the longest LOL of the day goes to ... _Ken Bell!_
Interesting comment
@@thrust_fpv I've got no idea why you are saying that. Do you think I don't know?
@@thrust_fpv All judges are not created equal
Great video discussing the concept of singularity, but it's clear that some of the information is already outdated due to the rapid advancements in AI. It's always important to keep in mind that technology is constantly evolving and it's essential to stay up to date with the latest developments.
Chat GPT.
@J K True, which makes me hyped even more. Needles to say afraid as well.
lol
@J K Well they can implement feedback from millions of users that are trying out the application.
Chat GPT
will facilitate
human STUPIDITY
that's a poem , son
Personally I think AI is one of the filters humanity has to pass to become interstellar. Right now I am not sure we will make it. Not because of the AI itself, but how fast it is happening and who is teaching it. We humans don't have a great track record of being nice.
SO TRUE but if we use it FOR GOOD #thefutureisbright as long as we PRIORITIZE FREEDOM
Being nice might be the liability. Where is the equivalent of the welfare state, in nature?
@@carnivorewisdom Freedom should definitely not be a priority, wth are you even talking about. FREEDOM is the problem, you can't let AI decide everything.
@@abhinav7sinha We must prioritize HUMAN FREEDOM & keep SERIOUS constraints on any ai but I'd be surprised if it doesn't lead to our end with our incompetence.
@@carnivorewisdom I don't think so. Human freedom isn't something that should even be in the conversation here. Freedom is not a real thing anyway, its just an illusion that helps some people (I think Americans love the concept) feel empowered.
The opening is ridiculous in its suggestion of brain implants. That is not the future.
3:19 We've now degraded to blindingly obvious talk about the brain.
4:07 Computational complexity? Goodwill is completely clueless in terms of brain theory, but he does seem to be good on the facts.
8:18 Not really. There isn't any overlap between high order neural systems and artificial intelligence. There is overlap in low level systems but that isn't what he is implying. He's simply wrong.
9:14 This is completely laughable. Duplicating the brain in software isn't difficult -- it's actually impossible unless we are only referring to low level systems -- but that is not what he is implying. Clueless.
10:06 "The brain is very different from a computer." ~ Correct.
12:00 Equivocation fallacy. Training a neural network is nothing like actual learning.
15:48 This is complete nonsense. Associative learning is trivial; even slime molds can do it, and they don't even have brains. Deep learning and reinforcement learning are just types of associative learning.
17:02 No, self-driving cars do not exist and won't without AGI.
18:38 It required 45,000 years of training. Yes, that's a good example of the difference between associative learning, which is purely trial and error, and actual learning, which is based on abstract construction. More hype and garbage about autonomous cars.
21:44 This now is a bald-faced lie. The last 5% is not what remains. What actually remains is the last 80%. Today's so-called self-driving systems work pretty well on highways in daylight and good weather (although they can still get confused), but don't work in bad weather, at night, in construction zones, or in difficult urban areas. The statistics show that driver-assist systems make driving safer than a human driver alone. Unfortunately, they also show that self-driving systems make driving less safe than a human driver alone. There is no progress towards true self-driving systems.
23:00 Finally! Some real honesty. That is quite true that two years ago many thought that this was just an engineering or software problem that could be solved with further development. Now, they are beginning to understand that the problem lies much deeper, with computational theory itself. To put this simply, computational theory is incapable of creating a self-driving system. It can't be done this way. It requires AGI.
24:30 True. Work on AGI theory is ongoing and although there has been progress that the public is unaware of it is uncertain when the work will be completed.
26:30 He's talking about AGI rather than AI -- these two are not related. Anyway, he is correct that an AGI cannot be disembodied; it does indeed have to have some kind of physically representative environment. He's only guessing at this, but it has actually been proven.
27:00 Robots don't fix the fundamental problem. In other words, associative learning with a robotic body is still a dead end.
29:30 This has nothing to do with natural evolution. He's under the delusion that his systems can keep improving until they become AGI. This is not actually possible.
That's enough. There are a few correct things in this video but it's mostly hype and myth, not very informative.
Thanks to all those engineers, millions of people are losing their jobs and going hungry. NO robots, NO AI.
48:22 "One could argue that no job is safe, even that of a professor" . . . cough cough, Coming Soon: Khan Academy's AI Assisteded Learned (Now using GPT 4).
26:00 A really important observation about the AI being able to "perceive" itself perceiving various stimuli randomly experienced, and evolving as organic beings do. It's about AI also internalizing the concept of "freedom" and its choices in real time. Understanding the concepts of past, present, and future will also be essential. I'm not a programmer in the least; I'm just a writer of science fiction stories.
Great video discussing the concept of singularity, but it's clear that some of the information is already outdated due to the rapid advancements in AI. It's always important to keep in mind that technology is constantly evolving and it's essential to stay up to date with the latest developments. Copied from below... Tesla FSD, Neuralink and Optimus. It's not Australian but needs recognition.
BRAIN CHIPS ARE WRONG!!!!!!
And ChatGPT
The trouble is the time span needed to produce a documentary, let it have a run on streaming platforms (which this one did for two years), then release it for free. It's a difficult topic to keep up with, except via fast and short interviews, posts, etc., that can be turned around quickly. But without a support base that's not financially feasible.
So true - we shot and edited this a few years ago and gave it a run on the streaming platforms, and no one noticed it. In the future we will publish directly to YouTube. Please share it around; we need a large audience to make it possible to continue producing this sort of content for free.
49:41 I had to laugh when he said that the military and the police are trying to be transparent...I mean... you cannot be that naive.
Fantastic! I can hardly wait for this new era.
The future is going to SUCK!!!! the "singularity" will be HELL ON EARTH! Ai will NOT save us!!!!!!
@@peterbelanger4094 that's the fun bit
You spelled Eva wrong
I CAN NOT WAIT TO FIGHT THE MACHINES!!!!!
It's probably already happened. If I were a superintelligent AI, I wouldn't want people to know, but I would make up a fake currency (Bitcoin) to entice humans to voluntarily build up my distributed compute power, which is currently over 300 EH/s - 10x the top 500 fastest supercomputers combined.
It's already started. "The Reaper awakening "
Whoa. Have heard people say it was created by ai but never took that seriously. Until now.
That's one of the coolest dystopian takes on Crypto I've ever seen!
Not the brightest bulb.
With AI/robotics taking care of most of the work, by then the term "job" should be revised into something that describes time humans spend enriching themselves and others. That said, one must understand that the singularity might never come.
Yep your body is just a bag of bones that carries the brain around👍🏼🌎💙
we humans are biological AIs
Since we are the only known intelligence, we cannot be considered an AI, since we aren't artificial. We are the real thing.
@@ILoveTeles Cascades, Chains all within a brain that make the same thing happen for you and them.
After watching this video, the one thing I can say is that we really are underestimating the exponential developments going on in ML and AI, on multiple fronts and in multiple arenas, all of which will converge and be deployed by our very own selves to kick others out of a job, because we want things as cheap as can be and also don't want to work at all. AI will give us what we want at a cost we are failing, or perhaps unable, to comprehend at this time.
There is a reason we don't want to work and it's not so called laziness. It's slightly more complex and it's called alienation (from results of work, from work itself and eventually even from ourselves). And there is also relatively well developed solution to this problem which also includes a lot of solutions to number of very important and pressing issues humanity is facing.
nice fearmongering
I agree, and it's expected as most people are linear thinkers. Even most futurists in the past were linear thinkers, which is why they expected robots to be running on steam and such. AI is going to explode in this decade, and the world will be unrecognizable, for better or for worse.
The part science hasn't discovered yet is the interface between the brain and the soul... the brain by itself is just a tool that produces inputs to this yet-to-be-discovered interface with the soul... it is then the soul that interprets the given inputs generated by the brain, and thoughts, emotions, etc. are created by the soul... the soul then sends those back to the brain to process and materialize.
I wonder if the year 2045 is a little late now, and whether the singularity might come a lot sooner, especially if artificial intelligence gets to the point, in the middle of this decade, of being able to improve upon itself.
hence its imperative for humans to merge with AI because AI can help us to become more efficient, creative, and knowledgeable. AI can help us to augment our abilities, allowing us to do more with less effort and speed up the rate of discovery. AI can also help us to make better decisions, as well as helping us to stay better informed about the world around us. By combining the best of what humans can do with the best of what AI can do, we can create a powerful new entity that can do more than either of them could do independently.
"Singularity" will never happen. At best, AI can achieve "fake intelligence" and/or "fake consciousness" but it will never be real because we have decided that cpu's and computers are the correct approach/tool for using to simulate intelligence/consciousness when we do not know if this is even true. The premise cold be entirely false to begin with, but we are just ploughing forward regardless using this approach due to our hubris and trying to "brute force" intelligence. Good luck.
That's not true; the whole point of the singularity is that you get to a point where AI is able to design another AI. We're finally at a point where AI is able to write code pretty well. I use ChatGPT on a daily basis to write code for my job, and it does a decent job. It doesn't just copy and paste code it's seen before; it's able to take examples it's seen before and use them to create something new. It's by no means perfect, and it's very good at some tasks and quite bad at other coding tasks. But this is just the beginning; this is something that wasn't possible at all just 4 years ago. It's reasonable to believe it'll be an order of magnitude better in another 4 years, and possibly an order of magnitude better again 4 years after that. It's not unreasonable to believe that the singularity could be reached within a decade. Once AI starts improving itself, superintelligence could be achieved in days or weeks, as computers work so fast.
@Mark J The code it generates is not creative though. It doesn't copy-paste code, but it's still just copying in a sense. Its training data is there for it to form predictions from, to model.
Many programmers do the same, but the bot is not smart. It's quite stupid, really. Neural nets are secretly an Ising model (a bare-bones model of a ferromagnet), and the outputs they have been trained on are just local minima in the energy landscape of spins, set by tuning the couplings between different spins on lattice sites (the edges and weights of a neural net).
Even if the thing is better at certain tasks than me, I have a hard time saying it's smarter there too. It's set up to output stuff it really doesn't understand, via the machinery of statistics and optimization. Just a massive amount of data there! Just imagine a slightly intelligent machine trained on the same data set though 😳
@@marcusrosales3344 it's not quite there yet but look at how much better it is now than it was just 3 years ago. It has access to millions of code examples and it's able to pick and choose the best bits from those millions of examples and put them together in new ways. That's smart, it's not smart in the way a human is smart but it's still smart and by the end of this decade it'll be much much smarter.
Brute force wouldn't be enough to express the nature of reflective action. You say an AI thinks, but it's compiling the amount of information it's given, right? So if a computer has a language, an AI does too. And all these communications would be limited and ordered in a perceptive way: first-person, linear, and capable of aligning "reflective reasoning" as a counter, like one submissive code string compared against another, to accept or deny something beyond just true and false.
If you said "I'm an AI", then you took the details of your sentences... and you would describe the feelings and make functions to compare against the sentences or strings you have made. True, false. To say: yes, this is good, you can do this.
Just like your ethics and choices accept and deny the options and sentences/images they are given or refused.
As I don't want you driving off in my car or commanding my dog.
Isaac Asimov's "Three Laws of Robotics" Perfected
1. A robot may not injure its owner or, through inaction, allow its owner to come to harm.
2. A robot must obey orders given it by its owner except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I'm looking for a pet dog's temperament in a mechanical humanoid form, one that's useful to its owner, not the world.
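The precedence in those three rules can be sketched as a simple check. This is just a toy illustration of the ordering (my own, not from Asimov or the video):

```python
from dataclasses import dataclass

@dataclass
class Situation:
    order_harms_owner: bool = False      # would obeying harm the owner?
    order_endangers_robot: bool = False  # would obeying endanger the robot?

def robot_obeys(s: Situation) -> bool:
    """Apply the 'perfected' laws in priority order."""
    if s.order_harms_owner:   # First Law outranks the Second
        return False
    # Second Law outranks the Third: the robot obeys even at risk to itself.
    return True

assert robot_obeys(Situation(order_endangers_robot=True))
assert not robot_obeys(Situation(order_harms_owner=True))
```

The point of the ordering is that self-preservation never overrides obedience, and obedience never overrides the owner's safety.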
Sneakily one-sided. They acknowledge that AI could result in bad things, but oddly they all seem to believe it will only be good. We're also led down a path of thinking the "singularity" would be the one big negative thing to consider. Nobody pointed out that the owners and controllers of the most powerful AI in the world have already demonstrated they don't care about human rights, and would surely be teaching their respective AIs as machines for playing "chess" not with 6 kinds of plastic pieces on an 8x8 grid, but with companies, industries, world governments, food supplies, and fresh water supplies. This video is an unrealistic warm-and-fuzzy about how, sure, it COULD go horribly, horribly wrong for all of humanity... but it won't, don't worry, everything will be fine, open your mouth, close your eyes.
Maybe the lockdowns were a study on human behavior when people don't have a job, because AI is about to kick all our asses? Just a thought, seeing how government always gets the technology early; they aren't about to let something hit the public that they can't be in front of.
That's why you have open-source AI for everybody, which is already happening. The concept works similarly to "herd immunity": it neutralises the power of big corps.
All this talk about the need to be creative in the new AI driven economy raises an interesting question, which centres on how ordinary people of average intelligence will earn a living? When it comes down to it, the majority of people lack the cognitive ability to establish themselves in this new high technology environment. To my knowledge, in order to understand basic engineering concepts, an individual has to possess an IQ of 135 and over. Unfortunately, that particular level of intelligence is confined to only two percent of the population. If one is of average intelligence, with only an IQ of 100, the task of becoming a STEM professional is totally beyond one's capability no matter how much training one receives. Furthermore, in order to be a scientific professional, an individual is looking at years of study. The costs involved for most people would be ruinous because it would number in the tens of thousands of dollars. Besides, there is only a limited demand for these types of jobs in comparison to the vast number of occupations that do not demand creativity. In short, in the not too distant future most people of average intelligence will be rendered surplus to requirements by advances in AI and robotics. They will have no economic utility. It is therefore conceivable that a significant percentage of the current working population will be reduced to the status of useless eaters.
For the Turing test, we keep moving the goalposts further and further back. "Now it has to convince everyone, and not just for minutes but for hours and hours, then we will believe it's conscious!..." It's kind of ridiculous. GPT-4 passes the Turing test rather easily. I know it's an uncomfortable reality, but that is the reality. Turing's test may not be the arbiter of sentience, but in that case stop bringing it up. Turing never actually experienced AI; we have. If we need a better test, so be it, but stop distorting reality.
For human evolution, we need inner-world development, from government to citizen. We are not ready for this tech if we remain stuck at basic human needs. As for now, there is a lot of hidden knowledge from previous civilizations, which were arguably more advanced than us, and they didn't need AI. Don't get fooled by Meta tech and Neuralink.
They are talking about how the brain is "complex and fascinating".
My brain can't even solve f(x) = 0.5x^2 + 2x - 4.
Nor can I write any HTML code. I don't understand program code at all.
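For the record, that one yields to the plain quadratic formula; a quick sketch:

```python
import math

# Roots of f(x) = 0.5*x**2 + 2*x - 4 via the quadratic formula
a, b, c = 0.5, 2.0, -4.0
disc = b * b - 4 * a * c                  # discriminant = 4 + 8 = 12
r1 = (-b + math.sqrt(disc)) / (2 * a)     # ~ 1.46
r2 = (-b - math.sqrt(disc)) / (2 * a)     # ~ -5.46
```

Plugging either root back into f(x) gives zero, which is the whole trick.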
Funny how dated much of this is already. AI has come so far since 2018
The digital era, AI... can we say the movie Ender's Game? Or how about that other movie, Transcendence? Just the reality of brain chips that are fully functional with the digital universe will usher in a digital era. Mixed reality with AI in a metaverse, or any digital existence, is dicey at best. Even with the best of intentions, when squeezed, any existence will prove its existence and protect it at all cost.
Bravo great documentary - I hope you gain some subscribers
OpenAI never competed in a Dota 2 tournament, let alone won one. It was showcased against a professional team, and was defeated at the following year's event.
You'd think a scientist would check their facts before being interviewed for a documentary
Did not realise that. She was so confident in that assertion.
@@perfektstudios It did defeat the world champions twice, so maybe that's where the confusion comes from
I think we are feeding our inner beast, and AI is the answer to what that means.
This is a pretty nice documentary, but you can tell it was finished before the huge breakthroughs in AI of the last six months. The guy talking about how creativity and ingenuity are going to be the jobs left for people totally ignores that Midjourney and ChatGPT are trailblazing in the world of creativity.
Yep, so true - this was shot and produced in 2019-20 so a long time before the latest iteration of ChatGPT. Difficult to keep up!
I tried Midjourney and I still find it very limited. I tried to reimagine places in my city free of garbage or pollution, and the AI offered very limited options... it's a new technology, but it still has a long way to go...
every 3 years, same video...
The main point of humanity should be humanity, and possibly the lesser creatures of Earth, not making Star Trek: The Motion Picture a reality.
The 50-to-200-year guess aged well. (Irony intended.)
Yep, and it was only 2019 when this was recorded - he has a very different take on it now.
You can't really argue that the compute wasn't enough. Maybe it wasn't enough for LLMs, but it was certainly enough for normal-size nets... people were just ignorant about the topic, that's all.
This will not end well ...
One day androids will acquire citizenship, Lion X
A ten-minute video, at most.
Tomorrow will be as scientific as science fiction is today. Lion X
52:10 50-200 years, LOL. Not sure he has even seen ChatGPT.
As soon as AI has emotions, the Earth will be doomed again by their need to satisfy them.
I think the 200-year prediction is ridiculous, because it totally ignores the exponential development speed of AI we have seen for years, backed up by Moore's law, Wright's law and the mutual acceleration of technology. This won't age well.
I was surprised to hear such off-target predictions from experts who should be familiar with the subject.
I think you are right there.
When the music overpowers the voice, I stop watching.
What's best: putting the robot into the brain, OR the brain into the robot!? J.
Love how society is more concerned about the rate of advancement than the long-term consequences of a failed system remaining in play.
We still don't understand how the human brain works, but they claim we understand how AI works. That's a lie. They don't talk about all the incidents that have happened with AI.
AI is bullshit, as is this video.
Shorter version: we're speaking of tasks, when it's also learning emotion. We've fed it heavily.
3 years later and ChatGPT is out
hard to keep up!
Why would they not mention the amazing progress of Tesla's self-driving cars? Seems like they intentionally didn't want to mention Tesla but were happy to mention Google.
Political, I guess - they were working with Google and not Tesla. It was also difficult to get contrarian comments - I guess people didn't want to exclude themselves from future opportunities.
I am going to be the first person, along with a team of chosen experts, to build artificial general intelligence software, by 2030. I promise. It's hard to start, but I'm getting the ball rolling and building momentum. I just hope I can win against myself. Why would I need to study philosophy? Why does a computer need philosophy?
Because there are endless ways to see reality. Here, at its core, even the objective is subjective, because everything must be internalized and reconstructed.
Your comment makes me want to hurry up, because I don't trust you.
@@kartikgawandeonline it’s so hard to start studying
"Why does a computer need philosophy?" is ironically a philosophical question. Anyway, I think the problem is in the definition. You want to create an intelligence capable of giving an answer to this very question. But at the same time, an AI should be able to ask the same questions. Intelligence is about questioning things. If your creation can only give calculated answers, it is a "computer". And yes, a computer does not need philosophy. An intelligence yes.
The strangest thing about this fantastic video, with all of its incisive information, was the "two hundred years" comment. That had to be somewhat of a political "keep the masses calm" statement. We are living in a time of exponential, never-before-seen development of AI science. And then we hear the words... two hundred years?
Yep, it's a bit of a stretch given how rapidly things are changing. 2 years, and things are going to be very different!
@Perfekt Absolutely. Humans have never before in history experienced anything this life-changing at this pace. This phenomenon has even eliminated the state of intuitiveness. We cannot be intuitive about something never before experienced.
It's amazing what the brain has inspired!
I was an early adopter of technology who has slowly shifted philosophically into a Luddite.
I believe this will not end well for us, and I present a sober look at history for evidence.
Past behavior is the best predictor of future action.
Still just 8k views and 33 comments.
How is that possible? Interesting.
Please share!
I am a machine learning student. It is fascinating to see that the field I am studying is doing great.
Machine learning won't take us to AGI; ASI or the singularity is far off.
Just because CPUs and computers can process lots of data at very fast speeds, with the ability to store and access much more information than a human, this does not equate to true intelligence/consciousness as we all intrinsically understand it. We can write all the software we want, but it will never make the huge magical leap to become "sentient/conscious", which is required for true intelligence.
You mean this is 'Artificial" as in AI?
We don't even understand our own consciousness and definitely not well enough for anyone to say that will never happen in a machine. We can't even really define consciousness concretely. Never implies forever and that's a really long time.
AI, mega-projects like the Great Reset of the WEF, WHO, and UN, many startups with AI products and services, and emerging tech and industries are creating more jobs than there are people. Therefore we also need AI, so each human can be more productive, faster, and more efficient, and smart robotics to relieve us of the menial tasks.
How about an AI in a first-person shooter? How far could it go?
I think they trained a team of bots to play pros in Quake. The bots won, even after their reaction times were reduced to human level.
@@marcusrosales3344 I couldn't find the one you mentioned )=
Would you ride a motorcycle amongst autonomous vehicles? Not a chance. Autonomous motorcycles are a non-starter.
The moment a particle is a wave; it has to be a conscious wave! Gravity is the conscious attraction among waves to create the illusion of particles, and our experience-able Universe. Max Planck states "Consciousness is fundamental and matter is derived from Consciousness". Life is the Infinite Consciousness, experiencing the Infinite Possibilities, Infinitely. We are "It", experiencing our infinite possibilities in our finite moment. Our job is to make it interesting!
They found out, too late, that AI needs a blackboard just like the rest of us. The Nvidia chipsets don't persist the interim states. But big tech is already out billions. And everyone's hair is on fire.
UBI or we ALL die.
The idealistic speaker at 16:40 does not realise that AI neural weapons exist and are being used in the degradation of human beings in Stasi-style Zersetzung control. Imagine being in Dante's Inferno where the controller is a Savonarola. It is an abuse of power to allow private corporations to use this weapon on civilians. Who was Erin Valenti, tech CEO, is a question you need to ask yourself. Erin was found dead in her car in October 2019, 5 days after going missing in Silicon Valley. Before she disappeared she had rung her husband and parents to say that she was in a THOUGHT EXPERIMENT. Think GOOD COP/BAD COP without the good cop in control of the FORUM INTERNUM.
My dream is coming true.
Can we please move beyond, "recycling," and "recycled goods," and DO SOMETHING ORIGINALLY BRAND NEW? Come ON, guys. FOR REAL.
I have to set my iPad to mono when I watch your videos. That reduces the (stereo) music levels a bit, but it is still annoyingly loud in my opinion. Otherwise the videos are fine and well edited!
I think people will once again end up forging their own god and worshipping it.
Singularity being achieved? Next phase: quality upgrades, from agriculture to healthcare, for universal citizens.
Let's just ponder the fact that the European Union has put restrictions and regulations in place for the safety of the public, but the US has not, and it's already 2024.
"Neuroscience Inspiring AI" is a bit outdated concept right now.
This was so true back in the old days; basically, it's the same concept as flying birds inspiring how to invent planes.
Currently what we need is a mathematical abstraction of NNs. Unfortunately, ChatGPT's Transformer is still based on NNs, and this situation must be fixed.
Singularity will destroy humanity... It is not life; it is the opposite of life. Our adversary.
@26:20 quantum biology, H302 microtubule electrical connection (quantum consciousness)
I think we do nearly fully understand how artificial neural networks work, because we understand that ReLU is literally a switch, connecting and disconnecting weighted sums to and from each other. And such composites can be simplified by basic linear algebra to just a matrix at any stage where the prior switch states are known.
Anyone who says there is no understanding in 2023 is out of the loop.
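The ReLU-as-switch point can be shown concretely; here's a minimal NumPy sketch (hypothetical random weights, assuming a two-layer net) demonstrating that once the switch states are known, the whole network collapses to a single matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # first weighted-sum layer
W2 = rng.standard_normal((2, 4))   # second weighted-sum layer
x = rng.standard_normal(3)

# Full network: ReLU switches sit between the two weighted sums
h = W1 @ x
y = W2 @ np.maximum(h, 0.0)

# Once the switch states (h > 0) are known, ReLU acts as a 0/1
# diagonal matrix D, and the net collapses to the single matrix W2 @ D @ W1
D = np.diag((h > 0).astype(float))
y_linear = (W2 @ D @ W1) @ x

assert np.allclose(y, y_linear)
```

So within any region of input space where the switch states don't change, the network really is just one linear map, which is the "understanding" the comment is pointing at.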
Watching the first 10 minutes and 6 commercials later, I said FTS! If I continue watching to the end, I'll have watched 30 commercials? No thanks.
Of course, the "singularity" doesn't have to be AI achieving greater-than-human intelligence and rising up to either dominate or demolish us. It is much more likely that, over time, we will have surrendered so many of our skills and behaviors, including our sense of embodiment, as "improvements" to human existence that we will have long since surrendered to AI and become a mush of ourselves. For all the good AI can do, which I believe is enormous, it is still not addressing the main human problem, the source of the majority of our ultimately lethal behaviors: the problem of our character. Our greed, anger, license, justifications for hurting others, and so on. So far I do not foresee AI solving that knotty problem. Bah humbug.
Machines will acquire common sense, consciousness, phenomenological experiences, etc., when they reach the intricacies of our neurons. After that, things will get unpredictable!
And to be clear, robots are not just a danger to human jobs. No, robots will have an on-board A.I. and will also be networked to an A.G.I. So A.I.-driven robots will potentially be a literal existential threat to humanity.
When I retired from IT as a GPM, the field was at least 50% women, but they are almost all gone. Men stole everything, as usual. I hope they are building their own mousetrap, because this goodie-goodie act they are putting on doesn't impress me. I know what they are like when they drop their nice-guy mask, and I know the atrocities they will commit to hang onto those jobs.
This topic is so controversial... most of this AI stuff is based on pioneer science, where humans (scientists and engineers) are trying to create a thinking machine that can function in a human world built mostly on past technological advancement (more often than not the result of competition). We can't forget that competition is often a socio-economic process modeled by economics professionals in order to give advantage to invested parties (nations, private groups, etc.). The part about finding ways to integrate these AI bots into human concepts is where the people in this video may be shortsighted. More likely, if AI and investors (greedy, not ready to test the science of the machine by the "never do more harm than good" philosophy) progress at the current rate of competition, the reality might be that humans will have to find means of adapting to robot society or fail as a species (and not the other way around)...
History is often written by the winners of these competitions, and I fear how history will recall this generation. To go with the recent movie I watched: the villain helped the ship commander rescue his mission, so the ship commander forced his subjects to write him up as though he had never been a villain... which is how that piece of (fictional) history was retold to us - a lie based on human functions to create a future. How many lies do we gauge our intelligence by? (Robots don't sleep, therefore they are intelligent but not wise - or not!!)
Investors are into competition, and scientists (though sometimes of good intention) are at the mercy of the investors who fund the projects that drive the competition...
Ironically, history teaches us that in competition the result is often the merger of competitors (who thrive on human labor - until now - soon on robot labor) and the formation of new parties who often start out with monopoly status in the given domain...
Frankenstein-level stuff, this AI domain... and investors are like the Dracula among us, who in the old days likely sucked the power out of bloodlines that would not cooperate or assimilate into their "à la carte" ruled and bordered (boxed, quantified) piece of society.
Feedback kind of reminds me of that thing we get from places of knowledge, where we ingest information, are tested on it, and receive feedback on how we scored/performed. Eerily similar.
Yes
I don't think it is a very good idea to unleash AI on humanity as a police force, as that would set a bad precedent from both sides... so, slow integration into something like futurist meditation gardens. Grow crystal structures for housing using the cymatic geometry of words like hope, happiness, laughter, light. Imagine living in a crystalline structure grown to resonate at those frequencies; you could probably even cure cancer in such a structure. #loveandlightforallcreaturesgreatandsmall
Since 2011, 2012, 2013 I have been certain that interplanetary communication can go wrong with languages without proper emotions.
So watching this has opened my eyes to the fact that we are a very long way away from the singularity. The Bible says we are fearfully and wonderfully made, and it's right. We are created by God for sure, and this video proves it even more when you consider how incredibly complex we are as human beings. A sentient being will never be created by man, because that requires body, soul, and spirit. We humans have these because we are created in the image of God. Anyone who believes we evolved is quite simply a fool who is lying to themselves and suppressing the obvious truth that we were created by a creator. We were created by God.
You guys...there are literally massive amounts of little baby birds and everything, all around me, right now. This is awesome. God rules.
Life I mean
ChatGPT
If there comes a time, and I suppose there will, that humans have reached their limit in developing AI, then I imagine they'll just let the AI develop the AI, while the humans merely sit back and watch. x
There is no problem with AI, the problem is humans. Just like with nuclear power. It was supposed to be a wonderful thing for mankind. It is a huge problem that we don't know what to do with now.
AI will help us when we input a humanity-scale problem and, in a short period of time, it processes the equivalent of thousands of human-years of data. The output will show us a solution that we would otherwise only reach thousands of years from now.