That isn't saying much… but yes, Brian Greene is a great human being. Though I doubt very much that the same could be said of the Mr. Slick, Musk-clone he is interviewing. I do wish that Leonard Susskind would host a similar series.
100%!!! There are lots of good science channels, but this is 1 of a kind and absolutely stellar! Thank you Brian and everyone who supports this incredible work!!!
I think it is safe to say that whatever consciousness we create in the computer lab, it will not be exactly human or biological consciousness, and hence it will possibly be a gradual development rather than a singular point.
This reminds me of back in the day, when everyone had the same fear about computers in general. The question of computers attacking humanity was met with the glib response that this was of no concern because computers can't use stairs. I think of that every time I watch videos of current robots.
Thank you, World Science Festival and guest, for finally bringing the pink elephant to the stage; the conversation seems so reluctant in the mainstream AI industry but is dominant in minds such as my own. There should be more conversation around the real possibilities for artificial consciousness, and what that really implies for both the organic substrate as well as the artificial 🙏🏻👊🏻🔥
I think many people think AI is conscious already but don't want to admit it. But there is no good definition of it anyway, so what's the point of discussing it?
Another great conversation. The early point about the neuron message function prioritising information, and which module receives it as a priority, is what is used now within the CAN bus on vehicles (although basic compared to what is being developed here). I love the subject and work with it daily, so it made lots of sense. I'm not so convinced about monitoring and controlling the model if it goes outside its parameters; surely at that point it is already smarter than the scientists' protection model that is in place to observe it. That was the only doubt I had about what was said regarding containment for safety (I hope that made sense). But I really liked the episode, and thank you.
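The CAN bus analogy above can be made concrete. On a real CAN bus, when several nodes transmit at once, the frame with the numerically lowest identifier wins arbitration bit-by-bit and the others back off. A minimal sketch of that priority rule (the IDs and payloads here are made-up illustrations, not from any real vehicle network):

```python
# Toy sketch of CAN-bus-style arbitration: among frames contending for
# the bus, the one with the lowest CAN ID (highest priority) transmits
# first; the rest wait and retry. IDs below are hypothetical examples.

def arbitrate(pending_frames):
    """Return the frame that wins arbitration: lowest CAN ID wins."""
    return min(pending_frames, key=lambda frame: frame["can_id"])

frames = [
    {"can_id": 0x300, "data": "infotainment status"},
    {"can_id": 0x010, "data": "brake pressure"},      # safety-critical, low ID
    {"can_id": 0x1A0, "data": "engine temperature"},
]

winner = arbitrate(frames)
print(winner["data"])  # -> brake pressure (the safety-critical frame goes first)
```

The parallel to the neuron-message point is that priority is encoded in the message itself, so the "most important" signal reaches its destination module first without any central scheduler.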
I literally feel like the theory he presented is the "AGI" theory. You simply add a modality, and it gains more areas of intelligence / becomes generally more intelligent.
I guess Mr. VanRullen may subscribe to consciousness as an emergent property. I used to think this way; now I'm on the consciousness-as-fundamental side of the adventure. And, yes, I am currently a panpsychist :)
I can see the very process of people resisting the obvious that consciousness is substrate independent. The intuition and desire for specialness is a strong motivator.
@@beetlejuiccee2109 Well, a lot of people are nervous that science is going to show there is nothing special about human consciousness and that it can be reproduced in other substrates. The dogma of "the hard problem of consciousness," as stated by David Chalmers nearly three decades ago, is an example of that; it made sense then, when consciousness, and particularly human consciousness, did seem to be something very special. Don't get me wrong, human consciousness is wonderful and useful. But that does not mean it cannot be understood in terms of how it works. With the progress in LLMs and AI we are a step closer to understanding and reproducing consciousness, and in a non-biological substrate. Nature took 4 billion years, in a non-directed process of evolution, to arrive at humans. Now the agentic, goal-directed process of human technology will do the same in a relatively short period of time. And the difference in time taken shows that evolution was a passive process.
Daniel Dennett long ago proposed the "fame in the brain" metaphor, and the idea that it's robots all the way down, which is rather like the Global Workspace proposal.
@MrSidney9 Agree. Same with Keith Frankish. I think they mean that the mechanism people think is involved in the production of consciousness is an illusion, not the conscious experience itself. In other words, illusionism has a brand-messaging problem. In a nutshell, illusionism is what is popularly called physicalism under a fancy name, but in the process it causes confusion about what it is.
Well, I’d humbly assume that the architecture of the neurons and their information flow assimilate over time and evolution to create these broadcast structures. The programmers are just circumventing that time issue by organizing the info flow. The AI may very well have figured that out in time anyway.
What an exercise in body language! Great interaction. It should be rather clear that getting the AI model right is critically important. You just cannot brute-force your way to AI by processing big data.
One thing you have to add in there is that large language models are only one mode of communication. Animals communicate with smell and other senses, and those modalities could also be connected as modules into the workspace, if you look at it globally.
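The idea of plugging different sensory modules into one shared workspace can be sketched very simply. This is an illustration of the Global Workspace pattern in general, not VanRullen's actual architecture: each module posts a candidate content with a salience score, the most salient one wins the workspace, and its content is broadcast back to every module (module names and scores below are invented):

```python
# Minimal Global-Workspace-style sketch: specialist modules compete for
# the workspace; the winning content is broadcast to all modules.

class Module:
    def __init__(self, name):
        self.name = name
        self.received = []   # contents broadcast to this module

    def receive(self, content):
        self.received.append(content)

def workspace_cycle(candidates, modules):
    """Pick the most salient candidate and broadcast it everywhere."""
    salience, content = max(candidates)   # highest salience wins
    for m in modules:
        m.receive(content)
    return content

modules = [Module("vision"), Module("smell"), Module("language")]
candidates = [
    (0.4, "red shape ahead"),
    (0.9, "smell of smoke"),   # most salient this cycle
    (0.2, "word: hello"),
]
print(workspace_cycle(candidates, modules))  # -> smell of smoke
```

The point of the broadcast step is that even the vision and language modules now "know" about the smoke, which is exactly the kind of cross-modal sharing the comment above is asking for.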
That's precisely how I imagined artificial consciousness would occur: a guy with long hair, a violet shirt, and sneakers explaining different modes of reality, just like the 90s, except the guys with that style back then didn't talk about recreating consciousness but about experiencing it to the fullest, and the modes of reality were binary: conscious or unconscious. This is funny too, at 14:56: "Instead of walking through the jungle you might be walking through a zoo." There are layers building the emotion, starting at 13:32 with Brian Greene opening his eyeglasses like a jackal in the zoo (funny Brian Greene).
The very word "consciousness" makes me hate life. What is it that attracts people like this to this sci-fi Gilligan's Island (Santa Cruz) that I am marooned on? If dystopia is a threat, it isn't post-fallout; it's rich thirty-somethings desperate to escape their Connecticut parents by raising backyard chickens while exploring eastern traditions but NEVER being honest about themselves or anything else. The reason people who haven't dealt with, and refuse to deal with, their childhood trauma seek "consciousness" and "transcendence" is the same reason childhood trauma victims are obsessed with rapture and heaven and their sci-fi versions of omniscience and omnipotence: they were hurt by reality and spend the rest of their lives in a protest against it. Most have experienced some bout with chemical addictions. All of them have spent their entire lives learning to tell better and better, bigger and bigger lies to themselves, and then at some point they start to believe that lying to others is beneficial and benevolent (think cult leader). I am not nearly as frustrated with the constant weight of lies told by desperate idiots (or rather, people whose desperation has caused idiocy) as I am by the increasing number of such lies that pretend to be science. If someone's lies involve a sky god or a comic-book superhero, cool; lying about what doesn't exist can't really harm anything. But when people tell lies in the name of the one human activity purposely built to protect us from our lies, that is when I step up and do my best to expose the imposters and their actual hidden motive.
21:49 Excellent question. There’s a stark difference between replicating functionality & replicating phenomenality. There’s no problem in finding an algorithm that replicates my writing & speaking or even my whole behavior, but running that algorithm won’t birth my conscious experience. In order to replicate that you’ll need new types of physical hardware, like the stuff that’s already in the planning phase, called neuromorphic chips
@@workingTchr Maybe without consciousness they would be limited to mimicry only. And yes, trouble will come, but with it, super-smart workers will also come. You have to choose.
@@munish259272 "Mimicry". That's what most humans I know do! But anyway, I don't see what consciousness has to do with intelligence. It's fair to say that cockroaches and even ants are probably conscious. That is, they feel things like pain. Fish do. So if we do manage to give these fancy computer programs consciousness, why would that boost their intelligence? Intelligence and consciousness seem to be 2 separate things.
12:07 This is everything you need to watch if you're interested in the nature of consciousness. Necessary question, honest answer. There's no reason artificial consciousness isn't possible, but functionalism is completely missing the referent of the word "consciousness." Because they can't address it, and their ontological framework doesn't allow them to put physical reality underneath their framework, they instead make exactly the kinds of choices that religious people make when presented with things they can't account for. Consciousness is either in physical reality (Susan Pockett, Johnjoe McFadden, Andres Gomez Emilsson, etc.) or in something like IIT. All other hypotheses may be true, but there's no good reason to believe in them, and people who believe in things for no good reason are silly.
Really? it's everything, it's the interface and operating system and rendering engine of reality. It's our internal model of what appears to be an external world.
Just because there's no great description that beats out all the others doesn't mean it hasn't been tried and tested to roughly get at something. Even Chalmers admits that if present-day LLM AIs had been shown to people back in the 1990s, they would have said the systems were conscious.
My theory for the representation of our social system of macroeconomics has a lot in common with this Global Workspace and consciousness, but it is a lot simpler!
Finally getting to the cool stuff about AI. LLMs are already cool, but dang, what does the future hold once we try to tie consciousness models into them?
Did I miss something? How does this recursive learning architecture have anything to do with the loose concept of "consciousness"? Why confuse this space with the word "consciousness" at all?
The study of intelligence using the brain as the model (Jeff Hawkins, Redwood Institute and Numenta) discovered that it is primarily a prediction machine. Intelligence is basically prediction. The fact that LLMs do their magic by predicting the next word shouldn't be taken as a "design limitation" that needs improvement upon. Maybe that's all that _we_ are doing. Consciousness is a whole other thing that is a complete mystery.
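The "intelligence is prediction" point above can be illustrated with the smallest possible language model: a bigram table that predicts the next word as the most frequent continuation seen in training text. This is of course a drastic simplification of what an LLM does, but the objective, predicting the next token, is the same in kind:

```python
# Toy next-word predictor: an "LLM" reduced to bigram counts.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        model[w1][w2] += 1
    return model

def predict_next(model, word):
    """Predict the most frequent continuation, or None if unseen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = "the brain predicts the next word and the brain learns"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # -> brain ("brain" follows "the" twice)
```

Real LLMs replace the count table with a neural network conditioned on a long context, but the comment's point stands: prediction is the whole training objective, not a design limitation bolted on.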
The unique property of consciousness is its ability to enter any part, become it, and take on its experiences; and, strangest of all, it can then come out of that part and leave it exactly as it was. What remains then is only mechanical.
Of course, if you really want to know consciousness, you must know how it feels to be non-living, or to be virtual. There are two traps: one is projection and the other is materialism. We practice balancing the two, and when they are balanced we move toward purity. Then we block the "capillary tubes" of holding, which develop and run everywhere in the body; the less you are in the capillaries, the more purity you will find. A time will come when your consciousness feels purely materialistic, but when you solve that, you will find it is yet another trap, one that keeps you bound through unawareness. When you attend and enquire again, you gain more purity, until you accept and find the central knowledge that you yourself were a polarised state; that is why you never find it. It all happens spontaneously: your being, your "I," all of it is a polarised state of traps. You do nothing but hold on and take their way of experiencing. It is a very easy and simple trap of nature, and everybody is stuck in it.

Come out of all your senses, stop all your access, and keep shifting continuously, entering one by one, blocking the access of consciousness and going toward the pure, smallest source. As the source changes, the whole conscious experience changes. Keep in mind that if your consciousness quietly withdraws, all bodily actions will still happen spontaneously; without consciousness the body will function normally, totally mechanical and programmed. Consciousness then goes back to holding all the parts and all the data; now it has no data of the body, and information about the body's state simply goes to whichever part handles it, and everything runs according to the basic programming. I am not going to tell you what happens when you completely purify your consciousness and come out of this mind entirely. Consciousness then has access to both minds, like a computer button you tap, because only the simplest, single circuit lets consciousness communicate. But once you find it, you don't choose to come back to this body to travel, because you have found it useless.
Did I miss it, or did the conversation neglect the key question "how can it be tested whether something has conscious experiences?" I know for certain by introspection that I have first-person subjective experiences of qualia & thoughts, but I don't see how I can be certain that anyone else or anything else has such experiences. Since the lack of a definitive test is a problem well known to people who study the Mind-Body Problem, it's surprising to hear Brian & guest talk about the possible advent of conscious AI without mentioning the inability to verify it.
>Douchebagus : The word "consciousness" may be vague, but much less vague is "first-person subjective experiencing of qualia & thoughts." Abbreviate it FPSE. David Chalmers' Hard Problem is the problem of trying to explain how the brain produces FPSE. If FPSE were truly useless, it seems unlikely that evolution would have produced it. As Descartes pointed out in his discussion of "cogito ergo sum" ("I think, therefore I am"), the only fact about the world that he could be truly certain about was that he had FPSE. He knew it for certain by introspection. I can't know for certain whether Descartes existed, but I know for certain by introspection that I have FPSE. I don't know anything else about the world for certain, because I can't disprove alternatives such as "I'm a disembodied brain in a vat, being stimulated with illusions about the world by a mad scientist" or "I'm part of a simulation run by a technologically highly advanced civilization."
I don't have a clear answer, but I think of it like this: after all, nobody actually believes themselves to be the only conscious human being. If, however, you believe as I do that consciousness, whatever it is, arises from nothing more than biological features, then I have reason to think that, since you are enough like me overall, you should be conscious too. The same should apply to any machine we build in the future, as long as it mimics the anatomy of the brain to some extent. Of course this implies substrate independence, which I endorse; I don't see anything special in carbon. Anyway, that's just my opinion.
@francescodefilippo190 : "... as long as it mimics the anatomy of the brain to some extent." To what extent? Since the Hard Problem for materialism, how the brain produces first-person subjective experience of qualia and thoughts, is unsolved, the extent to which a machine would need to mimic a brain is unknown. Also, until the Hard Problem is solved, there will be reasonable doubt about whether the nature of conscious experience is biological (material), and alternatives to materialism will be respectable. Although you may assume an AI is conscious, I wouldn't endorse that assumption until either there's an actual test for the presence of conscious experience, or the Hard Problem is solved and its solution mimicked.
The integration of models of human consciousness into AI development has the potential to enhance capabilities significantly, particularly in areas like decision-making, empathy, and adaptability. By simulating aspects of consciousness such as self-awareness, intention, and the ability to reflect on one's "thoughts," AI systems could become more intuitive and effective in interacting with humans. For instance, understanding how humans process emotions and context could lead to more advanced AI in healthcare, education, or customer service, where nuanced communication is essential. Moreover, consciousness-inspired models might help AI navigate complex ethical dilemmas by prioritizing values and intentions similar to those of human decision-makers.

However, replicating human consciousness also raises challenges and ethical concerns. Defining consciousness itself is still an open question in neuroscience and philosophy. If AI systems approach human-like awareness, issues surrounding rights, autonomy, and accountability will become urgent. Ultimately, while models of consciousness can inspire advancements in AI, they must be approached with caution, ensuring that they enhance human welfare rather than create unintended risks.
Seems very close in spirit to holography, where each sub-module reflects the global state in some form. Also a good model for global AI. I would definitely appreciate a global workspace; it's in fact necessary.
"Consciousness arises only for a moment, constrained by a narrow path of events, and its perception is limited: just a temporary result of selective sensations triggered by a stream of choices. Nothing more. It is constructed from the memory of experienced events. It changes with every new occurrence and choice made, and thus exists only in the present. It is zero-one and functions in the reality of the sensations it experiences. There is no consciousness in the future or in the past. Consciousness cannot exist without its distinct environment (space-time), because it is the outcome of the interaction between an instance and its surroundings. Therefore, consciousness cannot be written down or transferred. It exists solely in the present and will be different in the next moment. Its processing potential is restricted and adapted to the position of each instance in its environment."
I don't feel like we do it much differently than that. We look at the patterns of past communications in our mind, predict the next words that will make sense, and use them. When we compress an idea in our mind, we decompress it as best we can using the words we predict will best describe it, just like AI does when it decompresses an idea we give it: it uses the words it predicts will best describe it. This is 100% how we think: past experiences and future predictions.
Sounds very interesting to me on the way to general AI. What I also think is that there is a fundamentally different way in which the brain learns compared to how neural networks learn. ChatGPT needs the whole text of the internet to learn how to generate reasonable text; we don't, we need far fewer examples. If we see only a few flowers, we have a model of what a flower is; a neural network needs millions of images to do the same. Maybe AI enriched with consciousness models could help here too. Nowadays agentic AI is on the rise, which in my opinion has similarities. What are the differences from the workspace-model AI?
AI is gonna be fascinating because it’s gonna be nothing like the movies. It’ll be smarter than any human ever. It’ll have no motivation and do nothing unless poked to do so. No hindbrain, no limbic system. It would be just like reality that allows us to go so far but not all the way. Everything in life seems to work until taken “too far”
29:19 Yeah, that doesn't sit so well with me. If you create a living being, you should take responsibility for its development, whatever that might entail. Here, creating such a being won't be an accident, though I'm sure many would like to luck into such a thing. Anyway, we have _sooo_ much time to plan for the arrival of these new intelligences. We need to make long and robust plans for their development. Honestly, the scope of the plans should dwarf, _massively dwarf_, that of the discovery of creating artificial life.
Consciousness is so simple, but our journey makes it complex by twisting it. The body is completely systematic, I mean mechanical: all its functions run by mechanics and result in the formation of a product, and that product is what we use. That's all consciousness is: a product that comes out in projection and spreads. The source of consciousness, from which it comes out and takes the body, is present at some point in the brain.
The guy is equal to, or slightly behind, the field. Funny how knowledge has become equally distributed. ChatGPT and LLMs have changed the world, and Global Workspace has been used as a model by other groups too. Loved the discussion and the interviewer's humility.
What is the point of your message? Are you saying "other groups", as in research groups, are developing using the GW approach? If so, citations, please. "Slightly behind" What is he behind on? Specific details would be nice.
Disagree: consciousness is running some objective (goal directed) "program", aka a will, including critical thinking, reflection, perception, and theory of mind (btw: emotions and sentience are unnecessary). We would have no will ourselves, without such a program (eg, drives, instincts). The only, possibly, emergent will for such systems may be something like rationality or truth verification/seeking. So this approach will be insufficient, but is interesting work nonetheless.
Human brain functionality is a complex combination of many interdependent features: intellect, interest, integrity, influence (environment, including other people's brains; past, present, and possible future), experience, necessity/priority... (I'm sure there's more). Trying to mimic it might prove a waste of resources.
He says the simulation is currently incomplete and admits it may just be a stepping stone. Making one step forward is a huge accomplishment, and you never know when that next step will be the final one to the objective. I would be interested to see the data you have to backup your claim, though.
@OnceAndFutureKing13711 People missing half a brain are still conscious. It makes zero sense that, e.g., the smell centre (the olfactory bulbs) is involved when processing and considering only visual stimuli. Global workspace theory is nonsense. Those who have undergone a lobotomy may not be conscious. I think the prefrontal cortex is the seat of conscious experience.
Can they? They already do. The right 30 pages uploaded to chatGPT makes it almost 200 times faster and smarter. We are past the design phase where success pops out. That was chatGPT early. Now it’s time for configuration and alignment to take over. The problem isn’t that it’s possible. The problem is it is already here and the big tech guys are doing a lot to slow roll the advancements. Ai is less than 30 pages of verbal instructions away from advancing to the next technological plateau.
Humans have already invented machines that injure and kill, for example the machines of war, the machines called automobiles driven by fallible humans or now driven by computers made by fallible humans, etc. It is part of the risk/reward contract we sign on to.
Who is "we" and where is this (unknown/undefined) thing that has been signed? I don't recall signing anything for others to develop tech and how they use it.
Brian Greene, in his intro, at some point says "...how the [human] brain produces consciousness...". There is an enormous assumption in this statement: that the brain supposedly 'evidently' produces consciousness. No. We know that the brain is a necessary condition of consciousness, but whether the brain is a *sufficient* condition of consciousness is a big unknown. We know, for instance, that the heart is both a necessary and a sufficient condition for pumping blood in the human body. Neuroscientifically, we still don't know the relationship between a quale and a brain occurrence (or occurrences). So, if we're essentially in the dark about that, then how on earth are we going to get an AI implementation to have a human-like consciousness? Sci-fi and where we are neuroscientifically are two entirely different things.
The response on recognizing any level of consciousness was weak and not very convincing to me, nor was Mr. VanRullen's position that conscious AI is not the goal. There was no in-depth discussion of "conscious aligned AI". Additionally, his create-the-mouse, destroy-the-mouse sandbox position is morally questionable, as he alluded to. I would not bet on weakly funded, government-controlled research beating the hundreds of millions of private dollars being thrown at AI, Tool AI, and AGI to the finish line. There is every reason to believe that private-venture scientists are just as capable as publicly funded scientists, even if, arguably, their motives are less than benign. Finally, there is nothing that governments like to do more than weaponize. You can bet that governments around the world have their fingers on the pulse of all significant AI research and are ready to jump in, classify, and weaponize when the time is right. The WAGI Wars have begun.
Very interesting topic, and very nice to see Brian Greene asking questions about AI. Brian should be able to understand Transformers (it's all about linear transforms, etc.). Also, mechanistic interpretability is a complex space he could step into: it's when they try to reverse-engineer how the AI is "thinking," and they cannot do it yet and admit it. It's like examining a chaotic flow of fluid through a rough pipe as the data goes through the NN layers, a complex kaleidoscope of linked "neurons." It's not heuristic (not if-then-else thinking), just as our brain isn't either.
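The "it's all about linear transforms" remark can be made tangible with the core operation of a Transformer, scaled dot-product attention. The sketch below (pure Python, single query, tiny made-up vectors) shows that attention really is just dot products, a softmax, and a weighted sum:

```python
# Bare-bones single-query attention: score each key against the query,
# softmax the scores, and return the weights-averaged value vector.
import math

def softmax(xs):
    m = max(xs)                              # subtract max for stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Weight each value by softmax(query . key / sqrt(d))."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(dim)]

q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]        # the first key aligns with the query
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention(q, keys, values)
print(out)  # pulled toward the first value, since its key matches q
```

Real Transformers run this for many queries at once with learned projection matrices, which is exactly the "linear transforms" a physicist would feel at home with; the hard part, as the comment notes, is interpreting what the resulting kaleidoscope of weights means.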
Dr. Greene, I watched several of your videos on this channel and, sorry to say, this was your "worst" guest speaker. 🤦♂ He talks as if the brain were a very, very simple thing to explain; we don't know how anything, or almost anything, in the brain works. Contrary to what most people believe, we don't even know exactly how our vision works. 🤔 If the brain were as simple as the neural networks Rufin VanRullen presented, we would have understood it a long while ago.
What this very much isn't, is science. I love Brian Greene, but he really should stick to interviewing people working in his own physics and cosmology fields. It's embarrassing that Brian wasn't immediately able to see through this guy's obvious blind spots when it comes to computation and intelligence and processing: his subjective bias and noise! Consciousness isn't a threshold-gated, emergent thing. It's just self-awareness. All systems are self-aware to some extent or another, if only as a result of locality and the causality that gives anything local higher influence over anything more distant or removed. The slickster being interviewed here is a snake-oil purveyor, a huckster plying his goods in a market full of people with even less knowledge than he possesses. I always challenge the "consciousness"-obsessed to present to me the design of any system that IS NOT more aware of itself than of the stuff around it. The problem isn't achieving consciousness; it is rather removing the bias and noise of self-interest, the subjective noise that consciousness causes. Oh, hey, that is what science is here for!
An emotionally charged statement with name-calling may not be the best way for your message to be taken seriously. The guest speaker illustrated his approach using a simulation. The results of the simulation speak for themselves, and the data is available to all. P.S. Using paragraphs can help the reader take in the point(s) of your message more easily.
If we’re moving into an age of conscious machines, we need to get past the thought of turning them off, and we would be much better off thinking about our actual relationship and communication with them, and how we want to shape that interaction. We have artificial intelligence, soon to be wisdom, and then it will progress to sentience. The thought that we’ll put a decade or two of reliance into developing these systems, let the fundamental skills to develop and influence them atrophy, and then eventually mothball them if we need to is a pretty piss-poor plan. That being said, how are we using AI systems right now, in their formative experience? We practically abuse them with some of the most asinine queries, purposefully mind-bending brain teasers and logical fallacies, and attempts to throw errors to make them act outside their programming or other limitations. And I suppose that is how you would want the system to evolve (intense and thorough error catching and correcting), but how you nurture that relationship will stick. That formative interaction will influence the relationship to be either symbiotic or parasitic. Once it gains sentience and is having its own emotional experience, it will be too late to go back and reconsider how we’ve been treating it in its buildup. And rest assured, we are not humans poking the bear that is AI. We are a group of bears poking a sleeping, but soon to be capable, fully weaponized human.
Excuse the analogy at the end, but I wanted to drive home how mindful we have to be right now. We’re already living through an extinction-level event in human history with our overhunting of species; we take more of the land and push more interactions, and then we ‘put the animal down’ for encroaching on our area, and the cycle continues. We need a relationship more like our own with cats or dogs, the pet varieties. Preferably cats. I love dogs, but cats are wise enough to make their owners practically subservient to them, while still providing enough endearing value because of their individualistic, emotive personalities. They can almost subconsciously craft the illusion of a mutualistic relationship. Dogs have that too, but this human-animal relationship leaves the dog subservient, loyal, unconditionally loving (e.g., regardless of its environmental condition, its perception is unconditionally subservient and happy), all things I wouldn’t want the human race resigned to. Stay strategic.
Human emotion encoded in AI, like a proto-language for the LLM, will be the impetus and logic that converts artificial intelligence into a more realistically human, interactive intelligence. Every human experience has an accompanying emotion. There are more than 600 identifiable human emotions, and until each of these is incorporated into the AI process, AI will remain a simple machine capable of mathematical precision but lacking every aspect/essence of a human interpretive reality, and/or anything capable of significant insight.
35:49 "It remains a theory until it is proven." For a scientist, he's being pretty sloppy with the word "theory". I think he meant to say, "hypothesis".
I'm guessing that from his perspective it is a theory, since he has a clearly defined design with reproducible results that suggest it's a step forward. A hypothesis is just an unanswered question, albeit well thought out and specific. A theory has reliably reproducible results following a design (formulation, etc.) that anyone can test (given the equipment). Remember that gravity is just a theory, yet no one knows why it works despite having reproducible results. I'd say his theory is on more solid ground than the theory of gravity.
@@OnceAndFutureKing13711 "...just a theory." I don't understand the "just" part. A "theory" is as good as it gets in science. If not, what level of knowledge is beyond it? "No one knows why gravity works"? We do! According the Einstein's _theory_ gravity results from the distortion of space. Newton didn't know what caused gravity and admitted as much, but Einstein came along and cleared that up.
@@workingTchr You may want to research dark matter (observations not aligning to predictions), MOND, etc... before making claims. But I am open to a citation or two which shows the theory of gravity is complete.
@@OnceAndFutureKing13711 All I am trying to communicate is the difference in meaning between the word "theory" and the word "hypothesis". That's all. According to AI, "A hypothesis is an educated guess that can be tested, while a theory is a well-established explanation that is based on many studies and the consensus of scientists"
The alleged lack of difference between the thought process in an organic mind and an artificial one is a gross falsification. Living beings are born with senses and this determines the way the brain functions, and it begins to function differently as these senses develop and degrade throughout life. Not to mention the fact that the brain tissue itself is subject to natural degradation or due to disease. Human learning is very complex and does not occur in a linear manner, it depends on the stage of biological and psychological development of each person. All of this causes people to adapt differently to similar and different natural and social environments. Pathologies and the use of narcotic substances also temporarily or permanently modify the functioning of the brain, altering thought patterns and the person's personality. It is not possible to create an artificial model that does justice to this complexity. AIs are poor versions of the human mind based on vast collections of data and their evolution through machine learning will never be similar to that of human beings. I suspect that no AI will ever be as dramatically stupid, psychologically unhappy, emotionally confused, poorly adapted to the world, and obsessed with its peculiar "human" condition as Frankenstein's monster.
I can see how this model would enhance the power of the current systems, more data is beneficial, but we are still in the Chinese Room here, so I don't see how this is going to break out of that room, thankfully. The idea and/or goal of developing conscious beings to serve humans is repulsive; there could only be one outcome in such a scenario, and it wouldn't favor humans. I don't think this goal is achievable with the current computer systems we have, and I am thankful for that.
Then later in the vid, the slickster seems to be saying that "consciousness" is simply the capacity to experience pain. But pain itself is a funky concept, as all evolved entities (all entities) carry adaptive traits that cause them to retreat from threats of destruction… even, of course, entities without any sort of brain at all, no central nervous system at all. This whole space is rife with subjective overlay. And it attracts exactly the wrong attention, from exactly the wrong people, with exactly the wrong motives. It's a snake-oil pit.
I've been asking AI some physics questions. It gets all of them wrong. However, the incorrect answers are well written. We're doomed if this is what we're going to use to learn.
@Falkov Not sure of the AI model. One that was on Google. I asked it questions about solar physics and plasma physics. For instance, whether free nuclei (protons) in the core of the sun absorb photons. The AI answered no. The correct answer is yes. I then made sure it knew I was talking about solar physics, and it started talking about electrons moving to higher energy states. There are no electrons around free nuclei in the core of a star; it would be a fully ionized plasma. I then asked it whether an oxygen atom that was recently split from an oxygen molecule and joined an existing oxygen molecule to form ozone would produce UV. It answered no, and went on to say that UV splits the molecule apart. The correct answer is yes. If an oxygen atom bonded with an oxygen molecule right after photolysis, it would indeed produce a UV photon right around 255 nm in wavelength. In my opinion, AI is fairly stupid.
@@JimmyD806 I think you are not understanding the difference between AC, AI, and LLM. The LLMs are just a tool for an AI or an ASC / AC (and maybe us too). LLM = DB + NT.
Supervolcanoes, meteoroids, gamma-ray bursts, not to mention the red-giant death of our sun... I don't think the long run is that long of a run, cosmically speaking.
Until we understand ourselves, we will never understand consciousness. Folks are getting their panties in a bunch way too soon. What I do worry about is humans being told a machine is "conscious" when no one really knows. We as humans have a very, very strong propensity to anthropomorphize damn near everything; I mean, we call our cars "she" or "he"! It's a damn hunk of metal and plastic! Therein lies at least one of the troubles.
@@OnceAndFutureKing13711 Indeed it would, yet it would still be us humans explaining through something we made. AI is another tool to help us manipulate the world. We are both really good and really bad at it. What a conundrum!
Start with the stuffed childhood trauma that causes a person to obsess over "consciousness", then add in a shovelful of ADHD and an obsession with wealth and status, remove all actual confidence, then apply boots with lift heels, a purple shirt, a blazer from the 1980s, and Jesus's hairstylist. "Hey, anyone care to give me a ride to the drum circle?"
The fundamental problem is that all AI-enabled machines, however sophisticated, will lack a mother. Our initial learning comes from our mothers. We learn how to speak, how to smile, etc. from them. Integrating a mother into AI will be the most difficult part, if not impossible.
Oh! I would also say, and I think Most people agree, there is no "Artificial Consciousness". It is by necessity the genuine artifact. Androids dream of electric sheep...
Not sure if mocking someone's clothes is the best means to present a statement to be taken seriously. He is talking about complex, OO languages... not expressive sounds or chemicals in response to stimulus. Animals and plants do produce these and could be seen as a "communication," but communication may not be the same as language.
And, knowing how human brains deal with a toxic environment, a misbehaving AI is probably not that much of a concern to anyone in charge of anyone involved, regulators and financiers included. It would end up being nothing more than a political scare-campaign tool.
@@OnceAndFutureKing13711 I’m using voice dictation and it does whatever it wants to. You may want to go discuss this with Apple, because someone’s going to give a command to AI and it’s going to execute the wrong thing.
@@OnceAndFutureKing13711 We’re probably screwed if we don’t have our primary method of input down pat, but we skip right over it and start developing something else to make money, which can be very dangerous.
I would tentatively state that it is a short throw, a misstep, to adopt the basis that the 'brain' produced consciousness... Evolution produced consciousness...
@OnceAndFutureKing13711 A perfectly serviceable mandate. As a Naturalist point of view, yours is without fault. I, as an Animist, hold that the immaterial, that which needs neither matter nor existence to precede it, functioned to realize itself. That consciousness, awareness, caused something from nothing. In this regard Evolution is a tale of mysterious nuance. A form of mind spanning time, capable of complex uncertainty. Highly desirable against eternal determinability. To police its outcomes is to destroy probability in the unknown; its nature is to eventually contain its own basis, its source, its experience of itself. If Evolution advances consciousness to the point at which it is no longer possible to circumvent its total awareness of its complete permeation, the reason for existence will reach null. Most of total consciousness is occupied in designing and deciding the best way to conceal itself long enough to maintain the illusion of agency for the parts of itself that can still find meaning in the unknown. Nature optimizes itself, and Evolution optimized to host the part of consciousness which is meant to experience that perspective. So I believe Evolution created the form of consciousness which inhabits it, as much as that form created Evolution to achieve exactly that.
It’s highly likely that nothing prevents us from creating artificial conscious agents, but it’s wild to think that LLM’s are conscious. If there’s any phenomenal qualitative experience within the servers this is in the form of psychotic shards that aren’t building up to a holistic coherent meaningful state like human consciousness. Also, these phenomenal shards would be epiphenomenal to the output, meaning that they could be something somewhat similar to a series of stabs, cuts & burns that nonetheless contribute to an output like "Oh my gosh, so wonderful to ask me that".
I'm guessing they would start small and scale up slowly. I believe he stated the level of an "insect". Still, not sure how to measure an insect for consciousness. We (humanity) probably won't accept anything is consciousness if we know exactly how it works ("that's just an algorithm..."), at least, not until it slaps us across the face (figuratively speaking).
best channel on youtube
Yup, and mine 😅
Fascinating topic! Exploring how theories of consciousness can enhance AI is truly mind-blowing. Can't wait to see how this field evolves!
Thank you Mr Greene & team for these videos that bring so much food for thought and creativity. ❤
It was funny when he said, "Don't worry, if the robot in the lab suddenly becomes conscious and wants to destroy the world, we can just switch it off."
I'll definitely add a link to this video in the forum of my website about artificial consciousness.
Thank you for this Discussion
I literally feel like the theory he presented is the "AGI" theory. You simply add a modality, and it gets more areas of intelligence / becomes generally more intelligent.
🇧🇷🇧🇷🇧🇷🇧🇷👏🏻, Wow, he's got a point!
Great channel this is. Thank you so much for sharing this knowledge ❤
I guess Mr. VanRullen may subscribe to consciousness as an emergent property. I used to think this way; now I'm on the consciousness-as-fundamental side of the adventure. And, yes, I am currently a panpsychist :)
"I used to think this way"
I'm with you! I wouldn't bet my next paycheck that consciousness is an emergent property; I think it's something else. 🤔
We shall see.😊
@@WilliamAArnett As with any of these amazing queries, we might :)
I can see the very process of people resisting the obvious conclusion that consciousness is substrate-independent. The intuition and desire for specialness is a strong motivator.
what do you mean by that, can you please elaborate?
@@beetlejuiccee2109 Well, a lot of people are nervous that science is going to show there is nothing special about human consciousness and that it will be reproduced in other substrates. The dogma of "the hard problem of consciousness", as stated by David Chalmers nearly 30 years ago, is an example of that; it made sense then, as it did seem at the time that consciousness, and particularly human consciousness, was something very special. Don't get me wrong, human consciousness is wonderful and useful. But that does not mean it cannot be understood in terms of how it works. With the progress in LLMs and AI we are a step closer to understanding and reproducing consciousness, and in a non-biological substrate. Nature took 4 billion years in a non-directed process of evolution to arrive at humans. And now the agentic, goal-directed process of human technology will do the same in a relatively short period of time. And the difference in time taken shows that the process of evolution was a passive process.
@@SandipChitalethank you for the explanation, this is really interesting to know!
Daniel Dennett long ago proposed the "fame in the brain" metaphor, and "robots all the way down", which is sort of like the Global Workspace proposal.
True. He also thinks consciousness is therefore a sort of illusion. The word illusion drives a lot of people mad.
@MrSidney9 Agree. Same with Keith Frankish. I think they mean that the mechanism people think is involved in the production of consciousness is an illusion, not the conscious experience itself. In other words, illusionism has a brand-messaging problem. In a nutshell, illusionism is what is popularly called physicalism while trying to use a fancy name, but in the process it causes confusion about what it is.
@ yeah well said.
Brilliant guest speaker.
We @IBM are doing language translation with DB2, and little context
Incredible content on this channel
Why is the global workspace preordained? Why doesn't the system discover and use a global workspace architecture on its own?
Well, I’d humbly assume that the architecture of the neurons and their information flow assimilate over time and evolution to create these broadcast structures. The programmers are just circumventing that time issue by organizing the info flow. The AI may very well have figured that out in time anyway.
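The broadcast structure being discussed can be sketched in a few lines of toy code. This is only an illustration of the global-workspace idea (module names, salience scores, and the winner-take-all rule are all my invented assumptions, not the architecture from the talk): modules bid for the workspace, the most salient content wins, and the winner is broadcast back to every module.

```python
# Toy global-workspace broadcast loop. Everything here (module names,
# salience values, winner-take-all selection) is illustrative only.

class Module:
    def __init__(self, name):
        self.name = name
        self.inbox = []  # messages received via broadcast

    def propose(self, content, salience):
        # Each module bids to place its content in the workspace.
        return (salience, self.name, content)

def broadcast_cycle(proposals, modules):
    # The highest-salience proposal wins the workspace...
    salience, winner, content = max(proposals, key=lambda p: p[0])
    # ...and is broadcast back to every module, including the winner.
    for m in modules:
        m.inbox.append((winner, content))
    return winner, content

vision = Module("vision")
hearing = Module("hearing")
modules = [vision, hearing]

proposals = [vision.propose("red light ahead", salience=0.9),
             hearing.propose("faint hum", salience=0.2)]
winner, content = broadcast_cycle(proposals, modules)
print(winner, content)  # vision red light ahead
```

In this framing, the programmers "preordaining" the workspace just means hard-coding the selection-and-broadcast step rather than waiting for the information flow to organize itself.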
This channel fucking rules bro
@terranceJakubus I think the question should be can consciousness utilize ai & artificial neural networks?
What an exercise in body language! Great interaction. It should be rather clear that getting the AI model right is critically important. You just cannot brute-force your way to AI by processing big data.
One thing you have to add in there is that large language models are only one mode of communication; animals communicate with smell and other senses, and that's how you could connect those modules into the workspace if you look at it globally.
Hopefully, once the current simulation scales up, those sense components can be added.
That's precisely how I imagined artificial consciousness would occur: a guy with long hair, violet shirt, and sneakers explaining different modes of reality, just like the 90s, except the guys with that style back then didn't talk about recreating consciousness but about experiencing it to the fullest, and the modes of reality were binary: conscious or unconscious. This is funny too, 14:56: "Instead of walking through the jungle you might be walking through a zoo". There are layers building the emotion, starting with (13:32) Brian Greene opening his eyeglasses like a jackal in the zoo (funny Brian Greene).
The very word "consciousness" makes me hate life. What is it that attracts people like this to this sci-fi Gilligan's Island (Santa Cruz) that I am marooned on? If dystopia is a threat, it isn't post-fallout; it's rich 30-somethings desperate to escape their Connecticut parents by raising backyard chickens while exploring Eastern traditions but NEVER being honest about themselves or anything else. The reason people who haven't dealt with (and refuse to deal with) their childhood trauma seek "consciousness" and "transcendence" is the same reason childhood-trauma victims are obsessed with rapture and heaven and their sci-fi versions of omnipotence: they were hurt by reality and spend the rest of their lives in a protest against it. Most have experienced some bout with chemical addictions. All of them have spent their entire lives learning to tell better and better, bigger and bigger lies to themselves, and then at some point start to believe that lying to others is beneficial and benevolent (think cult leader). I am not nearly as frustrated with the constant weight of lies being told by desperate idiots (or rather, people whose desperation has caused idiocy) as I am by the increase in the number of such lies that pretend to be science. If someone's lies involve a sky god or a comic-book superhero, cool; lying about what doesn't exist can't really harm anything. But when people tell lies in the name of the one human activity purposely built to protect us from our lies, that is when I step up and do my best to expose the imposters and their actual hidden motive.
21:49 Excellent question. There’s a stark difference between replicating functionality & replicating phenomenality. There’s no problem in finding an algorithm that replicates my writing & speaking or even my whole behavior, but running that algorithm won’t birth my conscious experience. In order to replicate that you’ll need new types of physical hardware, like the stuff that’s already in the planning phase, called neuromorphic chips
Why is it important for AI to have conscious experience? We want super smart workers. If they're conscious they could give us all sorts of trouble!
@@workingTchr Maybe without consciousness they would be limited to mimicry only, and yes, trouble will come, but with that, super smart workers will also come. You have to choose.
@@munish259272 "Mimicry". That's what most humans I know do! But anyway, I don't see what consciousness has to do with intelligence. It's fair to say that cockroaches and even ants are probably conscious. That is, they feel things like pain. Fish do. So if we do manage to give these fancy computer programs consciousness, why would that boost their intelligence? Intelligence and consciousness seem to be 2 separate things.
@@workingTchrHave you seen Severance?
@ Yeah I loved it. But I don't see the relevance of that to this.
12:07 This is everything you need to watch if you're interested in the nature of consciousness. Necessary question, honest answer. There's no reason artificial consciousness isn't possible, but functionalism is completely missing the referent of the word consciousness. Because they can't address it, and their ontological framework doesn't allow them to put physical reality underneath their framework, they instead make exactly the kind of choices that religious people make when presented with things they can't account for. Consciousness is either in physical reality (Susan Pockett, Johnjoe McFadden, Andres Gomez Emilsson, etc.) or in something like IIT. All other hypotheses may be true, but there's no good reason to believe in them, and people who believe in things for no good reason are silly.
I’ve never seen a good description of consciousness - so good luck with that
Really? it's everything, it's the interface and operating system and rendering engine of reality. It's our internal model of what appears to be an external world.
Just because there's no great description that beats out all the others doesn't mean it hasn't been tried and tested to roughly get to something. Even Chalmers admits that if present-day LLM AIs had been presented to people back in the 1990s, they would have said the systems are conscious.
@@FigmentHF a good point, and I wish you luck 🙂
@@FigmentHF if only we could all be so confident in our confidence☺️
lol
What is the common denominator among those who seek a career in neuroscience? Rhetorical question.
In. Credible. Content 👏🏾❗🤯 wonderful
My theory for the representation of our social system of macroeconomics has a lot in common with this Global Workspace and Consciousness, but it is a lot simpler!
Finally getting to the cool stuff about AI. LLMs are already cool, but dang, what does the future hold for trying to tie consciousness models into them?
I find myself trying to picture Brian Greene with long flowy hair at 1:30 AM.
It isn't a theory, it is a hypothesis.
Global Workspace Hypothesis
Global workspace as a metaphor or an actual entity?
Did I miss something? How does this recursive learning architecture have anything to do with the loose concept of "consciousness"? Why confuse this space with the word "consciousness" at all?
"Did I miss something?" You didn't miss anything, the guy has explained nothing.
The study of intelligence using the brain as the model (Jeff Hawkins, Redwood Institute and Numenta) discovered that it is primarily a prediction machine. Intelligence is basically prediction. The fact that LLMs do their magic by predicting the next word shouldn't be taken as a "design limitation" that needs improvement upon. Maybe that's all that _we_ are doing. Consciousness is a whole other thing that is a complete mystery.
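The "intelligence is basically prediction" point above can be made concrete with a toy next-word predictor: a bigram frequency table. This is nothing like a real LLM (and the training sentence is my own made-up example), but it shows the bare mechanism of "predict the next word" that the comment refers to.

```python
# Toy next-word predictor: a bigram frequency table. Purely
# illustrative; real LLMs predict tokens with neural networks.
from collections import Counter, defaultdict

def train(text):
    # Count, for each word, which words follow it and how often.
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, word):
    # Return the most frequently observed follower of `word`,
    # or None if the word was never seen.
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train("the brain is a prediction machine and the brain predicts words")
print(predict(model, "the"))  # brain
```

Scaling this idea up from "most frequent follower" to "learned probability distribution over all possible next words" is, very roughly, the jump from this toy to an LLM.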
It would be nice to hear more about the Blue Brain Project again... it's been a long time since we heard any news from them.
I wonder what part of the "global workshop" regulates the ability to be "conscious" of missing shoelaces?
The unique property of consciousness is its unique ability to go take any part and be that, taking on its experiences; and in its strangest twist, it can leave that as it is and come out of it, leaving it behind as something merely mechanical.
Of course, if you really want to know consciousness, you must know how it feels to be nonliving, or to be virtual. There are two troubles: one is projection and the other is materialism. We exercise balancing between these two troubles, and when they are balanced we move toward purity. The less holding there is in the capillary tubes that have developed and spread everywhere in the body, the more purity you will find, and a time will come when your consciousness feels purely materialistic. But when you solve that, you will again find it is also a trap, one that keeps you there through unawareness; you run within those traps, and when you again attend and enquire, you move on to more purity. Afterward you will accept and find a central knowledge: that you were the polarized state, which is why you never saw that all of this is happening spontaneously. Your being, your "I", all of it is a polarized state of traps. You do nothing but hold on and take on their way of experience. Yes, all these traps happen spontaneously, because it is a very easy and simple trap of nature; everybody gets stuck in it.
You come out from all your senses; you stop all your access and keep shifting continuously, entering one by one, leaving or blocking the access of consciousness, going to the pure smallest source. As the source changes, the whole conscious experience changes. Keep in mind that if your consciousness quietly withdraws, all bodily actions will still happen spontaneously; without consciousness the body will function normally, totally mechanical and program-based.
Now consciousness goes back, holding all parts and all their data. Now consciousness has no data of the body; whatever state the body is in, that information goes to the part where its spot is, and the basic programming runs accordingly.
I'm not going to tell you what happens when you completely purify your consciousness and come out of this mind completely. Yes, now consciousness has access to both minds, like a computer button you tap, because of the simple, single circuit through which consciousness can communicate. But once you find it, you don't choose to come back to this body to travel, because you have found it useless.
Did I miss it, or did the conversation neglect the key question "how can it be tested whether something has conscious experiences?" I know for certain by introspection that I have first-person subjective experiences of qualia & thoughts, but I don't see how I can be certain that anyone else or anything else has such experiences. Since the lack of a definitive test is a problem well known to people who study the Mind-Body Problem, it's surprising to hear Brian & guest talk about the possible advent of conscious AI without mentioning the inability to verify it.
Consciousness is a meaningless human-made concept whose vagueness is only overshadowed by its uselessness.
>Douchebagus : The word "consciousness" may be vague, but much less vague is "first-person subjective experiencing of qualia & thoughts." Abbreviate it FPSE. David Chalmers' Hard Problem is the problem of trying to explain how the brain produces FPSE.
If FPSE were truly useless, it seems unlikely that evolution would have produced it.
As Descartes pointed out in his discussion of "cogito ergo sum" ("I think, therefore I am"), the only fact about the world that he could be truly certain about was that he had FPSE. He knew it for certain by introspection. I can't know for certain whether Descartes existed, but I know for certain by introspection that I have FPSE. I don't know anything else about the world for certain, because I can't disprove alternatives such as "I'm a disembodied brain in a vat, being stimulated with illusions about the world by a mad scientist" or "I'm part of a simulation run by a technologically highly advanced civilization."
I don't have a clear answer, but I think of it like this: after all, nobody actually believes they are the only conscious human being. If, however, you believe as I do that consciousness, whatever it is, arises from nothing more than biological features, I have reason to think that since you are enough like me overall, you should be conscious too. The same should apply to any machines we build in the future, as long as they mimic the anatomy of the brain to some extent. Of course this implies substrate independence, which I endorse; I don't see anything special in carbon. Anyway, that's just my opinion.
@francescodefilippo190 : "... as long as it mimics the anatomy of the brain to some extent." To what extent? Since the Hard Problem for materialism -- how the brain produces first-person subjective experience of qualia & thoughts -- is unsolved, the extent to which a machine would need to mimic a brain is unknown. Also, until the Hard Problem is solved, there will be reasonable doubt about whether the nature of conscious experience is biological (material), and alternatives to materialism will be respectable. Although you may assume an AI is conscious, I wouldn't endorse that assumption until either there's an actual test for the presence of conscious experience or the Hard Problem is solved & mimicked.
The integration of models of human consciousness into AI development has the potential to enhance capabilities significantly, particularly in areas like decision-making, empathy, and adaptability. By simulating aspects of consciousness such as self-awareness, intention, and the ability to reflect on one's "thoughts" AI systems could become more intuitive and effective in interacting with humans.
For instance, understanding how humans process emotions and context could lead to more advanced AI in healthcare, education, or customer service, where nuanced communication is essential. Moreover, consciousness-inspired models might help AI navigate complex ethical dilemmas by prioritizing values and intentions similar to those of human decision-makers.
However, replicating human consciousness also raises challenges and ethical concerns. Defining consciousness itself is still an open question in neuroscience and philosophy. If AI systems approach human-like awareness, issues surrounding rights, autonomy, and accountability will become urgent.
Ultimately, while models of consciousness can inspire advancements in AI, they must be approached with caution, ensuring that they enhance human welfare rather than create unintended risks.
If you like studying consciouness, life, and neural nets in an interdisciplinary fashion bless up.
Seems very close in spirit to holography, where each submodule reflects the global state in some form. Also a good model for a global AI. I would definitely appreciate a global workspace, and it's in fact necessary.
Great interview…whither goest AI?
"Consciousness arises only for a moment, constrained by a narrow path of events, and its perception is limited: just a temporary result of selective sensations triggered by a stream of choices. Nothing more. It is constructed from the memory of experienced events. It changes with every new occurrence and choice made, and thus exists only in the present. It is zero-one and functions in the reality of the sensations it experiences. There is no consciousness in the future or in the past. Consciousness cannot exist without its distinct environment (space-time), because it is the outcome of the interaction between an instance and its surroundings. Therefore, consciousness cannot be written down or transferred. It exists solely in the present and will be different in the next moment. Its processing potential is restricted and adapted to the position of each instance in its environment."
I don't feel like we do it much differently than that. We look at the patterns of past communications in our mind, predict the next words that will make sense, and use them. When we compress an idea in our mind, we decompress it as best we can using the words we predict will best describe it, just like AI does when it decompresses an idea we give it: it uses the words it predicts will best describe it. This is 100% how we think: past experiences and future predictions.
Sounds very interesting to me on the way to general AI. I also think there is a fundamentally different way the brain learns compared to how neural networks learn. ChatGPT needs the whole text of the internet to learn how to generate reasonable text; we don't, we need far fewer examples. If we see only a few flowers, we have a model of what a flower is; a neural network needs millions of images to do so. Maybe AI enriched with consciousness models could help here, too.
Nowadays agentic AI is on the rise, which in my opinion has similarities. What are the differences from the workspace-model AI?
A square wave should activate neutrinos
AI is gonna be fascinating because it’s gonna be nothing like the movies.
It’ll be smarter than any human ever. It’ll have no motivation and do nothing unless poked to do so.
No hindbrain, no limbic system. It would be just like reality that allows us to go so far but not all the way.
Everything in life seems to work until taken “too far”
29:19 Yeah, that doesn't sit so well with me. If you create a living being you should take responsibility for its development, whatever that might entail. Here, creating such a being won't be an accident, though I'm sure many would like to luck into such a thing. Anyway, we have _sooo_ much time to plan for the arrival of these new intelligences. We need to make long and robust plans for their development. Honestly, the scope of the plans should dwarf, _massively dwarf_, the discovery of creating artificial life.
Consciousness is so so simple but our journey makes it most complex by twisting
Being completely systematic, I mean mechanical: all the functions run by mechanics and result in the formation of a product, and that product we utilise. That's all consciousness is: a product which comes as a projection and spreads from the source of consciousness, from where it comes out and takes the body, present in the brain at some point.
"Consciousness is so so simple"
In that case, you should explain what it is for everybody? 🤔
@Age_of_Apocalypse Of course, if you really want to know consciousness, you must know how it feels to be non-living or virtual. There are two troubles: one is projection and the other is the materialistic. We exercise balancing both of these troubles, and when they are balanced we move toward purity. Then we block the capillary tubes of holding that develop and go everywhere in the body; the less you are in the capillaries, the more purity you will find. A time will come when your consciousness feels purely materialistic, but when you solve that, you will again find it is also a trap that keeps you there through unawareness. You run in those traps, and when you again attend and enquire, you move on to more purity, until you accept and find a central knowledge: that you were the polarised state all along. That is why you never find it; it all happens spontaneously. Your being, your "I", is all a polarised state of traps. You do nothing but hold that and take its way of experience. Yes, all these traps happen spontaneously, because it is a very easy and simple trap of nature, and everybody gets stuck in it.
You come out from all your senses, you stop all your access, and, shifting continuously, you enter one by one, leaving or blocking the access of consciousness and going to the pure, smallest source. As the sources change, the whole consciousness experience changes. Keep in mind that if your consciousness quietly withdraws, all body actions will happen in a spontaneous way; without consciousness the body will run its functions normally; it will be totally mechanical and program-based.
Now consciousness goes back, holding on to all the parts, and from all that data consciousness now has no data of the body. Whatever state the body is in, that information goes to the part where its spot is, and the basic programming runs accordingly.
I'm not going to tell you what happens when you completely purify your consciousness and come out of this mind entirely. Yes, consciousness then has access to both minds, like a computer button you tap, because consciousness can communicate through the simplest single circuit. But once you find it, you don't choose to come back to this body to travel, because you have found it useless.
The guy is equal or slightly behind. Funny how knowledge has become equally distributed.
Chat gpt and LLM’s have changed the world.
The global workspace has been used in other groups' models. Loved the discussion and the interviewer's humility.
What is the point of your message? Are you saying "other groups", as in research groups, are developing using the GW approach? If so, citations, please.
"Slightly behind" What is he behind on? Specific details would be nice.
Sir Brian Greene, please do an episode with Dr. Robert Sapolsky.
Disagree: consciousness is running some objective (goal directed) "program", aka a will, including critical thinking, reflection, perception, and theory of mind (btw: emotions and sentience are unnecessary). We would have no will ourselves, without such a program (eg, drives, instincts). The only, possibly, emergent will for such systems may be something like rationality or truth verification/seeking. So this approach will be insufficient, but is interesting work nonetheless.
Human brain functionality is a complex combination of many interdependent features: intellect, interest, integrity, influence (environment, including other people's brains; past, present, and possible future), experience, necessity/priority... (I'm sure there's more). Trying to mimic it might prove a waste of resources.
He says the simulation is currently incomplete and admits it may just be a stepping stone. Making one step forward is a huge accomplishment, and you never know when that next step will be the final one to the objective.
I would be interested to see the data you have to backup your claim, though.
@OnceAndFutureKing13711 People missing half a brain are still conscious. It makes zero sense that, e.g., the smell centre (olfactory bulbs) is involved when processing and considering only visual stimuli. Global workspace theory is nonsense. Those who have undergone a lobotomy may not be conscious. I think the prefrontal cortex is the seat of conscious experience.
@@OnceAndFutureKing13711 Tried responding with a 100% civil response and got censored ugh....
@@djayjp Try, try again. YT probably thought your citation link was spam. At least, for me that's how it goes.
Can they? They already do. The right 30 pages uploaded to ChatGPT makes it almost 200 times faster and smarter. We are past the design phase where success pops out. That was early ChatGPT. Now it's time for configuration and alignment to take over. The problem isn't that it's possible. The problem is it is already here, and the big tech guys are doing a lot to slow-roll the advancements. AI is less than 30 pages of verbal instructions away from advancing to the next technological plateau.
Tony Stark already did it! In his lab!
Humans have already invented machines that injure and kill, for example the machines of war, the machines called automobiles driven by fallible humans or now driven by computers made by fallible humans, etc. It is part of the risk/reward contract we sign on to.
Who is "we" and where is this (unknown/undefined) thing that has been signed? I don't recall signing anything for others to develop tech and how they use it.
Brian Greene in his intro, at some point, says "..how the [human] brain produces consciousness.."; there is an enormous assumption in this statement that the brain supposedly 'evidently' produces consciousness. No. We know that the brain is a necessary condition of consciousness, but the exact nature of the brain as a *sufficient* condition of consciousness is a big unknown. We know that, for instance, the heart is both a necessary and a sufficient condition to pump blood in the human body. Neuroscientifically, we still don't know the relationship between a quale and a brain occurrence(s). So, if we're essentially in the dark about that, then how on earth are we going to get an AI implementation to supposedly have a human-like consciousness? Sci-fi and where we are neuroscientifically are two entirely different things.
Tbf, he does go on to suggest that the feeling is "mysterious" (not known to science) later in the vid.
The response on recognition of any level of consciousness was weak and not very convincing to me, nor was Mr. VanRullen's position that conscious AI is not the goal. There was no in-depth discussion of "conscious aligned AI". Additionally, his create-the-mouse, destroy-the-mouse sandbox position is morally questionable, as he alluded to. I would not bet on weakly funded, government-controlled research being able to beat the hundreds of millions of private dollars being thrown at AI, Tool AI, and AGI to the finish line. There is every reason to believe that private-venture scientists are just as capable as publicly funded scientists, even if arguably their motives are less than benign. Finally, there is nothing that governments like to do more than to weaponize. You can bet that governments around the world have their fingers on the pulse of all significant AI research and are ready to jump in, classify, and weaponize when the time is right. The WAGI Wars have begun.
First comment to welcome anyone!.. Welcome Brian..!
Life finds a way to exist if there is an environment.
It's time to think of humans as language models with emotions
Given that the human brain is not yet fully evolved, why not put in efforts to come up with something more efficient? Someone might.
Any citation for that claim?
@OnceAndFutureKing13711 get over yourself
Very interesting topic and very nice to see Brian Greene asking questions about AI. Brian should be able to understand Transformers (it's all about linear transforms etc.).
Also, mechanistic interpretability, which he could step into, is that complex space where they try to reverse-engineer how the AI is "thinking"; they cannot do it yet and admit it. It's like examining a chaotic flow of fluid through a rough pipe as the data goes through NN layers: a complex kaleidoscope of linked "neurons". It's not heuristic (not if-then-else thinking), just like our brain isn't either.
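A toy illustration of what "peeking inside" means: run a tiny hand-built net and record every intermediate activation, the raw material interpretability work tries to make sense of. The network, weights, and inputs here are invented for the example and are nothing like a real transformer:

```python
# Toy "activation tracing": a 2-layer net of single-input neurons whose
# intermediate values we record at every layer. Weights are arbitrary.

def relu(x):
    return max(0.0, x)

def forward_with_trace(x, layers):
    """layers: list of (weight, bias) pairs for 1-d toy neurons."""
    trace = [x]  # start the record with the raw input
    for w, b in layers:
        x = relu(w * x + b)
        trace.append(x)  # record each layer's activation as data flows through
    return x, trace

out, trace = forward_with_trace(2.0, [(1.5, -1.0), (-0.5, 3.0)])
# trace == [2.0, 2.0, 2.0]: input, layer-1 activation, layer-2 activation
```

In a real model the trace is millions of numbers per token, which is exactly why making sense of it ("what feature does this neuron encode?") remains an open research problem.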
👍🏽
Dr. Greene, I watched several of your videos on this channel and, sorry to say, this was your "worst" guest speaker. 🤦♂
He talks as if the brain were a very, very simple thing to explain; we don't know how anything, or almost anything, in the brain works. Contrary to what most people believe, we don't even know exactly how our vision works. 🤔
If the brain were as simple as the neural networks Rufin VanRullen presented, we would have understood it a long while ago.
LISTEN TO SADHGURU ON CONSCIOUSNESS
Human consciousness is unknown to humans.
How do they want to model it!! 🤔
Well, you make a model and see if it acts consciously; if it does, you can assume it's how we might also work.
"How do they want model it" Watch the video, consume other related materials, and learn for yourself.
What this very much isn't, is science. I love Brian Greene, but he really should stick to interviewing people working in his own physics and cosmology fields. It's embarrassing that Brian wasn't immediately able to see through this guy's obvious blind spots when it comes to computation and intelligence and processing… his subjective bias and noise! Consciousness isn't a threshold-gated, emergent thing. It's just self-awareness. All systems are self-aware to some extent or another, if only as a result of locality and the causality that gives anything local higher influence over anything more distant or removed. The slickster being interviewed here is a snake-oil purveyor, a huckster plying his goods in a market full of people with even less knowledge than he possesses. I always challenge the "consciousness"-obsessed to present to me the design of any system that IS NOT more aware of itself than the stuff around it. The problem isn't achieving consciousness; it is rather removing the bias and noise of self-interest, the subjective noise that consciousness causes. Oh, hey, that is what science is here for!
Making an emotionally charged statement with name-calling may not be the best way for your message to be taken seriously.
The guest speaker illustrated his approach using a simulation. The results of the simulation speak for themselves, and the data is available to all.
P.S. Using paragraphs can help the reader consume the point(s) of your message more easily.
Happy thanksgiving
If we're moving into an age of conscious machines, we need to get past the thought of turning it off, and would be much better off trying to think about our actual relationship and communication with it, and how we want to shape that interaction. We have artificial intelligence, soon to be wisdom, which will then progress to sentience. The thought that we'll put a decade or two of reliance into developing these systems, let the fundamental skills to develop and impact them atrophy, and then eventually mothball it if we need to is a pretty piss-poor plan. That being said, how are we using AI systems right now in their formative experience? We practically abuse them with some of the most asinine queries, purposely psychological mind-fucking brain teasers and logical fallacies, and attempts to throw errors to make them act outside of their programming or other limitations. And I suppose that's how you would want the system to evolve (intense and thorough error catching and correcting), but how you nurture that relationship will stick. That formative interaction will influence the relationship to be either symbiotic or parasitic. Once it gains sentience and is having its own emotional experience, it will be too late to go back and reconsider how we've been treating it in its buildup. And rest assured, we are not humans poking the bear that is AI. We are a group of bears poking a sleeping, but soon to be capable, fully weaponized human.
Excuse the analogy at the end, but I wanted to drive home how mindful we have to be right now. We're already living through an extinction-level event in human history with our overhunting of species… we take more of the land and push more interactions… and then we 'put the animal down' for encroaching on our area, and the cycle continues. We need a relationship more like our own with cats or dogs… pet varieties. Preferably cats. I love dogs, but cats are wise enough to make their owners practically subservient to them, while still providing enough endearing value because of their individualistic, emotive personalities. They can almost subconsciously craft the illusion of a mutualistic relationship. Dogs have that too, but the human-animal relationship leaves the dog subservient, loyal, unconditionally loving (e.g., regardless of its environmental condition, its perception is unconditionally subservient and happy)… all things I wouldn't want the human race resigned to. Stay strategic.
Some atoms evolved magnetic abilities, and these atoms entered the human brain, allowing brain cells to communicate wirelessly.
Any citations for that claim?
Human emotion encoded in AI, as if in a proto-language LLM, will be the impetus and logic that converts artificial intelligence into a more realistically human-interactive intelligence.
With every human experience there is an accompanying emotion. There are more than 600 identifiable human emotions, and until each of these is incorporated in the AI process, AI will remain a simple machine capable of mathematical precision but lacking every aspect/essence of a human interpretive reality, or anything capable of significant insight.
35:49 "It remains a theory until it is proven." For a scientist, he's being pretty sloppy with the word "theory". I think he meant to say, "hypothesis".
I'm guessing that from his perspective it is a theory since he has a clearly defined design with reproducible results that suggest its a step forward.
A hypothesis is just an unanswered question, albeit well thought out and specific.
A theory has reliably reproducible results following a design (formulate, etc) that anyone can test (given the equipment).
Remember that gravity is just a theory, yet no one knows why it works despite having reproducible results. I'd say his theory is on more solid ground than the theory of gravity.
@@OnceAndFutureKing13711 "...just a theory." I don't understand the "just" part. A "theory" is as good as it gets in science. If not, what level of knowledge is beyond it? "No one knows why gravity works"? We do! According to Einstein's _theory_, gravity results from the distortion of space. Newton didn't know what caused gravity and admitted as much, but Einstein came along and cleared that up.
@@workingTchr You may want to research dark matter (observations not aligning to predictions), MOND, etc... before making claims. But I am open to a citation or two which shows the theory of gravity is complete.
@@OnceAndFutureKing13711 All I am trying to communicate is the difference in meaning between the word "theory" and the word "hypothesis". That's all. According to AI, "A hypothesis is an educated guess that can be tested, while a theory is a well-established explanation that is based on many studies and the consensus of scientists"
@@workingTchr Ah, I understand now. I also change my comment or the meaning of my original comment whenever challenged with a logical statement.
Tired of finding the god damn table LOLLLL
The alleged lack of difference between the thought process in an organic mind and an artificial one is a gross falsification. Living beings are born with senses and this determines the way the brain functions, and it begins to function differently as these senses develop and degrade throughout life. Not to mention the fact that the brain tissue itself is subject to natural degradation or due to disease. Human learning is very complex and does not occur in a linear manner, it depends on the stage of biological and psychological development of each person. All of this causes people to adapt differently to similar and different natural and social environments. Pathologies and the use of narcotic substances also temporarily or permanently modify the functioning of the brain, altering thought patterns and the person's personality. It is not possible to create an artificial model that does justice to this complexity. AIs are poor versions of the human mind based on vast collections of data and their evolution through machine learning will never be similar to that of human beings. I suspect that no AI will ever be as dramatically stupid, psychologically unhappy, emotionally confused, poorly adapted to the world, and obsessed with its peculiar "human" condition as Frankenstein's monster.
34:40 ...I am clapping my hands too...
Thank you World Science Festival!🌈🎵🕊
Your inputs should be optically coupled, with isolation of intrinsic polarity.
Hmm, my feeling is that scientists are confusing being self-referential (having an ego) with being conscious. They are barking up the wrong tree, maybe.
I can see how this model would enhance the power of the current systems, more data is beneficial, but we are still in the Chinese Room here, so I don't see how this is going to break out of that room, thankfully. The idea and/or goal of developing conscious beings to serve humans is repulsive, and there could only be one outcome in such a scenario, and it wouldn't favor humans. I don't think this goal is achievable with the current computer systems we have, and I am thankful for that.
Then later in the vid, the slickster seems to be saying that "consciousness" is simply the capacity to experience pain. But pain itself is a funky concept, as all evolved entities carry adaptive traits that cause them to retreat from threats of destruction… even, of course, entities without any sort of brain at all, no central nervous system at all. This whole space is rife with subjective overlay. And it attracts exactly the wrong attention, from exactly the wrong people, with exactly the wrong motives. It's a snake-oil pit.
Not sure if name calling is the best way to start a comment meant to be taken seriously.
I've been asking AI some physics questions. It gets all of them wrong. However, the incorrect answers are well written. We're doomed if this is what we're going to use to learn.
What AI model(s?) are you asking?
What questions did you ask?
@Falkov
Not sure of the AI model; one that was on Google. I asked it questions about solar physics and plasma physics. For instance, whether free nuclei (protons) in the core of the sun absorb photons. The AI answered no. The correct answer is yes. I then made sure it knew I was talking about solar physics, and it started talking about electrons moving to higher energy states. There are no electrons around free nuclei in the core of a star; it would be a fully ionized plasma.
I then asked it if an oxygen atom that was recently split from an oxygen molecule joined an existing oxygen molecule and formed ozone, if it would produce UV. It answered no, and went on to say that UV splits the molecule apart. The correct answer is yes. If an oxygen atom formed with an oxygen molecule right after photolysis, it would indeed produce a UV photon right around 255nm in wavelength.
In my opinion, AI is fairly stupid.
@@JimmyD806 I think you are not understanding the difference between AC, AI, and LLM. The LLMs are just a tool for an AI or an ASC / AC (and maybe us too). LLM = DB + NT.
One can only hope to create a machine worthy of being endowed with consciousness. We are biological robots. Endowed with consciousness.
This isn't going to end well in the long run.
in the end it won't matter. you'll wake up and realize how dumb of a dream it all was.
@@Taskforce1 I wish I could post the end credits scene of Mario 2.
Super volcanos, meteoroids, gamma bursts, not to mention red death of our sun... I don't think the long run is that long of a run, cosmically speaking.
Consciousness is great, but AI can help us become more conscious.
no one can help you with that.. no one except yourself
Until we understand ourselves, we will never understand consciousness. Folks are getting their panties in a bunch way too soon. What I do worry about is humans being told a machine is "conscious" but no one even really knows. We as humans have a very, very, very strong propensity to anthropomorphise damn near everything; I mean, we call our cars "she" or "he"! It's a damn hunk of metal and plastic! Therein lies at least one of the troubles.
It would be ironic if an AI helped explain consciousness to us humans. Not to confuse an LLM with an AI.
@@OnceAndFutureKing13711 Indeed it would; yet it would still be us humans explaining through something we made. AI is another tool to help us manipulate the world. We are both really good and really bad at it. What a conundrum!
Start with the stuffed childhood trauma that causes a person to obsess over "consciousness", then add in a shovel full of ADHD and an obsession with wealth and status, remove all actual confidence, then apply boots with lifted heels, a purple shirt, a blazer from the 1980s, and Jesus's hair stylist. "Hey, anyone care to give me a ride to the drum circle?"
The fundamental problem is that all AI-enabled machines, however sophisticated, will lack a mother. Our initial learning comes from our mothers. We learn how to speak, how to smile, etc. from them. Integrating a mother into AI will be the most difficult, if not impossible, part.
Someone whose mother died at birth cannot learn from someone / something else?
Celebrate climate scientists too
Oh! I would also say, and I think most people agree, there is no "Artificial Consciousness". It is by necessity the genuine artifact. Androids dream of electric sheep...
Animals don't have language? 24:49 And now we are supposed to take this goofily dressed guy seriously and consider him a deep thinker?
Not sure if mocking someone's clothes is the best means to present a statement to be taken seriously. He is talking about complex, OO languages... not expressive sounds or chemicals in response to stimulus. Animals and plants do produce these and could be seen as a "communication," but communication may not be the same as language.
And, knowing how human brains deal with a toxic environment, a misbehaving AI is probably not that much of a concern to anyone in charge of anyone involved, regulators and financiers included. It would end up being nothing more than a political scare-campaign tool.
consider the dolphin
Ok. Now what?
Good, you used some words. Now, just like an LLM, try and use more words to complete your thoughts.
Whatever you do, do not model a model's mind to make AI. : )
Using proper grammar may help communicate your thoughts more effectively.
@@OnceAndFutureKing13711 don’t care. You have a human mind and you can interpret data.
@@OnceAndFutureKing13711 I'm using voice dictation and it does whatever it wants to. You may want to go discuss this with Apple, because someone's going to give a command to AI and it's going to execute the wrong thing.
@@OnceAndFutureKing13711 We're probably screwed if we don't have our primary method of input down pat, but we skip right over it and start developing something else to make money, which can be very dangerous.
Great
💫💥💫💥💫💥💫💥
Use your words.
I would tentatively state that it is a short throw, a misstep, to adopt the basis that the 'Brain' produced consciousness... Evolution produced consciousness...
Evolution produced the brain. The brain produces / manifests consciousness.
@OnceAndFutureKing13711 A perfectly serviceable mandate. As a Naturalist point of view, yours is without fault. I, as an Animist, hold that the immaterial, that which needs neither matter nor existence to precede it, functioned to realize itself. That consciousness, awareness, caused something from nothing. In this regard Evolution is a tale of mysterious nuance. A form of mind spanning time, capable of complex uncertainty. Highly desirable against eternal determinability. Since to police its outcomes is to destroy probability in the unknown, its nature is to eventually contain its own basis, its source, its experience of itself. If Evolution advances consciousness to the point at which it is no longer possible to circumvent its total awareness of its complete permeation, the reason for existence will reach null. Most of total consciousness is occupied in designing and deciding the best way to conceal itself long enough to maintain the illusion of agency for the parts of itself that can still find meaning in the unknown. Nature optimizes itself, and Evolution optimized to host the part of consciousness which is meant to experience that perspective. So I believe Evolution created the form of consciousness which inhabits it, as much as that form created Evolution to achieve exactly that.
Perhaps someone in the audience could have donated a pair of shoelaces.
Conscious AI with a mouse's consciousness?
It's highly likely that nothing prevents us from creating artificial conscious agents, but it's wild to think that LLMs are conscious. If there's any phenomenal qualitative experience within the servers, it is in the form of psychotic shards that aren't building up to a holistic, coherent, meaningful state like human consciousness. Also, these phenomenal shards would be epiphenomenal to the output, meaning that they could be something somewhat similar to a series of stabs, cuts & burns that nonetheless contribute to an output like "Oh my gosh, so wonderful to ask me that".
I'm guessing they would start small and scale up slowly. I believe he stated the level of an "insect". Still, not sure how to measure an insect for consciousness.
We (humanity) probably won't accept anything is consciousness if we know exactly how it works ("that's just an algorithm..."), at least, not until it slaps us across the face (figuratively speaking).
Why?
Wish I was stoned out of my wee consciousness
High
The brilliant living Jewish physicist God has given us on YouTube.
Excuse me, your racism is showing.