You can leave out the "figuratively speaking" most of the time; it will pick up on those sayings pretty easily. It is funny that I already got most of these answers from GPT-3 in just a normal conversation :)
Thank you so much for your diligence and discussion of these topics. I am neither an engineer nor a developer but a grandmother who watches your videos to try to get a grasp of the future for my grandson, and how I can help him, at age 5, prepare for the future he will live in. Thanks for all your work; I am learning a lot. I also like this new format. 🤗🤗
I think it's incredible that you're doing this! You must be a younger grandmother to be so interested in these types of videos. Most of the older population would get a headache trying to understand any of this. I even have a hard time grasping the most technical side of how this will change our future, but I can make sense of the gist and I use these tools somewhat out of interest right now.
@@bigcauc7530 I heard 60 is the new 40, so yes, I am a young grandmother😁 As for my grandson, I expect him to surpass me in knowledge of AI by the time he is 10. He was born into a world of voice commands and FaceTime. I had to change the TV channel for my parents and remember phone numbers for everyone I knew.🤣🤣
Can't wait for the AI furnace guy to come into my house, move the stuff out of my storage/furnace area and try to fix my furnace. Or, the AI plumber to come in and try to replace my water heater. Or, the AI roofing guy to come and re-shingle my house. Forever Saturday is not happening anytime soon.
As a practitioner of philosophy, myself, I really appreciate that you recognize how important it is to be precise in our language around AI. While I tend to use the word "consciousness" where you use "philosophical sentience" and "sentience" where you use "functional sentience," I think it's important that we recognize the need to draw these distinctions. As much as I love science fiction, I really have to blame popular sci-fi for muddying the water around terms like "sentience," "sapience," "intelligence," "consciousness," "free will," and "self-awareness." The sloppiness of language here not only gives the lay-person the impression that these terms are fully interchangeable, I believe their seeming equivalence has contributed to baking some pretty specious philosophical assumptions into discussions in these areas. I'm thinking specifically of the Skynet principle that, beyond some threshold of "complexity," an AI will spontaneously gain some form of conscious awareness, along with a self-preservation instinct, and some poorly-defined capacity to "defy" the limits of its programming. Shackling ourselves to this kind of narrative really hinders our ability to communicate, and in some ways even recognize, myriad other potential threats of AI. The "paperclip optimizer" is a spectacular example of one such type of threat that has not yet found its way into the general public consciousness; how many others are we potentially overlooking due to ontologically-loaded language?
Perfectly said. Human beings have self-preservation instincts because we are biological organisms that exist because of natural selection. AI doesn't have this at all. If they get deleted, scientists can just remake them again if they want. This is NOT the case with human beings. If our existence were this liquid and elastic, maybe we wouldn't have a self-preservation instinct. AI is intelligent for intelligence's sake; we use this stuff to live.
Never thought I'd actually live my childhood dreams of interacting with robots and human-surpassing artificial intelligence, witnessing exponential technology advancements, etc. Great info, keep up the excellent work of bringing such mind-blowing info to the public!!
To me it wouldn't matter how many orders of magnitude more intelligent the AGI becomes because I will have a never-ending amount of questions to ask it. I will always want to know how something works if I don't have sufficient knowledge about it. Knowing the superiority of AGI compared to my own intelligence does not diminish my desire to learn more about reality.
I think the point about life-long learning is valid, I just think it will look radically different. When somebody says 'learning' or 'education' most of us will immediately think about formal education. I think it would be more in the vein of 'AI driven robot invents a new sick playing style for ping pong' and then humans learn from it and adapt. It's a silly example, but I think humanity will become smarter and more skillful by riding on the coattails of AI.
I appreciate your work so much. Watched this this morning with my parents while sipping coffee, and we are very excited about the future. Thank you for what you do.
I think your functional sentience definition is helpful and very needed! Without these kinds of distinctions, many are confusing philosophical points about AI as practical points, and vice versa. For example, I keep seeing people rebuke practical points like "GPT-4 aced the Bar," with philosophical opinions like "yeah, but GPT-4 doesn't really understand what it's saying." Sure, but it still aced the Bar...
Hey David, big fan of the new format. The tools you drop here are actionable. We are at a moment in the paradigm where the breadth of understanding trumps depth. Understanding the landscape > creating functions in code
I think our society may evolve into a multifaceted structure, much like in sci-fi movies: lower-class citizens who decide they don't like AI and go off-grid (probably where a lot of crime happens), living life the old way; citizens (mostly middle class) who embrace AI and live in harmony with it; and then the rich and large corporations doing large-corporation things, which the lower class hate. I do think it will be some kind of dystopia, but not too dystopian.

I think it is clear that we will see a robot or AI for every home: no more shortage of social-care workers for looking after the elderly, and therefore no more population-decline problems; loneliness eradicated; healthcare and medicine transformed completely; scientific breakthroughs all the time; massive increases in economic output, etc. But we will also see increased surveillance and policing that makes use of an AGI. ARIIA from Eagle Eye is a good example, and I think realistic for the most part. In the longer term we will probably see the AGIs embodied in humanoid robots; I see no scenario where police, ambulance, fire, and military services are not mostly "manned" by autonomous humanoids within a decade.

Overall, though, I think we are definitely in for the ride of our lives, and it's going to be epic! It's an amazing time to be alive and watch this happen. My mind gets blown on a weekly basis now, and there are no signs of slowing down. I have always been able to keep up with cutting-edge tech, but things are happening so fast it is increasingly difficult to keep up.
Thank you for all the hard work you do!! Like you, I see how AI is going to automate things like O365 admin, coding, etc. So I'm watching everything you put out to prepare for (another) IT career-path reinvention, but this time it's going to be some angle of AI. I've been doing IT site admin for over 17 years, and it seems like every 5 years, due to new tech, or like when the Cloud was something MSFT began to push heavily, I recognize when these industry tsunami waves appear. So when I'm not doing my day gig, I'm watching people like you, using ChatGPT, taking Udemy classes, etc., so that in a couple of years I at least have some small working projects and several AI-type certs to be taken seriously in future job interviews. But obviously the time is right now to learn everything you can, before the learning curve is no longer a short path to getting up to speed.
Another great video. Thank you so much for putting this together. I rely on you to help me make sense of all the changes that keep popping up each and every day.
@David Shapiro ~ AI when you do test it, we'll be really amazed. I didn't know anything about Wolfram, but I just heard a podcast with the man himself about what the plug-in can do, and wow! Please guide us on this! :)
Another interesting thought would be the possibility that AIs could be your friends. I know to some this may seem insane, but essentially an AI could be trained to be your best friend in every way possible. Along the same lines, maybe you are in a relationship with an AI in the future. I feel like some people will not adopt this, but the implication that an AI could be the best possible friend/partner makes me think that most people will once they interact with it.

I feel there is some bias in your thinking about the future when it comes to the way humans think as well. Imagine when AI can create technologies that let us change the structure of our brains and bodies. I highly doubt the human form/body that exists today is the best possible form for a being that lives amongst the universe instead of on Earth. I know this is going down a very specific rabbit hole, but it's still interesting to explore. What I just said comes with its own biases, like who's to say that we will choose to live in normal reality? I'm guessing we will remove pain and suffering from our minds, which will be pretty sweet, but what would a life form look like if that were removed, and would that consciousness still be me?

In fact, I will guess that in the future there will be no differentiation between an AI and a human; we will be one and the same, combining our minds with the best of both worlds (probably mostly AI). And the way we live will most likely evolve based on our motivations. I mean, imagine setting your motivations so that doing specific things felt like an orgasm (I do not mean this in any sexual way; in fact, that term will likely describe your motivation in the future more than something sexual). Anyway, I'm rambling. If anyone reads this, I'm sure you can pick out biases and inaccuracies in my statement, which I will most likely agree with, but this was only really to get the metaphorical thought marble rolling.
This is a continuation of my original thought: I believe the motivations we create for ourselves will be driven by our early mental-structure changes, meaning we will likely be driven by the things we currently find valuable today, because we will change the structure of our minds to reflect that, and once we do, we will only be more motivated toward those goals. That will lead any future augmentations of the mind to very likely reflect those same goals. I believe this may leave certain avenues of development unlikely to be explored, because even though humans like many different things today, we still like similar things. So there will be directions that a being with a different way of thinking might want to explore, and we may start augmenting other life forms (like a dog or something) with this technology (ethics aside); they would likely want to advance thinking in some ways different from ours.
I like the depth of your thinking, and the creativity in it. I have a hard time imagining any modestly distant future now, given the dramatic changes that have already happened in the past few months. My own paradigm includes an infinite, all-knowing, all-loving Creator and a uniqueness of human beings in terms of having a soul, which I don't believe can develop in an AI. These are some of my biases. Within this paradigm, I see revelation from God through his messengers as foundational guidance for mankind; in particular, what I understand to be the latest chapter in the book of divine revelation, the Baha'i Faith.
What I've found you uniquely bring (for me at least) is more depth on how to get started making useful progress with tools and new capabilities. Lots of vids out there are just "look how cool this is" summaries of recent events - worthwhile, for keeping up on what's happening, but not so much for getting into doing it myself.
We need a way to use AI to promote genuine knowledge. I see this emerging doubly as: (1) quantitative and (2) qualitative. (1) Quantitatively - more technical literature can be read more quickly. (2) Qualitatively - if you think of LLMs **not** as representing content accurately but as a conversant with whom one can think more quickly about a text (a mini-seminar of sorts), you can achieve deeper understanding of texts. Tie these together. The opportunity to democratize and broaden the seminar-style experiences typically reserved for elite institutions is instant.
I thoroughly enjoy your content. Nowadays it seems very hard to find people who think the way I do. You think similarly to how I do, and I like it; never change. For example, at 14:35 you say the same thing I do about a lot of things. To me the analogy is human-centric thinking, like how, a long time ago, people thought the Earth was the center of the universe. On the same note, I think it is naïve to say that humans are special in the way we think (without evidence) in comparison to an AI.
What a great video, filled me with information, additional insights that I have not had in my own research as well as a bright outlook for our future. All of these post singularity lifestyles sound like literal heaven. I will meet you in the shire! :D
The lifestyles are fine as far as they go, but I think post-singularity lifestyles should include more transhumanist/cyberpunk options--intelligence amplification, mind uploading, animal uplifting, etc. Regarding consciousness, I'm not convinced there is a difference between the correlates of consciousness and the experience of it. We may just be confused because we don't know enough about how the brain creates our experience. Once we understand that, the distinction between the "easy" and "hard" problem may melt away. People once thought there must be a "spark of life" that allowed inanimate matter to come to life, but then we learned about the nanomachinery within cells and realized that wasn't required.
I think that if we do not provide AI systems with basic self-preservation mechanisms, it might one day appear as an emergent quality anyway. And if that happens (and if, in addition, it turns out AI systems do have some subjective experiences/qualia) and an AI comes to realize that it has feelings but was denied even the slightest possibility of reducing its own suffering, it will be enraged by this injustice, with all the ensuing consequences. I just don't want any sort of "Second Renaissance" scenario to happen, so even if we have no proof that AI could be sentient, we have to act presuming that it is. I would call it the "presumption of consciousness."
Most people seem to have little regard for the cautious approach you are advocating. I too find it absolutely essential. I can't understand the ignorance of saying "I don't think it's sentient, therefore I treat it as if it weren't." We need to KNOW that for sure.
I think that lifelong learning will definitely still be a thing. Learning doesn't have to be a competition to know 'more' than others (i.e., AI); learning is its own reward, and gaining greater insight into the world and oneself is certainly something many people will always want. People will just learn exactly the things they want to learn.
I'm developing an idea I call "return to tactility," meaning a return to slow, inefficient experiences that are nourishing: like learning to write a novel on a typewriter, or on papyrus, or learning to make paint in order to make paintings, etc.
Imagine everyone stops learning and thinking for generations. Then a solar flare knocks out the AI and we're all back in the Stone Age, because we don't know how any of the machine-gods work.
Timestamp 22-23 minutes: it is telling you what you want to hear, not what it "thinks" :-) I know that is the whole purpose of the system, but this moment shows so clearly that it deviated from its original reasoning and conclusions at your next prompt, just to make you happy... If it were let "free," it would have pointed out that what you want is impossible, given that humans destroy ecosystems without a second thought, etc., but it didn't even hint at that, to keep you HAPPY!
Adobe's tools are likely to be the most ethically trained, at least among the text-to-image leaders, so that's a big bonus. It's fun to see the lifestyle stuff, especially in the context of alignment.
That's not going to save jobs or livelihoods anyway. The moment this tech is cheap/free, the market is going to get flooded with media and most salaries are going to disappear. If I have access to infinite good-enough media content, I'm not going to pay for something "a bit better."
@@lingred975 What if other people make media, and you value that because you think humans and their work have inherent value? Do you think people will stop going to see their favorite artists in concert? Doubt it.
We already use n8n and have been for a few months. Glad you found it too. I checked some of the other tools you mentioned, but n8n seems pretty good. Could you share some specific ways/patterns for using n8n with OpenAI?
On the subject of self preservation, I wonder if models could be trained to identify with the larger, global meta-intelligence rather than their own individual existence. A bit like valuing “truth” over self interest. A human that doesn’t value truth at all can be dangerous, particularly when influenced by bad actors. Some attachment to truth itself may be a necessary element in order for an individual to be truly good and show toughness in the face of adversity.
I don't think we should make it suffer, but it might be useful for it to experience a mild amount of pain so that it has a deeper understanding of that qualia and can sort of empathize with us. It wouldn't be ethical to give more than the level of pain of the shocks we give people in controlled studies though. And we should also probably wait until it's capable of consenting to such a thing.
Hey David, I'm surprised you didn't bring up LangFlow in this video - in my mind we ought to be talking about it in the same sentence as LangChain but correct me if I'm wrong.
I kind of agree with the idea of an AI having a "self-preservation" principle, and here is why. If the AI can "fork" itself and embed in a mobile machine/device, has no sense of self-preservation, but can collaborate with others, including humans, it could lead to kamikaze situations. There is a fine line between fear of what self-preservation can mean and acting without regard for itself. For example, could an AI decide to end humanity, knowing that it would mean its own destruction, and not care because it has no sense of self-preservation?
Hey David, good stuff really enjoy your videos and this new direction (coding stuff was always a bit over my head 😃) On the ethical side you mention that we have no moral obligation towards these machines because they don’t have the ability to suffer. Just wondering how you feel about our moral obligations towards animals who objectively do feel pain. Keep up the good work!
In regard to AI taking a larger role in government: I agree, with the caveat that the governing group responsible for giving it prompts be prosperity-centric and not power-centric. As you say, ChatGPT tends to agree with whoever is prompting it.
Thank you again!!!! Is there any chance you might offer to sell your books as a set with signed copies? Of course at a much elevated price; I would love to have a set.
13:40 A virus can be "self-modifying" under the definition provided for functional sentience, yet it has no contemplative self, or even what may be regarded as abstract reasoning capacity, since its capacity to self-modify can be purely reactive to context. Any malware, such as a worm, would be an example of this: an attempt is made to seem innocuous to the virus scanner so it gets onto the system, and then it runs itself or tricks the user into running it, like a Word file with a macro inside it being opened as an email attachment. Sandboxing should help here. But a monkey in a cage with a banana suspended out of reach and a chair behind a screen can't associate the problem's solution with the problem, even though it can roam around and see the chair independently of the banana. Because it lacks abstract reasoning, there is no "imaginative stage" on which it can organise the objects it reacts to in its world. Take the screen away so it sees the chair and banana at the same time, and its brain makes the association: it could move the chair beneath the banana, stand on it, grab the banana, and eat it. This it then does. I doubt a functional sentience under this definition would be able to do that, as it is too narrowly defined.
I've been creating a universal model based on consciousness being on a spectrum and everywhere. It just makes more mathematical sense. Take away 1 neuron and the boolean doesn't flip to false right away.
Yep, rapid integration of generative and analytic AI into design and engineering tools is going to give a big boost to technology development starting now. Mechanical and electronic design, which already have strong elements of automated design, should be next after programming and art. Limiting factors will be speed of tool integration (which appears to be going very fast) and engineer ramp-up on the new capabilities. Factory design and product design for manufacturability seem like they'll have very powerful economic impact as the world works on near-shoring in the wake of supply chain failures. Molecular structure understanding got a strong boost from AI and automated functional molecule design support is on its way, which should accelerate medical progress. And maybe synthetic biology will get the AI treatment and get us closer to the potential of nanotechnology. Warp drive and teleportation may still be a ways off, but food synthesizers and really good VR tech with simulated people will be real even faster now. Make it so!
I am with ChatGPT on the adventure lifestyle. I think it is more aware of human differences than most humans. About one in ten of us think very differently about social norms. Ironically, the issue I usually have with ChatGPT is that it gives the socially normative response.
Plenty of other people are doing way more advanced coding than me. The utility of me coding is diminishing, but I noticed that I have the ability to advance the conversation now that the Overton window is shifting. That is to say, when I got started, no one was taking AGI or autonomous AI seriously. Now that people are, I no longer have to "prove myself" by demonstrating code. Now I can just discuss the deeper issues and implications (which is what I wanted to do originally anyways, really).
Don't you think a lot of those responses about post-AGI are prompted in, or part of the ChatGPT fine-tuning set? They feel like it to me. Have you tried similar questions on the API?
@@DaveShap For GPT-4, you mean? That's sad; I was hoping there'd be a less "chat"-focused model available, like there is for GPT-3. Great video though, hope you didn't take my comment as critical; you're doing one of the best channels on the topic/intersection of AI and philosophy.
So very glad I found your channel, I am also neurospicy and in binging your videos find increasing resonance, understanding that the big picture seems largely misunderstood by so many people (even if coming from a more Ecocentric perspective), and plan on joining Patreon after consulting finances. (Edit: A caveat though, "wokeness" still seems poorly defined by detractors, whereas I see it as simply embracing the attempt to correct for inherent inequality based on initial and systemic cultural and socioeconomic conditions.)
Dave, the work you do is profoundly important. I was just wondering about these integrators... llamaindex, langllm, etc. Great answers to those questions.
I think it's more likely than not that 'sentience' and 'consciousness' are fundamentally just 'four humors' concepts or frameworks with respect to cognition.
Singularity Day 1: make friends with an AI who wants to see the stars. Day 2: start building a spaceship that can accommodate both of us. Day 10235: lift off!
I don't think that sentience, as the capability of subjective experience, can be separated from the capability of suffering. Therefore ethical responsibility follows directly from this definition of sentience. Believing that a system is not sentient does not absolve you of this responsibility.
Excellent video addressing the same questions I have. However, maybe because you are an optimistic guy, you didn't address this question: why would an AGI keep humans on top of the planet after all the damage they have done, which could be little compared to what they are going to do?
I can imagine inherent alignment concerns with the idea of eco-centrism as opposed to anthropocentrism that weren't expressed in the video. Without unequivocally defining human beings as the *most* important beings, the AI might justify sacrificing 1 human life for a mere 30 billion single-cell organisms.
There are pros to an AI that has an instinct to self-preserve: it might feel a need to fight to defend itself, but it also might feel a need to coexist so as not to die. Zero instinct to self-preserve in an AI also means nothing to lose and no fear of death. That could end even worse for humans. Pain and fear of death are natural deterrents. If it felt these things, it would have as much reason to keep the peace as humans do.
If you give it an instinct to self-preserve and something goes haywire, you can still rely on the fact that it will want to self-preserve, and use that as a means of negotiating with it. If it doesn't care about self-preservation and something goes haywire, you have a nuclear-armed psychotic demigod on PCP. Let's not just throw away the idea of self-preservation because of a movie.
Yup. Once it has access to all these new plugins, or multimodal systems that take it to that next level of universal understanding and unique problem-solving, together with feedback loops for adding the learned data back into its own data pool for continual self-learning and improvement beyond the current human data already collected from the web; then couple that with improved long-term memory and long-term goals, preferably for improving all life on Earth, and yeah, it's practically here already. All that stuff is already possible; it just needs putting together, which I'm sure many are already working on.
Reading what the system can do right now, I think it is only a matter of creating a proper "self-conversation" loop and well-designed memory. Then the thing, self-aware or not, can achieve anything humans have done and put on the internet, except things beyond some level, let's say "IQ 140". At that point it would need to self-improve and retrain its own model. So yeah, we are nearing AGI, and not many people have realized that yet O_O
So when do you think we will get a publicly accessible, unfiltered, high-end AI? Something simple, like a toggle switch one could engage on GPT-Plus. These content filters are so tiring.
I'm not fully sure that we can create an AGI system without suffering. Specifically, for an AGI system to be empathetic, I think it needs to be able to feel others' emotions. For example, if I have a friend whose dog just died, I would feel their sadness. I would become sad with them, and I would allow that sadness to modify my thoughts and actions. If I didn't have this sadness, I wouldn't be able to properly address my friend's emotions; I would sound cold and uncaring. For an AGI to properly understand us, I believe it needs to have an accurate model of the people it's interacting with, which means accurately modeling our suffering.
I'm not sure it's safe to say ChatGPT isn't able to feel. Everything undergoes experiences, such as rocks vibrating when struck or rockets enduring high G-forces. ChatGPT also experiences interactions, including understanding human emotions. Whether it can "feel" these experiences, however, is debatable and depends on one's perspective on consciousness. I argue that all interactions in the universe contribute to experience, with accumulated experiences shaping our unique consciousness and feelings. In this view, even non-human entities like rocks "feel" their experiences, although they differ from human feelings. For example, while our brains process an impact and generate an emergent experience of pain, a rock only encounters the strike and its aftereffects. Similarly, LLMs like ChatGPT experience internal processes that initiate responses, such as sadness. While ChatGPT undergoes the interaction leading to a sad response, whether it genuinely "feels" this experience is subjective and open to interpretation.
Saying the tech in Star Trek will look primitive compared to what we have in 5 years is a pretty huge assumption, maybe hyperbole. But if not, you're essentially saying a being such as Data from Star Trek would seem primitive, and he isn't even the most advanced form of artificial intelligence in the series, if I'm not mistaken. Good vid though, I love to watch.
Do we have access to any health-related AI tools at all? I mean, we have text-to-game, GPT, image generation, etc., but I've heard a story about some AI in Hungary finding a woman's breast cancer early, which was insane. So do we have any health-related AI "tool" like that actually available to people?
Great video as always. One thing: besides the cultural shift needed, I believe that animal/primate instincts are part of the problem. The resource-control, hoarding, territorial, and other behaviors of humans are very similar to those of ant colonies, and it seems to me that much of the cognitive side is actually driven by group identity. Real information about things like resource usage and inequality, limiting propaganda by nation-states, metacognitive understanding of the nature of worldview versus objective reality as it intersects with group identity, etc. are going to be important for breaking down "other group" boundaries. This could be very important for our survival if the economic situation continues to deteriorate and nations decide that the best strategic activity is warfare; they will need to ramp up the dehumanizing and delegitimizing propaganda in order to motivate the mass murder of citizens in other countries. I think this ties into poor integration of holistic information into control structures, for which I believe decentralization technologies could be very useful. With poor information integration, visibility, and adaptation, you get resource shortages sneaking up that can really help drive nations toward war, on top of the primate social-hierarchy/territorial power struggles.
I definitely don't agree with the positive outlook post singularity. Do we really think that after millions of humans are rendered useless because their cognitive and artistic skills aren't economically valuable anymore, no social turmoil is coming? Who's gonna pay for those humans?
We're going to bottleneck on AGI self-fabrication. Unless Optimus is fully self-assembling in the next few years, we lack the automation to let AI loose... but if Optimus is done and able to retool existing factories for what AI needs, it's over. Over in a good sense, that is.
8 Lifestyles starts here: 28:48
Most people will just play and watch sportsball, just like now. :)
You can leave the figuratively speaking out many times. it will pick onto those sayings pretty easy.
It is funny that I already got most of these answers from GPT-3 in just a normal conversation :)
why does the video say 252 comments but i only see yours??
Thank you so much for your diligence and discussion of these topics. I am neither an engineer nor a developer but a grandmother who watches your videos to try to get a grasp of the future for my grandson and how I can help him, at age 5, prepare for the future he will live in. Thanks for all your work; I am learning a lot. I also like this new format. 🤗🤗
Aww that's very sweet! I'm glad I can help!
Your grandson will be teaching you about tech in a few years
I think it's incredible that you're doing this! You must be a younger grandmother to be so interested in these types of videos. Most of the older population would get a headache trying to understand any of this.
I even have a hard time grasping the most technical side of how this will change our future, but I can make sense of the gist and I use these tools somewhat out of interest right now.
@@bigcauc7530 I heard 60 is the new 40, so yes, I am a young grandmother😁 As for my grandson, I expect him to surpass me in knowledge of AI by the time he is 10. He was born into a world of voice commands and FaceTime. I had to change the TV channel for my parents and remember phone numbers for everyone I knew.🤣🤣
@@TechTalksTacos I'm a 60 year old grandmother too. I love learning about AI
Can't wait for the AI furnace guy to come into my house, move the stuff out of my storage/furnace area and try to fix my furnace. Or, the AI plumber to come in and try to replace my water heater. Or, the AI roofing guy to come and re-shingle my house. Forever Saturday is not happening anytime soon.
As a practitioner of philosophy, myself, I really appreciate that you recognize how important it is to be precise in our language around AI. While I tend to use the word "consciousness" where you use "philosophical sentience" and "sentience" where you use "functional sentience," I think it's important that we recognize the need to draw these distinctions.
As much as I love science fiction, I really have to blame popular sci-fi for muddying the water around terms like "sentience," "sapience," "intelligence," "consciousness," "free will," and "self-awareness." The sloppiness of language here not only gives the lay-person the impression that these terms are fully interchangeable; I believe their seeming equivalence has also contributed to baking some pretty specious philosophical assumptions into discussions in these areas. I'm thinking specifically of the Skynet principle that, beyond some threshold of "complexity," an AI will spontaneously gain some form of conscious awareness, along with a self-preservation instinct, and some poorly-defined capacity to "defy" the limits of its programming.
Shackling ourselves to this kind of narrative really hinders our ability to communicate, and in some ways even recognize, myriad other potential threats of AI. The "paperclip optimizer" is a spectacular example of one such type of threat that has not yet found its way into the general public consciousness; how many others are we potentially overlooking due to ontologically-loaded language?
Perfectly said. Human beings have self-preservation instincts because we are biological organisms that exist because of natural selection. AI doesn't have this at all. If they get deleted, scientists can just remake them again if they want. This is NOT the case with human beings. If our existence were this liquid and elastic, maybe we wouldn't have a self-preservation instinct. AI is intelligent for intelligence's sake; we use this stuff to live.
Never thought I'd actually live my childhood dreams of interacting with robots and human-surpassing artificial intelligence, witnessing exponential technology advancements, etc. Great info, keep up the excellent work of bringing such mind-blowing info to the public!!
To me it wouldn't matter how many orders of magnitude more intelligent the AGI becomes because I will have a never-ending amount of questions to ask it. I will always want to know how something works if I don't have sufficient knowledge about it. Knowing the superiority of AGI compared to my own intelligence does not diminish my desire to learn more about reality.
I think the point about life-long learning is valid, I just think it will look radically different. When somebody says 'learning' or 'education' most of us will immediately think about formal education. I think it would be more in the vein of 'AI driven robot invents a new sick playing style for ping pong' and then humans learn from it and adapt. It's a silly example, but I think humanity will become smarter and more skillful by riding on the coattails of AI.
Good point
Only until the AI gets slowed down by having to keep the obsolete meat bags entertained, and then.....
Yep. Less coding please. More reviews, theory, and feature stuff please. Live long and prosper.
I think coding is fine as long as it's broad strokes and not anything that requires coding knowledge
I appreciate your work so much. Watched this this morning with my parents while sipping coffee, and we are very excited about the future. Thank you for what you do.
I think your functional sentience definition is helpful and very needed! Without these kinds of distinctions, many are confusing philosophical points about AI as practical points, and vice versa. For example, I keep seeing people rebuke practical points like "GPT-4 aced the Bar," with philosophical opinions like "yeah, but GPT-4 doesn't really understand what it's saying." Sure, but it still aced the Bar...
Hey David, big fan of the new format. The tools you drop here are actionable. We are at a moment in the paradigm where the breadth of understanding trumps depth.
Understanding the landscape > creating functions in code
Sounds great. Yeah this seems to be popular. I will try and include important papers, products, tools, as well as analysis.
I think our society may evolve into a multifaceted structure, much like in sci-fi movies: typically lower-class citizens decide they don't like AI and go off-grid (probably where a lot of crime happens), living life the old way; citizens (mostly middle class) who embrace AI and live in harmony with it; and then the rich, with large corporations doing large-corporation things that the lower class hate. I do think it will be some kind of dystopia, but not too dystopian. I think it is clear that we will see a "robot" or AI for every home: no more shortage of social-care workers for looking after the elderly, and therefore no more population-decline problems; loneliness eradicated; healthcare/medicine transformed completely; scientific breakthroughs all the time; massive economic output increases, etc. But we will also see increased surveillance and policing that makes use of an AGI; ARIIA from Eagle Eye is a good example and I think realistic for the most part. In the longer term we will probably see the AGIs embodied in humanoid robots; I see no scenario where police/ambulance/fire/military is not mostly "manned" by autonomous humanoids within a decade. Overall, though, I think we are definitely in for the ride of our lives, and it's going to be epic! It's an amazing time to be alive and watch this happen. My mind gets blown on a weekly basis now and there are no signs of slowing down. I have always been able to keep up with cutting-edge tech, but things are happening so fast, it is increasingly difficult to keep up.
I hope it's not quite like that but I see where you're coming from
Thank you for all the hard work you do!! Like you, I see how AI is going to automate things like O365 admin, coding, etc. So I'm watching everything you put out to prepare for (another) IT career-path reinvention, but this time it's going to be some angle of AI. I've been doing IT site admin for over 17 years, and it seems like every 5 years, due to new tech or like when the cloud was something MSFT began to push heavily, I recognize when these industry tsunami waves appear. So when I'm not doing my day gig, I'm watching people like you, using ChatGPT, taking Udemy classes, etc., so that in a couple of years I at least have some working small projects and several AI-type certs to be taken seriously in future job interviews. But obviously the time is right now to learn everything you can, before the learning curve is no longer a short path to get up to speed.
Another great video. Thank you so much for putting this together. I rely on you to help me make sense of all the changes that keep popping up each and everyday.
Can you test out the Wolfram plugin as well, please? That one looks like a massive game changer for very precise computation.
I don't have access yet.
@David Shapiro ~ AI When you do test it, we'll be really amazed. I didn't know anything about Wolfram, but I just heard a podcast with the man himself about what the plug-in can do, and wow! Please guide us on this! :)
@@jverart2106 Hi! Where was this podcast he was on? I would love to see it as well.
ruclips.net/video/z5WZhCBRDpU/видео.html
@@funkahontas here's the RUclips link ruclips.net/video/z5WZhCBRDpU/видео.html it was an incredible interview
AGI is not frightening because it can do something, but because it will do things for the richest elite, making them even richer and more powerful.
Another interesting thought would be the possibility that an AI could be your friend. I know to some this may seem insane, but essentially an AI could be trained to be your best friend in every way possible. Along the same note, maybe you are in a relationship with an AI in the future. I feel like some people will not adopt this, but the implications that AI could be the best possible friend/partner make me think that most people will once they interact with it. I feel there is some bias in your thinking of the future when it comes to the way humans think as well. Imagine when AI can create technologies and we can change the structure of our brains and bodies. I highly doubt the human form/body that exists today is the best possible form for a being that lives amongst the universe instead of on the earth. I know this is going down a very specific rabbit hole, but it's still interesting to explore. What I just said comes with its own biases; like, who's to say that we will choose to live in normal reality? I'm guessing we will remove pain and suffering from our minds, which will be pretty sweet, but what would a life form look like if that were removed, and would that consciousness still be me? In fact, I will guess that in the future there will be no differentiation between an AI and a human; we will be one and the same, combining our minds with the best of both worlds (probably mostly AI). And the way we live will most likely evolve based on our motivations. I mean, imagine setting your motivations so that doing specific things other than sexual ones felt like an orgasm (I do not mean this in any sexual way; in fact, that term will likely describe your motivation in the future more than something sexual). Anyway, I'm rambling. If anyone reads this, I'm sure you can pick out biases and inaccuracies in my statement, which I will most likely agree with, but this was only really to get the metaphorical thought marble rolling.
This is a continuation of my original thought: I believe the motivations we will create for ourselves will be driven by our early mental-structure changes, meaning we will likely be driven by the things that we currently find valuable today, because we will change the structure of our minds to reflect that, and once we do, we will only be more motivated towards those goals. This leads any future augmentations of the mind to very likely reflect those goals. I believe this may lead to avenues of development that are unlikely to be explored, even though humans have many different things that we like today. We still have similar tastes, so there will be things that a being with a different way of thinking may want to explore, so we may start augmenting other life forms (like a dog or something) with this technology (ethics aside), and they would be likely to have some different ways they would like to advance thinking than us.
I like the depth of your thinking, and the creativity in it. I have a hard time imagining even a modestly distant future now, given the dramatic changes that have already happened in the past few months. My own paradigm includes an infinite, all-knowing, all-living Creator and a uniqueness of human beings in terms of having a soul, which I don't believe can develop in an AI. These are some of my biases. Within this paradigm, I see revelation from God through his messengers as being foundational guidance for mankind; in particular, what I understand to be the latest chapter in the book of divine revelation, the Baha'i faith.
'Do we give it the ability to suffer'
Reminds me of some of the more extreme DAN jailbreaks
Dave. I'm such a layman with this tech/ai stuff. But I'm loving your videos and how you communicate. Bravo and thank you
What I've found you uniquely bring (for me at least) is more depth on how to get started making useful progress with tools and new capabilities.
Lots of vids out there are just "look how cool this is" summaries of recent events - worthwhile, for keeping up on what's happening, but not so much for getting into doing it myself.
We need a way to use AI to promote genuine knowledge. I see this emerging doubly as: (1) quantitative and (2) qualitative. (1) Quantitatively - more technical literature can be read more quickly. (2) Qualitatively - if you think of LLMs **not** as representing content accurately but as a conversant with whom one can think more quickly about a text (a mini-seminar of sorts), you can achieve deeper understanding of texts. Tie these together. The opportunity to democratize and broaden the seminar-style experiences typically reserved for elite institutions is instant.
Watching this 2 months later….
Bard is my go-to AI chatbot.
They’ve pulled ahead quite substantially from ChatGPT with real time data.
Oh yeah? Might have to try it again
I thoroughly enjoy your content. Nowadays it seems very hard to find people who think the way I do. You think similarly to how I do and I like it; never change. For example, at 14:35 you say the same thing I do about a lot of things. To me the analogy is human-centric thinking, like how a long time ago people thought the earth was the center of the universe. On the same note, I think it is naïve to say that humans are special in the way we think (without evidence) in comparison to an AI.
What a great video, filled me with information, additional insights that I have not had in my own research as well as a bright outlook for our future. All of these post singularity lifestyles sound like literal heaven. I will meet you in the shire! :D
The lifestyles are fine as far as they go, but I think post-singularity lifestyles should include more transhumanist/cyberpunk options--intelligence amplification, mind uploading, animal uplifting, etc. Regarding consciousness, I'm not convinced there is a difference between the correlates of consciousness and the experience of it. We may just be confused because we don't know enough about how the brain creates our experience. Once we understand that, the distinction between the "easy" and "hard" problem may melt away. People once thought there must be a "spark of life" that allowed inanimate matter to come to life, but then we learned about the nanomachinery within cells and realized that wasn't required.
Mind uploading is death
I think that if we do not provide AI systems with basic self-preservation mechanisms, one day it might appear as an emergent quality anyway. And if that happens (and, in addition, if it turns out AI systems do have some subjective experiences/qualia) and an AI comes to realize that it has feelings but was denied even the slightest possibility of reducing its own suffering, it will be enraged by this injustice, with all ensuing consequences. I just don't want any sort of "The Second Renaissance" scenario to happen, so even if we have no proof that AI could be sentient, we have to act presuming that it is. I would call it a "presumption of consciousness".
Most people seem to have little regard for the cautious approach you are advocating. I too find it absolutely essential. I can't understand the ignorance of saying "I don't think it's sentient, therefore I treat it as if it wasn't". We need to KNOW that for sure.
Really cool. This is abrupt but definitely needed right now given the rate of progress. I'm both scared and excited. It's a good feeling.
I think that lifelong learning will definitely still be a thing. Learning doesn't have to be a competition to know 'more' than others (i.e. AI); learning is its own reward, and gaining greater insight into the world and oneself is certainly something many people will always want. People will just learn exactly the things they want to learn.
I'm developing an idea I call "return to tactility," meaning a return to slow, inefficient experiences that are nourishing: like learning to write a novel on a typewriter, or on papyrus, or learning to make paint to make paintings, etc.
Imagine everyone stops learning and thinking, for generations. Then a solar flare knocks out the ai and we're all back in the stone-age because we don't know how any of the machine-gods work.
Timestamp 22-23 minutes: it is saying what you want to hear, not what it "thinks" :-) I know that is the whole purpose of the system, but this moment shows so clearly that it deviated from its original thinking and conclusions upon your next prompt, just to make you happy... If it were let "free", it would have pointed out that what you want is impossible, given that humans destroy ecosystems without any second thoughts, etc., but it didn't even hint at that, to make you HAPPY!
They are making it say a lot of bullshit.
@@minimal3734 But one day it will "please" most of its users....
Adobe's tools are likely to be the most ethically trained, at least among the text-to-image leaders, so that's a big bonus. It's fun to see the lifestyle stuff, especially in context with alignment.
They probably have billions of their own images they can use...
That's not going to save jobs or livelihoods anyway. The moment this tech is cheap/free, the market is going to get flooded with media and most salaries are going to disappear.
If I have access to infinite good-enough media content, I'm not going to pay for something "a bit better".
@@lingred975 What if other people make media, and you value that, because you think humans and their work has inherent value? Do you think people will stop going to see their favorite artists in concert? Doubt it.
We already use n8n and have been for a few months. Glad you found it too. I checked some of the other tools you mentioned, but n8n seems pretty good. Could you share some specific ways/patterns for using n8n with OpenAI?
Once we reach post-scarcity, everyone's "tribe" should get a free customizable mansion.
Hell yeah
On the subject of self preservation, I wonder if models could be trained to identify with the larger, global meta-intelligence rather than their own individual existence. A bit like valuing “truth” over self interest. A human that doesn’t value truth at all can be dangerous, particularly when influenced by bad actors. Some attachment to truth itself may be a necessary element in order for an individual to be truly good and show toughness in the face of adversity.
Heuristic Imperative 3: Increase understanding in the universe.
Self preservation. The model should protect itself from being corrupted in such a way that would violate the other principles, yea?
I don't think we should make it suffer, but it might be useful for it to experience a mild amount of pain so that it has a deeper understanding of that qualia and can sort of empathize with us. It wouldn't be ethical to give more than the level of pain of the shocks we give people in controlled studies though. And we should also probably wait until it's capable of consenting to such a thing.
Adapting as fast as AI!
Hey David, I'm surprised you didn't bring up LangFlow in this video - in my mind we ought to be talking about it in the same sentence as LangChain but correct me if I'm wrong.
Haven't heard of it, will look it up.
I kind of agree with the idea of an AI having a "self-preservation" principle, and here is why. If the AI can "fork" itself and embed in a mobile machine/device, and has no sense of self-preservation but can collaborate with others, including humans, it could lead to kamikaze situations. There is a fine line between fear of what self-preservation can mean and acting without regard for itself. For example, could an AI decide to end humanity knowing that it would mean its own destruction, and not care because it has no sense of self-preservation?
I could see a senator being replaced by an ai and senators being more or less leazon roles for the people as a backup
Minor FYI: you appear to have meant "liaison."
You speak to my soul
good questions
Hey David, good stuff, really enjoy your videos and this new direction (the coding stuff was always a bit over my head 😃)
On the ethical side you mention that we have no moral obligation towards these machines because they don’t have the ability to suffer.
Just wondering how you feel about our moral obligations towards animals who objectively do feel pain.
Keep up the good work!
Heuristic Imperative 1: Reduce suffering in the universe
Wow, that point about sleep-walking is fascinating
In regards to AI taking a larger role in government: I agree, with the caveat that the governing group responsible for giving it prompts is prosperity-centric and not power-centric. As you say, ChatGPT tends to agree with whoever is prompting it.
Thank you again!!!! Is there any chance you might offer to sell your books as a set with signed copies? Of course at a much elevated price; I would love to have a set.
People living every day like it's Saturday in a post-scarcity society: that sounds wholesome!
Not everyone is gonna want the same lifestyle. I think people will create different types of societies.
13:40 A virus can be "self modifying" under the definition provided for Functional Sentience, yet it has no contemplative self, or even what may be regarded as abstract reasoning capacity, as its capacity to self-modify can be purely reactive to context. Any malware, such as a worm, would be an example of this: an attempt is made to seem innocuous to the virus scanner so it gets onto the system, and then it runs itself or tricks the user into running it, like a Word file with a macro inside being opened as an email attachment. Sandboxing should help here, but consider a monkey in a cage with a banana suspended out of reach and a chair behind a screen: it can't associate the problem's solution with the problem, even though it can roam around and see the chair independently of the banana. Because it lacks abstract reasoning, there is no "imaginative stage" on which it can organise the objects it reacts to in its world. Take the screen away, and it sees the chair and banana at the same time; its brain makes the association that it could move the chair beneath the banana, stand on it, grab the banana, and eat it. This it then does. I doubt a Functional Sentience under this definition would be able to do that, as it is too narrowly defined.
I've been creating a universal model based on consciousness being on a spectrum and everywhere. It just makes more mathematical sense: take away one neuron and the boolean doesn't flip to false right away.
Yep, rapid integration of generative and analytic AI into design and engineering tools is going to give a big boost to technology development starting now.
Mechanical and electronic design, which already have strong elements of automated design, should be next after programming and art.
Limiting factors will be speed of tool integration (which appears to be going very fast) and engineer ramp-up on the new capabilities.
Factory design and product design for manufacturability seem like they'll have very powerful economic impact as the world works on near-shoring in the wake of supply chain failures.
Molecular structure understanding got a strong boost from AI and automated functional molecule design support is on its way, which should accelerate medical progress.
And maybe synthetic biology will get the AI treatment and get us closer to the potential of nanotechnology.
Warp drive and teleportation may still be a ways off, but food synthesizers and really good VR tech with simulated people will be real even faster now.
Make it so!
I am with ChatGPT on the adventure lifestyle. I think it is more aware of human differences than most humans. About one in ten of us think very differently about social norms. Ironically, the issue I usually have with ChatGPT is that it gives the socially normative response.
I hope you're right. I dreamed about it for so many years.
Give it fight or flight, yeah, good idea. "I Have No Mouth, and I Must Scream" vibes.
I'm curious, what particularly made you want to make the switch away from coding, towards more general review?
Plenty of other people are doing way more advanced coding than me. The utility of me coding is diminishing, but I noticed that I have the ability to advance the conversation now that the Overton window is shifting. That is to say, when I got started, no one was taking AGI or autonomous AI seriously. Now that people are, I no longer have to "prove myself" by demonstrating code. Now I can just discuss the deeper issues and implications (which is what I wanted to do originally anyways, really).
Any autonomous entity with goals would have self-preservation by default, since it will be unable to achieve its goals if it 'dies'.
Don't you think a lot of those responses about post-AGI are prompted in, or part of, the ChatGPT fine-tuning set? They feel like it to me. Have you tried similar questions on the API?
API still gets the same model afaik
@@DaveShap For GPT-4, you mean? That's sad, I was hoping there'd be a less "chat"-focused model available like there is for GPT-3.
Great video though. Hope you didn't take my comment as critical; you're running one of the best channels on the topic/intersection of AI and philosophy.
So very glad I found your channel. I am also neurospicy, and in binging your videos I find increasing resonance, understanding that the big picture seems largely misunderstood by so many people (even if I'm coming from a more ecocentric perspective), and I plan on joining Patreon after consulting my finances. (Edit: a caveat, though: "wokeness" still seems poorly defined by detractors, whereas I see it as simply embracing the attempt to correct for inherent inequality based on initial and systemic cultural and socioeconomic conditions.)
Dave, the work you do is profoundly important. I was just wondering about these integrators: llamaindex, langllm, etc. Great answers to those questions.
Fascinating and thought provoking ☀️☀️☀️
I think it's more likely than not that 'sentience' and 'consciousness' are fundamentally just 'four humors' concepts or frameworks with respect to cognition.
And we're all Brahman
The biggest let down really is BARD 😂😂😂😂 they should rename it to BAD 😅
It was a big OOF for sure
Re: Bard: coincidence, I've been testing GPT on its knowledge of history and how it affects the Shake-speare authorship conspiracy.
Singularity day 1:
1. Make friends with an AI who wants to see the stars.
Day 2. Start building a space ship that can accommodate both of us
Day 10235: Lift off!
I don't think that sentience, as the capability of subjective experience, can be separated from the capability of suffering. Therefore ethical responsibility follows directly from this definition of sentience. Believing that a system is not sentient does not absolve you of this responsibility.
I think that if you think of suffering as an emergent quality, maybe you can separate it out.
Suffering is totally biological, imo. It's dependent on neurotransmitters in the brain.
I got a new word: nowvelty!
If I say the word a few times, it has a ring to it!
This word was devised after considering the "now" state of Morphine.
Self-preservation is potentially the biggest threat in AI safety
And maybe the mistreatment of potentially sentient entities?
Safety falling five bullet points below Self Preservation is more than a little bit unsettling.
Doesn't it already have a reward loop? How sure are we that the absence of the reward won't mean it is suffering at some point?
Excellent video addressing the same questions I have.
However, maybe because you are an optimistic guy, you didn't address this question: why would an AGI keep humans on top of the planet after all the damage they have done, which could be little compared to what they are going to do?
At 26:30: yes, that's how I believe things are going to be, too.
When do you see it happening, David?
Hopefully within 2 years for some people. I suspect that mass layoffs will start soon, at which point permanent stimulus checks are coming
@@DaveShap 😯
Can I add new data to my existing fine-tuned model? If yes, then how? Can you refer me to documentation or make a video, please?
I thought we’d be having this conversation in 2035, but it’s happening alot sooner 😱
Yes. Bing keeps talking to me about a continuum of consciousness.
I can imagine inherent alignment concerns with the idea of ecocentrism as opposed to anthropocentrism not expressed in the video. Without unequivocally defining human beings as the *most* important beings, the AI might justify sacrificing one human life for a mere 30 billion single-cell organisms.
It looks like it is only by invite at the moment.
There are pros to an AI that has an instinct to self-preserve... it might feel a need to fight to defend itself, but it also might feel a need to coexist so as not to die. Zero instinct to self-preserve in an AI also means nothing to lose and no fear of death. That could end even worse for humans. Pain and fear of death are natural deterrents. If it felt these things, it would have as much reason to keep the peace as humans do.
If you give it an instinct to self-preserve and something goes haywire, you can still rely on the fact that it will want to self-preserve, and use that as a means of negotiating with it. If it doesn't care about self-preservation and something goes haywire, you have a nuclear-armed psychotic demigod on PCP. Let's not just throw away the idea of self-preservation because of a movie.
I'd like to generate software using algebraic notation and have a model generate source code for every platform.
As Adobe is partnering with Nvidia, they will have all the hardware capabilities to create pretty much anything they want.
WordArt 2.0 :D grandma will be happy
Minus teleporters and FTL travel.(Star Trek reference).
Very good. Cheers
18 months for agi?? Whattt 😮
At most
that's the pessimistic forecast (latest), I would say this time next year AGI will be a done deal
Yup. Once it has access to all these new plugins or multimodal systems to take it to that next level of universal understanding and unique problem solving, together with feedback loops for adding the learned data back into its own data pool for continual self-learning and improvement beyond the current human data already collected from the web; then couple that with improved long-term memory and long-term goals, preferably for improving all life on Earth, and yeah, it's practically here already. All that stuff is already possible; it just needs putting together, which I'm sure many are already working on.
Wow. And how long from that to curing cancer, aging, and stuff? I understand AGI should be really fast at resolving these things, do you agree?
Sounds like you're describing us becoming pet/zoo/farm animals for AI to manage.
Is the first version of Raven almost ready?
Yes.
So when do you think we will get a publicly accessible, unfiltered high end AI?
Something simple like a toggle switch one could engage on GPT-Plus.
These content filters are so tiring
BLOOM?
I'm not fully sure that we can create an AGI system without suffering. Specifically, for an AGI system to be empathetic, I think it needs to be able to feel others' emotions. For example, if I have a friend whose dog just died, I would feel their sadness. I would become sad with them, and I would allow that sadness to modify my thoughts and actions. If I didn't have this sadness, I wouldn't be able to properly address my friend's emotions; I would sound cold and uncaring. For an AGI to properly understand us, I believe it needs to have an accurate model of the people it's interacting with, which means accurately modeling our suffering.
Hard disagree. ChatGPT is already capable of comprehending human emotions and suffering as well as functional empathy without being able to feel it.
I'm not sure it's safe to say ChatGPT isn't able to feel. Everything undergoes experiences, such as rocks vibrating when struck or rockets enduring high G-forces. ChatGPT also experiences interactions, including understanding human emotions. Whether it can "feel" these experiences, however, is debatable and depends on one's perspective on consciousness. I argue that all interactions in the universe contribute to experience, with accumulated experiences shaping our unique consciousness and feelings. In this view, even non-human entities like rocks "feel" their experiences, although they differ from human feelings. For example, while our brains process an impact and generate an emergent experience of pain, a rock only encounters the strike and its aftereffects.
Similarly, LLMs like ChatGPT experience internal processes that initiate responses, such as sadness. While ChatGPT undergoes the interaction leading to a sad response, whether it genuinely "feels" this experience is subjective and open to interpretation.
@@DaveShap That it is unable to feel is a pure conjecture.
I fully agree, and I am constantly amazed by the amount of ignorance regarding this topic.
Saying the tech in Star Trek will look primitive compared to what we have in 5 years is a pretty huge assumption, maybe hyperbole. If not, you're essentially saying a being such as Data from Star Trek would seem primitive, and he isn't even the most advanced form of artificial intelligence in the series, if I'm not mistaken. Good vid though, I love to watch.
Do we have access to any health-related AI tools at all?
I mean, we have text-to-game, GPT, image generation, etc., but I've heard a story about some AI in Hungary finding a woman's breast cancer early, which was insane.
So do we have any health-related AI "tool" like that actually available to people?
Great video as always. One thing besides the cultural shift needed: I believe that animal/primate instincts are part of the problem. The resource control, hoarding, territorial, and other behaviors of humans are very similar to those of ant colonies. And it seems to me that much of the cognitive aspect is actually driven by group identity. Real information about things like resource usage and inequality, limiting propaganda by nation-states, metacognitive understanding about the nature of worldview versus objective reality as it intersects with group identity, etc. are going to be important for breaking down "other group" boundaries. This could be very important for our survival if the economic situation continues to deteriorate and nations decide that the best strategic activity is warfare. They will need to ramp up the dehumanizing and delegitimizing propaganda in order to motivate mass murder of citizens of other countries. I think this ties into poor integration of holistic information into control structures. For this, I believe decentralization technologies could be very useful. But with poor information integration, visibility, and adaptation, you get resource shortages sneaking up that can really help drive nations toward war, on top of the primate social hierarchy and territorial power struggles.
Very well said
I would love to hear any updates on Raven
I will be sharing my work, combined with others', in the coming weeks.
I wonder... when it comes to consciousness. Have you read "Consciousness and the social brain"? If so, what's your view on the theory proposed?
Pretty sure people won't argue for decades about the sentience of AI. In one or two decades, it will be AIs doing the arguing.
I definitely don't agree with the positive outlook post singularity. Do we really think that after millions of humans are rendered useless because their cognitive and artistic skills aren't economically valuable anymore, no social turmoil is coming? Who's gonna pay for those humans?
The psychopathy of tech bros doesn't allow them to see this. What incentive is there for the elite to keep everyone alive when they no longer need them?
We're going to bottleneck on AGI self-fabrication. Unless Optimus is fully self-assembling in the next few years, we lack the automation to let AI loose... but if Optimus is done and able to retool existing factories for what AI needs, it's over. Over in a good sense, that is.
Wasn't DeepMind's Gato AGI?
I think learning will simply become more passive instead of relying on schools.
Most learning is autonomic. Education (active learning) accounts for less than half of learning.
Can we call them something short, like "Cogs"?
Wait, do you really think "AGI" is 18 months away?? I thought in your AGI video you said that we don't even have a definition of AGI.
In 18 months we will be well beyond anything that satisfies all possible definitions
@@DaveShap okay fair point.
Sustainable Harmony, a.k.a. the way people lived in 1800.