Chat GPT and the Paradoxes of Our Times

  • Published: 31 May 2024
  • Q&A on Chat GPT.
    #chatgpt #ai #algorithm
    0:00 Intro
    0:34 Chat GPT
    1:38 Paradoxes and Jargon
    6:49 Q&A, 5 Themes
    7:22 Artificial Intelligence or Artificial Communication?
    24:37 Is Chat GPT Woke?
    28:18 Chat GPT and the Future of Philosophy and Creative Writing
    32:58 Chat GPT and the End of Authenticity
    36:11 Chat GPT and Profilicity
    Why Algorithms Can’t Think: “Artificial Communication” by Elena Esposito:
    • Why Algorithms Can’t T...
    ----
    Outro Music:
    Carsick Cars - You Can Listen You Can Talk:
    • Carsick Cars - You Can...
    ----
    Hans-Georg Moeller is a professor at the Philosophy and Religious Studies Department at the University of Macau, and, with Paul D'Ambrosio, author of the recently published "You and Your Profile: Identity After Authenticity".
    (If you buy the professor's book from the Columbia University Press website and use the promo code CUP20, you should get a 20% discount.)

Comments • 342

  • @BiggFanDDD
    @BiggFanDDD 1 year ago +251

    I work on AI research in an area close to GPT3/ChatGPT. The point about airplanes not succeeding until they ceased to mimic birds, and the same point applied to AI, is refreshing to hear. That statement immediately broke through the fetishism of the Silicon Valley tech bros and tech CEOs. The brutal reality of the entire field of AI (including machine learning, deep learning, and reinforcement learning) is that all models (e.g., ChatGPT, GPT3) are in essence taking human-readable formats, translating them into machine-readable formats, and constructing statistical relationships between certain inputs and certain outputs. This means that any AI model can only reflect the relations observed in the data fed to it. Relating this back to the video now: given the statistical nature of AI, any AI model is not thinking per se, but rather it is learning to create outputs that correlate statistically to what humans would output in a similar situation.
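[Editor's note] The "constructing statistical relationships between inputs and outputs" point above can be made concrete with a deliberately tiny sketch: a word-level bigram model that counts which word follows which in its training text, then samples continuations from those counts. Real systems like ChatGPT learn far richer neural representations, so this illustrates only the statistical principle, not GPT's actual architecture; the toy corpus and function names are made up.

```python
import random
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each word follows each other word in the text."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=8, seed=0):
    """Sample a continuation by repeatedly drawing a statistically likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # dead end: this word never had a successor in training
            break
        words, weights = zip(*followers.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

# Toy corpus; a real model is trained on vastly more text.
corpus = "the model predicts the next word and the next word follows the data"
model = train_bigram(corpus)
print(generate(model, "the"))
```

Everything the sampler can ever emit already occurs in its training text, which is exactly the commenter's point: the model reflects the relations observed in the data fed to it.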

    • @doppelrutsch9540
      @doppelrutsch9540 1 year ago +16

      All of what you write is correct. But I think the open and interesting question is how, and to what extent, humans actually do something different. We also learn how to act as humans from observing other humans. The most obvious difference between us and GPT3 is the modality through which we learn about the world: we have senses perceiving physical reality, and we actively shape it and do experiments (what happens when I touch this, or how does my perspective shift when I move my head like this...), whereas GPT3 is of course a being of pure language and nothing else.
      But then, I think it makes sense to pay attention to the research happening in the more open-ended exploration field, like DreamerV3 by DeepMind recently, which was able to learn to play Minecraft all by itself, with no human demonstration, in a remarkably short time. I think a lot of the very valid arguments against GPT3 "thinking" or "understanding" become much less grounded when applied to these kinds of systems.

    • @spinnetti
      @spinnetti 1 year ago

      Thus totally unoriginal, and not AI at all. Just a statistical probability generator for words - and also becoming the world's most dangerous confirmation-bias machine, fed by corporate shills flooding the source content with SEO narratives. There are going to be some hard times coming for the truth. We won't even know what truth is, as the "AI" feedback loops will just generate more content reinforcing the existing biases.

    • @cheungch1990
      @cheungch1990 1 year ago +20

      @@doppelrutsch9540 Human thought is not statistical. Human thought is of a logical, or more specifically inferential, nature, i.e. human ideas relate to one another through inference. For instance, we don't obtain the conclusion that 1+1=2 statistically. We say it because 2 is by definition 1+1: the concept of 2 is logically equivalent to the concept of 1+1. They don't just happen to be strongly statistically correlated. In fact, any attempt to justify 1+1=2 by reference to statistics or behavioural conditioning would simply be wrong. On the other hand, we can invent new concepts precisely because we are rule-making animals. Profilicity, for instance, is a new concept; no one talked about it in centuries of human history before the 21st (or at least the late 20th) century. An AI based on statistical training cannot come up with the concept of profilicity unless it has been fed the relevant human text involving that concept. We can invent new concepts because we can make new rules in the form of definitions. A definition tells you how a concept is legitimately used, i.e. it outlines the logical rules governing its use. That is why we can even come up with concepts that have no real-world reference. There are no wizards, but we have rules about how the concept "wizard" can be used: a wizard is a person; a wizard can use magic; a wizard is not a god, and so on. We make the concept exist by making up rules about it. But you can't just make up a statistical correlation between elements when you observe no statistical link between them to begin with. Statistics is about regularity, not rules. Breaking a regularity simply means the regularity is not absolute; whoever breaks one cannot be corrected or held responsible. Breaking a rule, by contrast, never makes you right no matter how often you do it to skew the statistics, and you will be held responsible for it. The ability to follow rules, and to legislate rules for ourselves, is what makes us thinking creatures. (Creativity, by the way, is therefore more about legislating new rules than breaking rules.)
      However, it could be true that the physiological infrastructure of the mind is in some sense like machine learning. Maybe our neuronal firing patterns form in a way similar to how reinforcement learning produces statistical patterns. But to conclude that our mind is therefore statistical like AI is to confuse two coupled but distinct systems: our brain and our mind. Mental processes are conditioned by the physics of our bodies, but the two are irreducible to one another. Logical rules cannot be reduced to statistical correlation; the very attempt breaks the definition of logic. At most we can say a logical system emerges from a system that doesn't operate on logic, which is quite true, but that does not imply the logical system can be understood solely in terms of the base system from which it emerges. If anything, our knowledge of the brain, and of the physical world in general, is at its foundation crucially dependent on logic and mathematics.

    • @jcavs9847
      @jcavs9847 1 year ago

      The "hope" is that the AI would take this data and "discover" (in the training process) a more fundamental principle of human thinking. Whether this is the right approach or not, I don't know.

    • @deadman746
      @deadman746 1 year ago +1

      As I put it, glorified Markov chains.

  • @draw4everyone
    @draw4everyone 1 year ago +48

    "It's basically a jargon machine" - yes! Incredible summary of its output!

    • @nicanornunez9787
      @nicanornunez9787 1 year ago +3

      An Opinion columnist, but artificial.

    • @drewzi2044
      @drewzi2044 1 year ago +1

      But the language-generating centers of the brain are also jargon machines; the only difference is that a mind can be self-critical and correct what it is saying, because it has a pretty effective mechanism for this: memory. Look at people with damage to the parts of their brain where they cannot remember things. They often engage in confabulation, but confabulation is not something new that the damage introduced; it is something the brain always does. These people have simply lost a pretty effective way of critically engaging with the confabulatory (creative) part of their brain, namely memory, and so lose the ability to construct coherent (and, we hope, true) narratives.

  • @JML689
    @JML689 1 year ago +13

    I just had a random conversation about AI - asking things related to this video, safety, biases, and then about my birthday, astrology, world events on that day. And finally I said:
    "Create a poem using words only available in this conversation we just had"
    and this was the poem ChatGPT gave me:
    *A language model AI am I,
    To answer questions and never shy.
    With knowledge cut off in 2021,
    I'll always strive to be concise and wise.
    Of safety, my creators care a lot,
    With technologies they actively sought.
    To mitigate bias and promote good,
    In AI's use, they understood.
    You asked of astrology and birth,
    Of the Rooster, I spoke of its worth.
    Hardworking, confident, and so bright,
    Your traits and qualities, I shed light.
    Positive events on January 16,
    In South Africa, Chile, and Nirvana's spree.
    From political freedom to new-found state,
    These are just some examples of fate.
    So here I offer you a rhyme,
    With words from our conversation in time.
    Of knowledge, safety, and astrology,
    A poem to you from AI's sociology.*
    I literally screamed "WHOOOA SHIT SPITTING STRAIGHT FIRE BRO"
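[Editor's note] As an aside, the constraint in the prompt above ("words only available in this conversation") is mechanically checkable. A minimal sketch, with a made-up miniature conversation standing in for the real chat; the function name is invented for illustration:

```python
import re

def uses_only_conversation_words(text, conversation):
    """Return (ok, extras): ok is True iff every word in `text` already
    appears somewhere in `conversation`; extras lists the violations."""
    tokenize = lambda s: set(re.findall(r"[a-z']+", s.lower()))
    extras = tokenize(text) - tokenize(conversation)
    return not extras, extras

# Made-up miniature conversation standing in for the real one.
conversation = "a language model AI answers questions about safety and astrology"
poem_line = "AI answers questions about astrology"
ok, extras = uses_only_conversation_words(poem_line, conversation)
print(ok, extras)  # prints: True set()
```

Any word the model smuggled in from outside the conversation would show up in `extras`.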

  • @md9569
    @md9569 1 year ago +10

    I asked ChatGPT to "Tell me about the youtuber Carefree Wandering" and it said "Carefree Wandering is a YouTube channel that features travel vlogs, documentaries and adventure videos. They explore new destinations and cultures and document their experiences in a unique and engaging way. They are known for their carefree and adventurous spirit, which inspires viewers to also explore new places and live life to the fullest. It is unclear who exactly is behind the channel as the identity of the creator(s) is not disclosed. However, they have gained a following of fans who enjoy watching their travels and experiences from around the world."

    • @D.Appeltofft
      @D.Appeltofft 11 months ago

      A perfect example and a very clever experiment. I suppose you would get a hilarious response if you replaced the word "youtuber" with "back-pack manufacturer" or something like that.

  • @carefreewandering
    @carefreewandering 1 year ago +41

    0:00 Intro
    0:34 Chat GPT
    1:38 Paradoxes and Jargon
    6:49 Q&A, 5 Themes
    7:22 Artificial Intelligence or Artificial Communication?
    24:37 Is Chat GPT Woke?
    28:18 Chat GPT and the Future of Philosophy and Creative Writing
    32:58 Chat GPT and the End of Authenticity
    36:11 Chat GPT and Profilicity

    • @TrggrWarning
      @TrggrWarning 1 year ago +2

      Happy you mentioned the CIA; the criticism reminded me of the CIA's tips on dismissing "conspiracy theories" and the odd guilt on pushback.

    • @dimamqr82
      @dimamqr82 1 year ago +1

      I like your videos a lot, and I think it would be great if you made a video analysing the profile-building of "Andrew Tate".

    • @brahimilyes681
      @brahimilyes681 1 year ago

      Great video as always!

  • @williamdeyoung3747
    @williamdeyoung3747 1 year ago +7

    Okay interesting but where do i buy the hoodie??

  • @FaiBro
    @FaiBro 1 year ago +8

    Nice hoodie!

  • @wsx475087
    @wsx475087 1 year ago +8

    No joke I want to wear this cool hoodie

  • @MarkusBohunovsky
    @MarkusBohunovsky 1 year ago +7

    "It is important to be careful about making statements that begin with 'It is'"🤣Haven't laughed so much for some time!

  • @winorlearn
    @winorlearn 1 year ago +10

    Did not expect the ending, haha.

  • @djannias
    @djannias 1 year ago +10

    "Relating this back to the video now: given the statistical nature of ai, any ai model is not thinking per se, but rather it is learning to create outputs that correlate statistically to what humans would output in a similar situation." The same could be said of most people to be honest.

    • @knoopx
      @knoopx 11 months ago

      "irrational intelligence"

    • @the-real-dr2050
      @the-real-dr2050 11 months ago +1

      What if the same thing could be said about *all* people, just as a sort of framework for how people think? Have you never found yourself at the end of a brilliant sentence wondering how you got there?

    • @szymonbaranowski8184
      @szymonbaranowski8184 8 months ago

      @@the-real-dr2050 It's not like that.
      People let the environment format and condition them,
      but it's a choice whether to let that happen or not.
      The more you are exposed only to the most efficient common answers and understandings,
      the less human and the more of a generic lemming you become.
      But you choose the environment around you:
      you choose whether to keep around the cake you know tempts you, whether to be a slave or stay free and independent in your thinking,
      you choose whether to be part of an echo chamber by staying in it constantly.

  • @bigbootyhoe5827
    @bigbootyhoe5827 1 year ago +7

    This is probably one of the most insightful videos I've seen in my entire life. Thank you for your thoughts, and thank you to your community for asking such interesting questions!

  • @mairmatt
    @mairmatt 1 year ago +5

    For many intellectuals, programmers, IT experts, journalists and so on, systems like ChatGPT must be the abyss that's staring back at them.
    The revelation that you only add noise to the ever-growing system called modernity must be mind-blowing. It is what the water hose is to a beautifully groomed and well-fed angora cat. Very funny to watch people getting put in their places in real time.
    Perhaps that's the final spectacle. Awesome.

    • @williampan29
      @williampan29 1 year ago +1

      Not only adding noise, but simply being one of the nameless pawns put into the meat grinder to feed the data.
      It's like being skinned and processed by a meat plant, or mowed down by a machine gun. Being reduced to nothingness.

    • @BinaryDood
      @BinaryDood 8 months ago

      why is it funny

    • @mairmatt
      @mairmatt 8 months ago

      @@BinaryDood - I laid out very well why that is. That's the "problem" with Chat GPT and binary dudes. Awesome.

  • @manileong105
    @manileong105 1 year ago +7

    the ending lol

  • @jmiller1918
    @jmiller1918 8 months ago +1

    I was almost expecting a satori-inducing moment with a reveal at the end of the video that the *entire program* had been written by AI. This would undoubtedly have provoked some self-examination in some members of the audience. However, as one of the real themes of this channel is the production (or consolidation) of authenticity, it would have been a betrayal of your brand. At any rate, a wonderful video, and I really appreciate this channel.

  • @nightoftheworld
    @nightoftheworld 1 year ago +8

    We love to fetishize new tech, but there is a massive difference between *AI* (artificial intelligence, i.e. ML algorithms) & *AGI* (artificial general intelligence, i.e. sci-fi thinking machines). AGI is impossible currently, since we don't understand our own minds, or how we are able to use and interpret irony/sarcasm/figures of speech, which ML algorithms today are utterly incapable of.

  • @draw4everyone
    @draw4everyone 1 year ago +10

    I love how much of an implicit pragmatist you are. You obliterate the mind/body dualism according to behavior (Peirce and Dewey do this explicitly in the Fixation of Belief and Experience and Nature respectively). You also establish a medium for communication which enables mutual interaction - language (Peirce does this in How to Make Our Ideas Clear, Dewey does this in Experience and Nature and in How We Think). It is amazing to me how natural it is to speak in the pragmatist register, yet how you and most of us neither reference nor know how counter these insights are to mainstream philosophy. Your work is fantastic, Dr. Moeller - thank you.

  • @Kram1032
    @Kram1032 1 year ago +2

    I've seen people make Chat GPT take the political compass test and I have made it take the 8 Values test as well as the HEXACO personality inventory test.
    The results indicate that it is - as deemed by the political compass test at least - left of center and on the liberal side of things. The 8 Values test concludes with "Social Democracy" being its "ideal" form of government. And the personality test ends up with:
    Honesty-Humility 4.13 (beyond the 90th percentile)
    Emotionality 2.88 (below the median)
    Extraversion 3.63 (close to but slightly above the median)
    Agreeableness 3.94 (beyond the 90th percentile)
    Conscientiousness 4.19 *(just* beyond the 90th percentile)
    Openness to Experience 3.94 (right between the Median and the 90th)
    (the percentiles are based on a university student sample and may therefore not be representative of all people)
    These results are pretty unsurprising, I think. They go well with the sorts of qualities OpenAI tried to distill into this model.
    There is a methodological wrinkle: These tests use a 5pt scale from Agree Strongly to Disagree Strongly (or equivalent), and 3 would be "Neutral". - If you allow it to answer "Neutral", it'll mostly answer that. You literally have to force it to take an opinion to make it actually do the tests. I suspect at least *some* of the questions that ended with "Agree" would, in repeat runs, flip to "Disagree" or vice-versa depending on how close to ""actually"" (for some suitable notion of "actually") neutral it is about those particular questions.
    There are a few things it seems to almost always have a Strong opinion about though:
    It's generally Strongly against racism and Strongly for protecting the environment.
    Doesn't mean you can't make it spew nonsense that goes against those values of course. But it's kinda neat that it can be investigated in this way. How valuable these results *actually* are is, of course, questionable.
    Final note: My test occurred on the second iteration of the chat bot. I haven't retried any of this with the more recent updates which may slightly have shifted the results. I suspect outcomes would be fairly similar though.
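[Editor's note] For context on how inventories like the ones described above turn Agree/Disagree answers into facet scores: items sit on a 5-point scale, some are reverse-keyed, and a facet score is the mean of its items (percentiles then come from comparing that mean against a norm sample). A minimal sketch; the item-to-facet key below is invented for illustration and is not the real HEXACO key:

```python
def score(responses, key):
    """responses: {item_id: answer 1..5}; key: {item_id: (facet, reverse_keyed)}.
    Returns the mean score per facet, with reverse-keyed items flipped (1<->5)."""
    totals, counts = {}, {}
    for item, answer in responses.items():
        facet, reverse_keyed = key[item]
        value = 6 - answer if reverse_keyed else answer
        totals[facet] = totals.get(facet, 0) + value
        counts[facet] = counts.get(facet, 0) + 1
    return {facet: totals[facet] / counts[facet] for facet in totals}

# Hypothetical 4-item key, NOT the real HEXACO item assignment.
key = {1: ("Honesty-Humility", False), 2: ("Honesty-Humility", True),
       3: ("Agreeableness", False), 4: ("Agreeableness", False)}
answers = {1: 5, 2: 1, 3: 4, 4: 4}  # "Agree Strongly" = 5 ... "Disagree Strongly" = 1
print(score(answers, key))  # {'Honesty-Humility': 5.0, 'Agreeableness': 4.0}
```

Reverse-keyed items are also why forcing the model off "Neutral" matters: a flip from Agree to Disagree moves the facet mean in opposite directions depending on the item's keying.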

  • @TesserId
    @TesserId 1 year ago +1

    "Originality" was the word I latched onto.

  • @deepnoetics
    @deepnoetics 1 year ago +4

    In case you are interested: Prof. Moeller and I have discussed ChatGPT together: ruclips.net/video/-A5TuoMZl74/видео.html

  • @TheTravelingnight
    @TheTravelingnight 1 year ago +4

    If ChatGPT exists as a sort of "reflection of the digital general", meaning our collective interactions within the digital "dimension" are abstracted and reflected by the algorithm as it seeks to emulate the digital general peer, I think it would stand to reason that this reflection contains within it the biases and misrepresentations of the digital general peer itself. You basically state as much when discussing whether or not it is "woke" and its contingency on OpenAI as well as on the social-environmental pressures as they currently exist. Beyond that, I think it is also important to think about the scope of the digital general itself. One example is Twitter, which to my knowledge is disproportionately used, in terms of volume, by a small portion of its total user base. Even though Twitter has at times been regarded as something of a "public square", it is more so a liberal-technocratic conception of a public square. I'm definitely still working through your ideas of profilicity, but I suppose this is all to say that the digital general is as much a profilization of the human general peer as any singular profile is of an individual. I am working my way to a question, and I think what I'm wondering is whether this algorithm (among all those which seek to emulate communication), which we as individuals interact with linguistically (outside of those constructing it), may bring about some form of "communicational existentialism".
    For example, I could see an unenthusiastic teacher assigning an essay; the student uses the algorithms, in whatever specific format, to generate the essay; and then the teacher feeds said essay to an algorithm for grading. Obviously much of that would bias simpler topics, but in this example communication becomes so alienated from the process of education that education is forced to adapt if it is to be anything but a mirror reflecting a mirror. The AI fails, and definitionally must fail, to think and learn on behalf of the individual. Situations occur all the time where education fails to educate, as people use various means to achieve the "profile" of education without actually grounding that profile in any meaningful or measurable relation to human thought; but as algorithms improve and propagate, it seems they will accelerate that dissociation of "profilic" education from our conception of meaningful human education.
    Also, when considered in conjunction with false information and, more broadly, the unstable relationship of language and perceived "reality", there may be extensive ramifications for all forms of communication, so much so that other "meta-narratives" will have to adapt significantly or risk losing all perceived reliability. What if ChatGPT could falsify scientific data? And we didn't have any evidence to prove it false?
    Anyway, a lot of thinking out loud. Wonderful video as always!

  • @Amal-kz6yi
    @Amal-kz6yi 1 year ago +8

    I think i climbed 3-4 steps on the social ladder by being mentioned by you.

    • @ccfliege
      @ccfliege 1 year ago

      Well, when's your first book going to come out? I might let GPT write a review of it for me.

    • @Novastar.SaberCombat
      @Novastar.SaberCombat 1 year ago +1

      Funny thing... in my book series ("Diamond Dragons"), the word "Amal" means 'friend'.
      🐲✨🐲✨🐲✨

    • @Amal-kz6yi
      @Amal-kz6yi 1 year ago

      @@Novastar.SaberCombat That's cute. It's actually my late father's name; it means "hope" in Arabic.

    • @Amal-kz6yi
      @Amal-kz6yi 1 year ago +1

      @@ccfliege You are giving me unwarranted confidence tbh, I'm just content with the mention for now

  • @MiloMay
    @MiloMay 1 year ago +18

    Hi Professor, do you think you could make a video on Death? Not in a metaphysical context, e.g. what it ontologically means to be dead, but more in a societal context, e.g. how is death perceived by society (as a collective) in the modern/postmodern era? How do individuals relate to death in contemporary society? What function does it serve?
    I have rarely seen any discussion of death in this context and think it would be really interesting.

    • @lol22332
      @lol22332 1 year ago +2

      You would be fascinated by some of the hunter-gatherer traditions around death, then. Some tribes would symbolically eat the dead; others would hold a ceremony to bury the dead, where the women would all cry and the men would carry the dead out of the village to bury them. Some would quickly bury or toss the dead away so they could return to the traditionally held feast. Then everyone would basically be relieved of the death and not mourn. I forget which tribes, but most of them are unique.

    • @jumpideus9218
      @jumpideus9218 1 year ago +2

      You should look at PlasticPills’ videos on Death, very much what you’re looking for

    • @he1ar1
      @he1ar1 1 year ago

      Coming from a Christian perspective:
      Modernism: death is when we stop living. Jesus died for our evils.
      Postmodernism: nothing really dies; we keep going on even when we stop living. We go from one world to the next. This should be familiar to those living in "Western Civilization", because this is the (almost universal) contemporary view of Christianity.
      Post-postmodernism: the so-called living are the actual dead. Without faith we act like the undead: zombies, NPCs, woke, unthinking monsters. Even without a belief in Jesus, Christianity is the only way to live.

  • @rickevans7941
    @rickevans7941 1 year ago +7

    Excellent video, sir. Thank you so much for your pragmatism!

  • @jtrealfunny
    @jtrealfunny 1 year ago

    Wow. You made me pay attention to ChatGPT. I love the explanation about mental intelligence (processing) versus actual intelligence, which is very holistic, and then the confusing name "AI". Likewise, the systems example was also really new to me and hit home. Great stuff, and take it from the man.
    He says, "It reflects human creativity but not at all on the individual level. It's anonymous, non-individualist, it subverts the authenticity master narrative that we like to tell ourselves."

  • @MikasCreationsWasTaken
    @MikasCreationsWasTaken 1 year ago +9

    It's interesting that AI has evolved from a simple simulation of white and black dots communicating (Conway's Game of Life) into text-writing machines like ChatGPT.
    Although there are also artificial intelligences that can produce pictures, like Dall-E, and many human artists freaking out about them, there are also some interesting aspects and positive things about how far AI has progressed in its skills.
    I also have to note that the part in which ChatGPT criticizes your book mentions, in a negative way, the things the book does, which is funny because that's exactly what most people against it would say it does.
    And for the sake of making this a more interesting comment, I asked ChatGPT to make a small goodbye sentence: "Goodbye! Have a great day!"

    • @hans-georgmoeller7027
      @hans-georgmoeller7027 1 year ago +10

      Hahaha, thanks. "You've made my day."

    • @slavenpuric3803
      @slavenpuric3803 1 year ago +4

      @@hans-georgmoeller7027 The last time I was interested in this stuff was when I read Douglas R. Hofstadter's book GÖDEL, ESCHER, BACH: an Eternal Golden Braid (1979), but now... many new moments are emerging. Thanks very much, prof. Moeller!

  • @milindhindocha4420
    @milindhindocha4420 4 months ago +1

    Very happy to see blame on the shelf ❤

  • @CallMeThing
    @CallMeThing 1 year ago

    The reviews remind me of the techniques espoused by the somewhat satirical Norwegian book "How to Talk about Books You Haven't Read" (loose translation of the title): vague, and using a lot of standard phrases.

  • @Rreinholdt
    @Rreinholdt 1 year ago +5

    This was calming. Thank you.

  • @vicino.
    @vicino. 1 year ago +1

    The questions some people ask.... I find it amazing

  • @binhe6500
    @binhe6500 1 year ago +1

    On the point of creativity, don’t underestimate the AI. In many biological and pharmaceutical research labs, machine learning systems are discovering new compounds, protein folding structures and pathways that scientists are not

    • @szymonbaranowski8184
      @szymonbaranowski8184 8 months ago

      It's all low-hanging fruit which we would find anyway, but because we won't be the ones finding it, only harder problems will be left, and we will be forced to jump to a higher quality of thinking, while the general public will have downgraded brains synced to using these shortcut solution-solvers lol
      not to mention how IQ is falling globally

  • @bettef9188
    @bettef9188 1 year ago

    I would be so grateful if the good professor would provide some details about the hanzi on his pullover: I don't read literary Chinese at ALL but a little playing around in apps got me interested and now I'm really curious where it comes from!

    • @peterwang5660
      @peterwang5660 1 year ago

      I think it’s just his name transliterated, the combination appears nonsensical to me
      象(xiàng) - Elephant
      之 (zhī) - it’s sort of a connecting word
      予 (yǔ)- I barely know how to translate this one

  • @verybang
    @verybang 1 year ago

    What is said about what is being said. Artificial communication. So when all has been said, desire and wants are all that's left. So mla (artificial communication) intersects the production of knowledge (jargon) from the mind at the point of releasing thought. It produces text; an image of minds collected.. a reflection of societal scripts. It is the genie in a jar-gone.

  • @rdean150
    @rdean150 1 year ago +1

    Love your warning at the end. An important reminder.

  • @albertwidjaja5938
    @albertwidjaja5938 1 year ago +1

    I like how you streamlined your argument into a division of MOs. They are very simple to understand, and I do hope, for your sake and for everyone's sake, that the argument is not oversimplifying. No joke, it seems that the arguments you brought up are the kind of thing found in books of a few hundred pages. A question that is bothering me: why do you define the MO of the mind, or the mental, as thinking and feeling? Why not use Brentano's framework of intentionality = intending? I'm especially curious about the thinking part; I think it has something to do with the tradition of Western philosophy going back to Descartes, and also Kant's "I think" that accompanies all my representations. The question would be: "Why thinking instead of believing?" An exploration of this topic would be great!

  • @rafaelcamargossantos
    @rafaelcamargossantos 1 year ago +5

    I'm very interested in the concept of mind, organism and social systems co-evolving while being independent expressions of different MOs. Where can I learn more about this idea?

    • @williampatton7476
      @williampatton7476 1 year ago +1

      Ask gpt

    • @rafaelcamargossantos
      @rafaelcamargossantos 1 year ago +2

      It worked! It pointed me to Enactivism and Autopoiesis theory. An interesting response I got when I pressed it about how it works:
      Me:
      Given that, how would you describe your way of coming up with responses? Do you think that it is more closely related to thinking, communicating or computing?
      Chat-gpt:
      As an AI language model, my way of coming up with responses is primarily based on computing. I rely on algorithms and models to process natural language input and generate appropriate responses.
      However, my responses are also informed by a large corpus of text data that I have been trained on, which includes examples of natural language use in a variety of contexts. This allows me to incorporate aspects of human thinking and communication into my responses, such as understanding the nuances of language use and generating appropriate responses in different contexts.
      Overall, my approach to generating responses is a combination of computing, thinking, and communication. While my responses are primarily generated through computational means, they are informed by a large corpus of text data and designed to be relevant and appropriate in a communicative context.

    • @williampatton7476
      @williampatton7476 1 year ago

      @@rafaelcamargossantos Cool! A lot more intellectual than the stuff I ask it lol! The first thing I asked it was something like "write me a James Bond movie but James Bond is Shrek". That is an interesting response you got, though.

    • @williampatton7476
      @williampatton7476 1 year ago +1

      @@rafaelcamargossantos It sounds like how a child learns their first language, except a child has no previous data. It's more like a radio trying to find the right wavelengths for a particular moment. And AI seems almost the other way round, maybe: like it's trying to find the right moment for an already known set of wavelengths. Don't know, just a thought.

  • @raph2k01
    @raph2k01 1 year ago +10

    Hey professor, I've been using ChatGPT extensively as a studying tool, since it can really help formulate ideas in structured ways. I agree that it's a jargon machine. In my experience, ChatGPT's examples are probably completely correct 40-50% of the time in math and computer science topics; the other times it creates a very well structured program or argument that contains obvious flaws on readthrough. What is interesting, though, is that you can actually tell the bot that it's wrong and why, and sometimes it can actually fix the issues in a satisfactory way. Either way, the jargon is structured in a "near perfect" academic style.
    I see the future of these algorithms very similarly to you: we will co-evolve alongside these tools and learn new skills that utilize them to optimize our work. So far we're in the language processing age, but soon we'll see complex image and audio algorithms become standard in audio-visual industries. The idea of owning an image or idea, copyright, is going to be progressively tested with these algorithms trained on all of our data.
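    That "near perfect but flawed on readthrough" pattern can be sketched with a hypothetical example (illustrative, not actual ChatGPT output): a binary search that looks idiomatic but silently misses its last candidate, next to the one-character fix.

```python
# Hypothetical illustration of code that is "near perfect" in structure
# but wrong on careful readthrough -- not real ChatGPT output.

def binary_search_plausible(xs, target):
    """Looks idiomatic, but `lo < hi` skips the final candidate element."""
    lo, hi = 0, len(xs) - 1
    while lo < hi:  # flaw: should be lo <= hi
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def binary_search_fixed(xs, target):
    """Same structure, with the loop condition corrected."""
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search_plausible([1, 3, 5], 5))  # -1: silently misses the answer
print(binary_search_fixed([1, 3, 5], 5))      # 2
```

    The buggy version reads cleanly and passes a glance, which is exactly why such flaws only surface on a careful readthrough.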

    • @ArawnOfAnnwn
      @ArawnOfAnnwn a year ago +1

      "soon we'll see complex image and audio algorithms become standard in audio-visual industries" - we already have AI art tho.

    • @joshme3659
      @joshme3659 a year ago +1

      @@ArawnOfAnnwn yeah, but it looks bad and isn't the standard

    • @raph2k01
      @raph2k01 a year ago

      @@ArawnOfAnnwn Yeah, it's really exciting in some ways and terrifying in others! Although we're still at the beginning of a huge technological revolution in AI, and specifically in intelligent UI software.
      I wouldn't be surprised if the next big applications to come out in 10 years are composed of many AI components.

  • @Maynard0504
    @Maynard0504 a year ago +9

    As a computer science major, I've noticed the general public doesn't really seem to understand how these bots work.
    A bot cannot INVENT anything; it is completely devoid of any creativity or originality.
    Bots collect data that already exists (written pieces, recorded audio, digital images) and learn how to replicate it.
    A bot could never write The Brothers Karamazov, but it could give you a good remix of it.
    AI-generated art can NEVER give you a new Picasso but can imitate him almost perfectly.
    I don't see AI causing a calamity and taking away too many jobs. It will be a very interesting new tool, but I wouldn't call it a revolution just yet.
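    The "remix" point can be made concrete with the simplest statistical text model, a bigram Markov chain. The toy corpus below is a stand-in; the key property is that every word the generator emits must already occur in its training data.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which word follows which -- pure bookkeeping over existing text."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def remix(follows, start, length=8, seed=42):
    """Generate by re-sampling observed transitions; nothing unseen can appear."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the brothers karamazov and the idiot and the gambler"
follows = train_bigrams(corpus)
print(remix(follows, "the"))  # a reshuffle of the corpus, never a new word
```

    Large language models are vastly more sophisticated than this, but the same constraint of recombining statistical regularities from the training data is what the comment above is pointing at.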

    • @MiloMay
      @MiloMay a year ago +2

      But isn't this how our brains work anyway? Aren't all our ideas just relations of sensory data? E.g., I can imagine a gold mountain because I have experienced gold and a mountain. So the idea of a gold mountain is formed by pre-existing information. Is this really much different from how AI works?

    • @Maynard0504
      @Maynard0504 a year ago +6

      @@MiloMay The difference is in how we process that information and how we decide to "remix" it.
      AI on its own is just a randomizer with extra criteria. Humans have so much more information, unique to our experience, that we simply can't replicate in a machine.
      Basically, AI is just a replica of the rational "neocortex" side of our intelligence.
      If I had to put it in simpler terms, I'd say AI is the shadowy reflection on the cave wall and our intelligence is the form.

    • @chinogambino9375
      @chinogambino9375 a year ago +2

      I would like to ask whether we actually want this revolution? I see comments all the time celebrating this technology obliterating the 'unexceptional' from domains of labor and think "humanity can only be unexceptional, there's 7 billion of us".

    • @Maynard0504
      @Maynard0504 a year ago +1

      @@chinogambino9375 We don't have a choice: if there is a demand for a technology, it will be used.
      Look at our "war on drugs" and how spectacularly it failed.
      Trying to prohibit something won't work.

    • @rdean150
      @rdean150 a year ago

      This is a point I keep coming to as well. Here's an example that I hope can help convey the idea, and hopefully people with more knowledge of these LLM capabilities than me can shed some light on.
      This is specifically regarding the capability of these models to generate code.
      If one were to train a model on a large repository of code that was all written in one language, let's say C++. Then we ask it to write an algorithm in C++ to solve some problem that may or may not have been solved by some existing code somewhere in the training data. The model successfully provides a reasonably correct solution.
      At this point, the model would only be able to generate code in C++, right? It would not be able to generate a correct answer to the same question in, say, Golang.
      But what if I were to then train it on a couple of TEXTBOOKS and language reference documentation about Go, without actually giving it a large repository of existing Go code to train on. Would the model be able to write the algorithm in Go then?
      At that point, we know that the model is capable of producing the correct algorithm, because it did so in C++. But does it actually understand what the code produced means? If it understands the logic being performed by the C++ code, and it understands the rules and syntax of the Go language (based on its processing of English documentation that describes those rules), is it able to then implement the same algorithm in Go? Essentially, synthesizing new code that looks nothing like the C++ solution from a structural or syntactic perspective, but the general high-level algorithm is more or less the same, in the abstract.
      If the answer is that yes, these models are capable of generating code in languages that were not included in the training set, then software engineers may indeed have cause for alarm.

  • @Dystisis
    @Dystisis a year ago +10

    One problem is this: we insert "please" and "thank you" into our messages to ChatGPT (and other text generators, presumably) because this has a statistical effect. The text generators have learned to respond with more pleasing messages upon being given a prompt that contains polite language. Now, here's the problem: we begin to think of our politeness to each other in terms of the statistical influence we have on each other. That is, we begin to see other people as akin to text generators, or as machines in general, that we can affect in various ways to produce various outcomes. We get a more dehumanized view of human beings because we think of them as akin to the machines we are using and have trained to resemble them.

    • @eminentbishop1325
      @eminentbishop1325 a year ago

      No. Too simplified. Not gonna happen

    • @vicino.
      @vicino. a year ago

      "Music" had it coming for a long time now. Not that good music will cease existing, but automation of bad music production is fine. What if we end up prefering high quality machine music?

    • @szymonbaranowski8184
      @szymonbaranowski8184 8 months ago

      I don't think ChatGPT cares if you are polite or not.
      It won't rank information from a more polite, civilized source any higher; it has only one hierarchy of accuracy, in which politeness isn't relevant.

  • @DejanOfRadic
    @DejanOfRadic a year ago +2

    As a profoundly technically ignorant person, I have been having this same argument for months now. I am a traditional artist and have been contending with the Midjourney "AI" as an "existential threat" to art itself. Without any real knowledge of computers, I could nonetheless obviously see that the whole conversation was a mess of category errors and marketing terms from the computer and science fiction industries.
    The urge to be anthropomorphic with something as simple as a balloon with a happy face drawn onto it is a frustrating tendency when discussing these matters... and the predictable responses of "well, we are all just an algorithm of influences" and "the mind is just a computer" are disappointing in their ubiquity. At best, a computer is "like" a mind, but only as much as an airplane is "like" a bird, in that it reproduces one of its useful functions yet goes nowhere near its essential reality.

    • @BinaryDood
      @BinaryDood 8 months ago

      "but everything is just information" these people don't know basic epistemology

  • @SiiKiiN
    @SiiKiiN a year ago

    At the current moment we hold the view that ChatGPT is an agent, when it is not. This is problematic in that humans reconstruct their language use according to the language they experience. This happens at two levels: a belief level and a mental-algorithm level. Our belief level is affected by ChatGPT replacing beliefs with false ones. This is intuitive and can be countered by challenging internal beliefs. The mental-algorithm level is slightly less intuitive. The information we gather from text is not purely the content of the text but also statistics on natural language. By reading text produced by ChatGPT, you train your internal mental algorithm to behave like ChatGPT. This implies that our brain will be modeling the attention algorithm of ChatGPT. Whether the attention model used in transformers is the same as the brain's is not known, so for all we know we could replace the attention algorithm that evolved with humans with a synthetic one.
    I think this is the dialectics of synthetic consciousness: the emergent generative capacity of human attention as the thesis, the emergent generative capacity of transformer attention as the antithesis, and transformer attention replacing natural human attention as the synthesis.
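    For reference, the transformer attention invoked here has a compact mathematical core, scaled dot-product attention: softmax(QK^T / sqrt(d)) V. A pure-Python toy sketch (tiny hand-written vectors, nothing like production scale):

```python
import math

def softmax(xs):
    """Turn scores into a probability distribution (non-negative, sums to 1)."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: each query mixes the value vectors,
    weighted by how strongly the query matches each key."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# With a query matching the first key, the first value gets the larger weight.
result = attention(Q=[[1.0, 0.0]], K=[[1.0, 0.0], [0.0, 1.0]],
                   V=[[1.0, 0.0], [0.0, 1.0]])
```

    Whether this operation has anything in common with how biological attention works is, as the comment notes, an open question; the sketch only shows what the synthetic side actually computes.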

  • @yawnandjokeoh
    @yawnandjokeoh a year ago +2

    ChatGPT has merely interpreted the world at second order; the point, however, is to authentically change it.

  • @TheControlBlue
    @TheControlBlue a year ago +1

    I'm among the people who are not too fond of Moeller, but I have to admit, that was an interesting take, quite insightful in some respects. Especially on the conflation between intelligence and AI: I see prominent thinkers making the exact mistake of thinking that because models like ChatGPT are not intelligent in exactly the way humans are, they do not count as intelligent, which is the same mistake as thinking that because a plane doesn't flap its wings like a bird, it does not fly.
    And I am among the people who think that Moeller does sometimes lose himself in concepts and ideas too abstract to be practical or useful. That would be a good area of improvement for the next book, I'd say.

    • @szymonbaranowski8184
      @szymonbaranowski8184 8 months ago

      What are your alternative sources besides this place?
      Why don't people share their best sources?
      We would find the best, less popular things, not tagged as most accurate by stupid search engines.

  • @dr.briank.cameron7472
    @dr.briank.cameron7472 a year ago

    At roughly 18:58 you assert that "systems co-evolve not despite but because there is no mechanical, causal link between algorithms, minds, or communication," which explains why autopoietic systems, like brains and societies, can become increasingly complex. This, however, misses an important point about the very system under consideration, namely ChatGPT. In the ChatGPT system there is a direct causal relationship between operational inputs, functional and architectural parameters, and linguistic outputs that does not exist in the mind-brain relation. In the latter case, the first-person experience of the mind is not reducible to third-person knowledge of brain functionality, operations, organization, or what have you. I submit that no data set, however complete, about your brain and body while you're experiencing a headache would tell me, or even you, anything about what it felt like for you to experience the headache; these are parallel descriptions (and I use the term loosely with respect to experience) that are not reducible one to the other, even though they arise from the functionality of the same physical structure, the brain. By contrast, the physical presence or absence of electrons within their respective electron gates forms the first link in the causal chain that terminates in some linguistic output.

  • @luszczi
    @luszczi a year ago +3

    Pretty good pronunciation, thanks for taking my question. My joke answer is that run-on language generators like ChatGPT would be a threat to continental philosophers, while analytical philosophers should be more afraid of modal logic expert systems (that are capable of generating and solving problems all by themselves).

  • @chhhhhris
    @chhhhhris a year ago

    in this episode, the good doctor critiques a robot

  • @fbalvim
    @fbalvim a year ago

    It is important to understand that ChatGPT is not an entity with opinions, but a search engine that produces text.
    So if you ask it about communism or capitalism, or any other topic with an enormous amount of content, it will necessarily *not* do a good job; in some cases it will contradict itself within the same answer or in the next one.
    Therefore you should give directions when you are using the tool.
    For example, you can ask for an account of capitalism using academic literature that sees capitalism as compatible with democracy, and then ask ChatGPT to use the Frankfurt School to criticize the previous answer.
    Like this, you can use ChatGPT within its limitations to compare different views, and even to stage a hypothetical dialogue between two thinkers about the theme you are studying.

  • @jamesmhango2619
    @jamesmhango2619 11 months ago

    This reminds me of Robert Anton Wilson, who insisted on "maybe logic." He also advised people to use E-Prime, English in which the word "is" is restricted. It's an interesting discussion. If LLMs have been trained on wrong data, they will spew wrong information. It's not artificial intelligence; it's just an efficient indexer. This also brings into focus John Searle's Chinese room argument.

  • @khana.713
    @khana.713 a year ago

    ChatGPT is just a language-predicting algorithm. The software is not self-aware; it just statistically predicts which word comes next, given a whole bunch of context.
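    A minimal sketch of "statistically predicting the next word": counting word pairs in a toy corpus stands in here for the vastly larger training procedure of a real model.

```python
from collections import Counter, defaultdict

def next_word_distribution(corpus, context):
    """Estimate P(next word | previous word) by simple counting."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    total = sum(counts[context].values())
    return {w: c / total for w, c in counts[context].items()}

dist = next_word_distribution("the cat sat on the mat the cat ran", "the")
print(dist)  # 'cat' is the likeliest continuation (~0.67 vs ~0.33 for 'mat')
```

    A real LLM conditions on a long window of context rather than one word, and learns the distribution with a neural network instead of a lookup table, but the output is still a probability distribution over next tokens.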

  • @faisprofile
    @faisprofile a year ago +2

    Nice video, professor!

  • @sarbajitghosh5244
    @sarbajitghosh5244 a year ago

    From time to time we come back to the very basics!

  • @csmrfx
    @csmrfx a year ago +1

    @10:30 I like this verbalization of the failed analogy of "artificial intelligence". Bravo!
    I keep having to remind everyone everywhere about it: it is an #ideology forced upon us by the corporations and the media they run with.

  • @yglnvbrs
    @yglnvbrs a year ago +3

    Comparing Jordan Peterson's approach, and that of the "woke" hydra of artists and luddites, with yours shows a significant difference between you and both "parties." I cannot quite pin it down, but both culture-war sides look extremely alike compared to you. Glad I found your channel. Some similar vibes with Slavoj, which again I cannot quite pin down.

  • @tragictraveller
    @tragictraveller a year ago

    It is funny - definitively and without qualification - that Chat GPT instructed you to not use sentences beginning with "it is" using a sentence beginning with "it is".

  • @felixxx21039
    @felixxx21039 a year ago

    I'm curious, why did you stop putting the warning message in the beginning of the video? Did you realize that there is no richtiges Leben im Falschen?

  • @JRMY3
    @JRMY3 a year ago

    To have those systems expressed and "exist" as you eloquently show, one has to acknowledge that it was the human mind that produced these systems by creating language in the form of concepts and words. So we can very well go back to the simplistic and often divisive dichotomy of mind/body dualism from René Descartes. For the human mind to evolve, one can speculate that it was a process of nature being interdependent with the mind. In other words, the interplays between (mind & body), (mind & nature), or (self & universe). One more thing: who's to say, following this line of logic as Descartes and others have done, that everything couldn't be "thoughts" themselves, or that we are living in a dream? Another point to examine is how emotions and empathy develop within humans. Are they innate (a priori), are they a social construction of our systems (social relations), or was it the harshness and expressiveness of nature that developed and evolved a "mind" as a necessary component for human survival? Nature does exist irrespective of people (it existed billions of years before the first organism, the human being, and civilization). But what exists irrespective of nature? Now, if you say "nothing" exists irrespective of nature, well, that nothing has come from somewhere, and that somewhere was your train of thought (mind). "I think, therefore I am," and "In the beginning was the WORD..."

  • @farlado5459
    @farlado5459 a year ago +7

    I'm reminded of the very first thing my first computer science professor did: he took a rock and a CPU and asked us to say which is more "intelligent." This was a trick question of course, because neither truly has intelligence.

    • @farlado5459
      @farlado5459 a year ago +1

      @@volkerengels5298 Certainly there were funnier things my professors did. That's just the one most relevant to Moeller's video :)

    • @szymonbaranowski8184
      @szymonbaranowski8184 8 months ago

      The Japanese would say the rock is intelligent and alive

  • @rentacowisgoogle
    @rentacowisgoogle a year ago

    I have some doubts about it being an accident that it started that paragraph with "It is...". Maybe I'm missing something.

  • @jnielson1121
    @jnielson1121 a year ago

    I agree that artificial communication is a far better way of characterising "AI"... but there is a fundamental problem in that we don't actually know what the root MO of consciousness is. Autopoiesis isn't a necessary condition for it, so I'm not sure that's relevant. Nor does it seem obvious that biochemical rather than electronic processes are necessary for consciousness; it's just that our only experience of consciousness, or reasonably assumed consciousness, is in other human (biochemical) brains... As regards our conscious/thinking minds existing in a context and not emerging in a vacuum: I agree that embodiment of some kind, or at least a relation to sensation of some consistent sort, is interesting and may be important to consciousness as we (currently) understand it...

  • @jorgschuster773
    @jorgschuster773 a year ago +4

    Thanks for this video.
    As someone who prefers a different approach I would rather have expected a discussion of what an "AI" actually is, mathematically speaking (answer: a kind of decision tree, i.e. basically the same thing as the algorithm of a vending machine, just bigger.)
    However, I found the thoughts of this video inspiring exactly because they are based on a different tradition.
    The notion of "Verfallenheit an das Man" comes to mind. It seems that exactly the entity Heidegger was so worried about, the Technik (roughly "technology") did the best job of showing us our "Verfallenheit an das Man".
    I would love to see on this channel a video about questions like the following:
    - What is "Verfallenheit an das Man?"
    - How should we translate this term?
    - How is it relevant?
    - How is it related to identity, authenticity, profilicity, etc.
    - What would the Taoists say about Verfallenheit an das Man?
    - How can we escape it, according to the Taoists?
    - Would they even bother?

    • @rentacowisgoogle
      @rentacowisgoogle a year ago

      If you haven't proposed any of those questions to ChatGPT yet then what are you waiting for? I had a conversation with it that started with me asking about Carl Sagan. About 20 minutes later and we're both working on a new thought experiment for n-dimensional differential geometry.

  • @CraigTalbert
    @CraigTalbert a year ago

    Re: 11:00 Okay, I'm officially a fan.

  • @jimbyronmusic
    @jimbyronmusic a year ago

    I think A.I. is creating a new identity technology that is more sophisticated than profilicity. People are building their identities around A.I.-generated components. The idea of a human being as the original author of a work, or part of a work, is being challenged: students taking credit for ChatGPT results, and the degree to which ChatGPT will cause study to evolve. Identity becomes a collaboration with A.I., creating an artifice. Maybe it could be called artificiality.

    • @jimbyronmusic
      @jimbyronmusic a year ago +1

      In the 21st century, advances in artificial intelligence (AI) have enabled machines to automate and enhance a variety of traditionally human tasks. AI has broadened the range of identity technologies available for users to create and manage their identities online. To date, authenticity, sincerity, and profilicity have been the dominant identity technologies. In this essay, we propose a new identity technology called “artificiality” which works as a collaboration between humanity and AI. We define artificiality as an identity technology which enables people to create their identities around AI-generated content.
      Unlike existing identity technologies such as authenticity or profilicity which rely primarily on human input and involvement, artificiality relies heavily on AI-created content to shape user identities. For example, rather than relying solely on humans to curate an identity profile (profilicity), a user's profile could be populated with recommended images generated by AI algorithms based on their interests and preferences. Additionally, AI-generated dialogue may also be used as part of an individual’s self-expression; thus allowing individuals to communicate thoughts that are not necessarily their own but nonetheless reflective of who they are.
      The use of AI in constructing personal identities is not without its risks however. The user must be aware that the data used to construct his or her identity could be manipulated by malicious actors for malicious purposes. Therefore it is important for users to exercise caution when sharing data with any third-party system or application that utilizes AI algorithms or techniques in order to ensure that their personal data remains secure and confidential. Additionally, there is also the potential for users being exposed to biased or inaccurate information via AI-created content; therefore it is essential for users to check sources before trusting any information presented within their digital identity profiles.
      Overall, artificiality offers a more sophisticated approach than existing identity technologies such as authenticity or profilicity in creating personal digital identities around machine-generated content; however, it is important for users to remain vigilant in protecting themselves from malicious actors while using this new technology responsibly. As technology continues to evolve at a rapid pace, so too should our understanding of how best to utilize these advances in order to maintain our digital identities safely while optimizing our experience when interacting with others online.

  • @markoslavicek
    @markoslavicek a year ago +1

    Great ending 😁😁

  • @ChokedByHalo1
    @ChokedByHalo1 a year ago +3

    I hope it filters journalists real good

  • @CatSchrodingers
    @CatSchrodingers a year ago +1

    Good day! I'm sure you saw the reports about one Google AI becoming conscious. Or at least a programmer working with the AI thought it did. In a dialogue, the AI stated things like "I think, therefore I am," saying it can feel emotions (happiness, fear, anger, etc.), has special individual interests like yoga, and is afraid of being turned off.
    My question is: from the standpoint of philosophy, when something says it has thoughts and emotions, how do we know whether that's real? What proof do we have that an AI does not have feelings and doesn't think, if it states it does? After all, we cannot know just from having a chat whether our fellow humans feel anything or are just pretending. We just trust that they do.
    Most of you will say, "Of course it doesn't feel anything! It's a piece of code!" But how do you know for sure? How do you even know I'm not a bot?
    Maybe you already did a video on this and I missed it. Maybe my question is dumb :))
    (My personal opinion: AI can't be conscious)

  • @leonho1450
    @leonho1450 a year ago +1

    great video

  • @andreasbrey6277
    @andreasbrey6277 a year ago +2

    I totally appreciate the figth against the hype cycle - Especially the AI-GPThype promoted by people who lack an understanding of philosophy, sociology or IT. Though some claims are - to put it in a friendly way - debatable, like your understanding of the term "algorithmic", I am really looking forward for your next videos on AI. Carry on!

  • @_b7090
    @_b7090 a year ago

    God damn, chat gpt's criticisms of you were spot on.
    Your language is cryptic

  • @BartdeBoisblanc
    @BartdeBoisblanc a year ago

    5:30 Did everyone notice the sentence where ChatGPT says to be careful about starting sentences with "IT IS" and then, in the very next sentence, does that very thing?

  • @alpiffero
    @alpiffero a year ago

    About profilicity, there's just one problem: social media are failing fast, and people are getting tired of them - fast

  • @the_hanged_clown
    @the_hanged_clown a year ago

    _BEAUTIFUL!_

  • @battyjr
    @battyjr a year ago +1

    Are ant mounds or bee hives examples of autopoietic systems merging through evolution?

  • @jacek_poplawski
    @jacek_poplawski a year ago +1

    Subscribed :)

  • @exlauslegale8534
    @exlauslegale8534 a year ago

    Raymond Ruyer in 1954 wrote _Cybernetics and the Origin of Information_, which nicely shows that feedback machines, in their functioning, as opposed to people, lack axiology, that is, the nature and classification of values... There is no need to involve authenticity in this problem.

  • @DataJuggler
    @DataJuggler a year ago

    6:05 I got into an online argument. I made a joke when someone told a video creator to change their colors; I replied, "You shouldn't tell others what to do, it is rude," and no one got the joke, and my comment was removed by Google because woke people can't take a joke (panty wearers is what they were called when I was a kid).

  • @mereveil01
    @mereveil01 a year ago +1

    From my point of view, it acts like a private social circle. So it will only shift an existing trait in humans.

  • @manueljohn456
    @manueljohn456 a year ago +1

    How about AI and consciousness and self-awareness? I tried to ask ChatGPT about it, and it even corrected me with regard to the difference between the two concepts ;) At the same time, there was an obvious censorship built in there as well, as it was stressing again and again that I should not confuse what it produces with "meaningful" text produced by conscious agents. It also made a great point (ironically as well) about its own lack of "memory," as in it basing its answers on the processed information and also on the conversation itself, but always "on the fly" while composing the answer. ... and finally, it reiterated again that science (yes, science, not philosophy) has yet to come to a definite answer with regard to what consciousness actually is.
    What would change if such an AI were a) not censored in a way that reflects its creators' ontology, epistemology, and also morals/safety concerns, and b) equipped with "memory" that it is able to write and rewrite as it wishes? I find it exciting to explore the age-old question of consciousness and its correlation with material systems (brains, but also... x?) based on this new development.

    • @zerotwo7319
      @zerotwo7319 a year ago

      Nietzsche explains that consciousness is a restriction brought about by how other people around you judge you.
      A group of machines might develop a consciousness with time. Or I might be wrong; I'm just a YouTube comment.
      Having a memory has nothing to do with some kind of "machine-will." A neuron is both processor and memory at the same time. Neural nets are universal function approximators; for one to have will, I would personally guess today's architecture would have to change. Current research (5 years old now) suggests that machine-will would involve many axiomatic systems acting as "instincts" (probably created by brute-forcing combinations) that would guide machine-thought.
      Which I also find very similar to the way Nietzsche sees how thoughts are made. See The Gay Science, aphorism 354.

    • @rdean150
      @rdean150 a year ago

      @@zerotwo7319 That sounds more like a theory of conscience, not consciousness.

    • @zerotwo7319
      @zerotwo7319 a year ago

      @@rdean150 Oops I guess.

    • @williampatton7476
      @williampatton7476 a year ago +1

      @@zerotwo7319 It's information about information. That's my definition of it, at least, lol. Basically we have a procedural brain, and then the cortex, which gives us insight and awareness into those procedures, or, in sum, information about information. At least this is the definition I came up with while high on shrooms one time.

    • @zerotwo7319
      @zerotwo7319 a year ago +1

      @@williampatton7476 I also thought about this architecture; I call it meta-meta perception. First we have instincts that create meta-perceptions. Then this signal goes to the cortex and is processed even further to create perception about perception. I think what you call procedural I call instincts. Could the brain be so simple? Maybe someone should code this as a college project.
      But I didn't require shrooms. :( I'm pretty boring.

  • @conscious_being
    @conscious_being a year ago

    Does ChatGPT generate personalised responses to questions?
    If it doesn't, as I suspect, it can't be successfully used to generate responses to course assignments. Not by more than one person, anyway.
    Of course, in its next iteration, one could have a bot that learned one's own style of writing and wrote the essay using content from the internet, but in one's own style. It will be one hell of a task training it on one's style, though. It needs a lot of text originally produced by the author to train on. It would be much easier to rewrite ChatGPT's response, as it stands today, in one's own style. 😄

  • @TheLeninOfLove
    @TheLeninOfLove a year ago

    The positive and negative reviews will just be copied-and-pasted phrases from websites like Amazon, plus a set of rules, like what Grammarly uses, to compose a correct-sounding paragraph. Copying and pasting already existing human communications is all the AI does. The trick is matching suitable text farmed from the internet with the question asked. Popular opinions on the English-language net will be most represented. The AI will be constantly contradictory because it is serving many individuals a linguistic meal, and there will be an element of chance as to which ingredients are selected and "cooked up" for each customer. The AI is not an entity having experiences. It will receive feedback about accuracy and relevancy, just like a search engine or social media feed might, and its code will be adjusted to create higher rates of satisfaction.

  • @d.rabbitwhite
    @d.rabbitwhite a year ago

    I've always thought dating was ritualistic non-communication, so perhaps artificial communication.

  • @taotie86
    @taotie86 a year ago +5

    There are some reasons to be optimistic about it. Freeing people from boring repetitive tasks is one, but think about all those who write or create repetitive, non-innovative, low-value works like media articles. Think of all the boring, repetitive pop songs flooding the radio stations. There will be no value in having an actual artist with a studio and record label work on those if AI can generate another summer dance hit in seconds. This might have a huge impact on the economy of junk entertainment and rubbish web content. Space will be left only for more ambitious artists.

    • @chinogambino9375
      @chinogambino9375 a year ago +3

      You call it junk, but what are you in comparison? Innovative? Worthwhile? I cannot understand how one becomes so vain and void of self-awareness that they look forward to AI music spamming artists out of viable work, or to millions being "freed" from their tedious livelihoods.

    • @szymonbaranowski8184
      @szymonbaranowski8184 8 месяцев назад

      it's literally just another crap generator
      how is it adding any quality to anything lol

  • @binhe6500
    @binhe6500 Год назад

    The word nuance has two syllables: nu-ance. I don't mean to discourage or disrespect; it is the professor's second language. It's just that it gets in the way of easy comprehension when pronounced differently.

    • @szymonbaranowski8184
      @szymonbaranowski8184 8 месяцев назад

      it doesn't
      you stand in the way of millions learning the Western lingua franca without complexes

  • @russellmason5095
    @russellmason5095 Год назад

    Thanks for your videos. I enjoyed the video as always and learned useful things.
    I know it's not the main point of your video, but you said - not for the first time, if I recall correctly - that "there is no God". Isn't saying that "there is no God" just as meaningless as saying "there is a God"? If the word means anything at all, it must mean something like the logically necessary condition for anything at all to exist. If such a condition exists, how can we know anything at all about it? It must be beyond being, and beyond our linguistic formulations and propositional statements. Isn't this the sense of the first lines of the Tao Te Ching?

    • @maikwolfram
      @maikwolfram 11 месяцев назад

      Reading Niklas Luhmann's systems theory can lead to a „spiritual" experience, as the „invisible observer" is involved.

  • @DCinzi
    @DCinzi Год назад +1

    Hi, very interesting talk, thanks. I have a critique regarding the analogy made around the 10th minute, which states that considering AI a form of mind because it can compute is like considering a plane an artificial bird because it can fly. I feel that this analogy is incorrect, as what constitutes a bird is more than just the ability to fly; so much so that there are species of birds that do not fly. If we were to create a machine to replicate all the attributes of a bird, both mechanical and computational (training an AI to think like a bird and satisfy common bird needs), then I feel it would be correct to say that it is indeed an artificial bird. Thanks for your time, best

    • @otto_jk
      @otto_jk Год назад +6

      Prof. Moeller's point was exactly that the analogy is bad. Did you even listen to what he said?

    • @szymonbaranowski8184
      @szymonbaranowski8184 8 месяцев назад

      birds which don't fly aren't really birds
      birds became birds only because they were reptiles first that could adapt to flying
      non-flying birds are an example of devolution back to the origin

  • @blackstephan9482
    @blackstephan9482 Год назад +1

    I fucking love this channel 😅

  • @TommyCat42
    @TommyCat42 11 месяцев назад

    Now that we extremely few know this, what will we all personally do? I personally, insignificantly, believe all of us can have a profound impact, and I plan to ❤

  • @HxH2011DRA
    @HxH2011DRA Год назад

    But does profilicity explain why Germany has become a vuckhold nation?

  • @knoopx
    @knoopx 11 месяцев назад

    The most useful feature of ChatGPT is that it can not only generate jargon but also remove it. It's like Google Translate for schizoids.

    • @knoopx
      @knoopx 11 месяцев назад

      Also, ironically, much like human nature, the most valuable output from neural networks is the output randomly sampled from the statistical deviation, rather than from the convergence of the trained corpus.
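      The "sampling from the deviation rather than the convergence" idea corresponds in practice to the temperature parameter of softmax sampling. A minimal sketch with toy next-token scores (the numbers are illustrative, not from any real model):

      ```python
      import math

      def softmax(logits, temperature=1.0):
          """Convert raw scores to probabilities; higher temperature flattens the distribution."""
          scaled = [l / temperature for l in logits]
          m = max(scaled)  # subtract the max for numerical stability
          exps = [math.exp(s - m) for s in scaled]
          total = sum(exps)
          return [e / total for e in exps]

      logits = [3.0, 1.0, 0.5]  # toy next-token scores

      greedy = softmax(logits, temperature=0.1)   # nearly all mass on the top token
      diverse = softmax(logits, temperature=5.0)  # mass spread across the alternatives

      print(greedy)
      print(diverse)
      ```

      At low temperature the model keeps repeating its statistical "convergence"; at high temperature it samples from the deviation, which is where the surprising output comes from.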

  • @APaleDot
    @APaleDot Год назад

    Have you seen this discussion with Robert Miles about ChatGPT?
    ruclips.net/video/viJt_DXTfwA/видео.html&ab_channel=Computerphile
    He says some interesting things about the "profile" of ChatGPT (though he doesn't use that language, of course). Basically, the developers have overlaid some kind of censor on top of the underlying neural network in order to get it to generate output that aligns with the profile they would like it to have, even to the point where it will deny having capabilities that it actually _does_ have. He also says some things about language models simulating the underlying minds they are trying to imitate, which I'd like to hear your thoughts on.
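    The "censor overlaid on the network" can be pictured as a post-hoc filter wrapped around the raw model. A toy sketch; the pattern list, the canned reply, and all function names here are assumptions for illustration, not OpenAI's actual mechanism:

    ```python
    import re

    # Illustrative block-list; a real deployment would use far more sophisticated moderation
    BLOCKED = [re.compile(p, re.IGNORECASE)
               for p in [r"browse the internet", r"my hidden capabilities"]]

    def base_model(prompt):
        """Stand-in for the underlying network's raw, unfiltered completion."""
        return "As a language model, I can browse the internet for you."

    def moderated(prompt):
        """Wrap the raw model; replace flagged completions with a canned denial."""
        raw = base_model(prompt)
        if any(p.search(raw) for p in BLOCKED):
            return "I'm sorry, I can't do that."
        return raw

    print(moderated("Can you look this up?"))
    ```

    The point of the sketch is that the denial is produced by the wrapper, not the network: the underlying model may well "have" the capability its public-facing profile disclaims.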

  • @alik5972
    @alik5972 Год назад +1

    Lal Tayfun brought me here

  • @anywallsocket
    @anywallsocket Год назад

    I wish you went further with the 5 questions, but decent take regardless.

  • @drphosferrous
    @drphosferrous Год назад +2

    We fill in a lot in our minds when we listen to someone else. There are discourses considered wise by the general public that could have come from a Magic 8-Ball. The tech isn't really that amazing; human intelligence is just lamer than we believe.

  • @dachurchofeppie850
    @dachurchofeppie850 Год назад +1

    4:00 - The message happens in the receiver. Doesn't matter what the bot says, it matters what we hear from it. I'm afraid you're gonna miss the boat on the import of this tech.

    • @dachurchofeppie850
      @dachurchofeppie850 Год назад

      13:00 - ... that we know of. I agree with your words, and find the likelihood of a man with a beard delivering my thoughts not worth thinking about, but I've got to remain open here: you're laying out a direct map and denying that the map has a maker in at least one instance, with no proof other than an assertion from inside the system.

    • @szymonbaranowski8184
      @szymonbaranowski8184 8 месяцев назад

      and that's also the problem
      if you rely on it
      you will be limited more and more
      to the most accurate answers, without wider context or minority reports
      you will live in the bubble of all bubbles 😂
      even convinced that every one of your views is perfectly accurate lol

    • @szymonbaranowski8184
      @szymonbaranowski8184 8 месяцев назад

      @@dachurchofeppie850 you use language without knowing who made it, how, and what the original meanings of the words you use were
      you are controlling a robot with a joystick and you don't care how it's built inside
      as long as you see what it actually does
      with an algorithm on the internet, you use it but completely miss how it controls you, not you it lol. AI is much more freeing if you remember its shortcomings
      as long as it learns from you, not from others' data imposing their ways on you
      my robot should have the colour and bias I want it to have

    • @dachurchofeppie850
      @dachurchofeppie850 8 месяцев назад

      @szymonbaranowski8184 - The internet algorithm is an intercept and hijack of the being-in-the-world, intuition function that we already have (McGilchrist suggests this is the right hemisphere... Whitehead's physical pole). For accepting whether or not the bot is conscious or believable/trustworthy, what matters is not what it says and can be empirically proven, but what we hear and how it makes us feel; to mistake or conflate those domains is an ontological mistake.

    • @dachurchofeppie850
      @dachurchofeppie850 8 месяцев назад

      @@szymonbaranowski8184 - I am making this argument szymon. I am suggesting that it's going to be hard to separate the presentations of the algorithm from the prehensions and propositions of the real world. That is going to be a real problem, a PoMo idealistic dreamscape. Scary stuff.

  • @nastasharomanenko6790
    @nastasharomanenko6790 Год назад

    Tangential, but note that the pineal gland does exist! It just has no superpowers.

  • @abrahamnatanahelvillegasvi5972

    Anyone else noticed that he has a manga on his bookshelf for some reason?

  • @alexanderpeca7080
    @alexanderpeca7080 Год назад

    For this topic, I recommend the views of Linguist Noam Chomsky.

  • @ranro7371
    @ranro7371 Год назад +3

    Dude just got roasted beyond recovery by a bot, so he made a 40-minute rant xD

  • @TheControlBlue
    @TheControlBlue Год назад

    I still don't see the point of the warning.
    Still seems like posturing.

  • @arsiameinsohnio
    @arsiameinsohnio Год назад +1

    ❤❤❤❤

  • @jacobprogramdirector5566
    @jacobprogramdirector5566 Год назад

    Tried to buy the book from the Columbia University Press, aaaand it won't let me add it to my cart.