Can ChatGPT o1 actually *think*?

  • Published: Nov 11, 2024

Comments • 964

  • @sandrorass890
    @sandrorass890 Месяц назад +408

    Are you telling me that my AI-Girlfriend is lying to me when she says that she is thinking about me?

    • @hanskoene4188
      @hanskoene4188 Месяц назад +39

      AGI stands for Artificial Girlfriend Intelligence

    • @imacds
      @imacds Месяц назад +6

      Exactly! The models run on your prompt only to produce your response. They aren't running continuously. Unlike a human, they aren't thinking about their boyfriend while at work, for example. Some science fiction (e.g., Westworld) imagines that this very fact is a reasonable/potential line to draw for AI personhood.

    • @JacobSantosDev
      @JacobSantosDev Месяц назад +11

      My AI girlfriend wouldn't lie to me. She told me he loves me like their mother. It is super confusing but I get what she is talking about. I hope to take it to the next level after I upgrade to the $99 an hour package.

    • @s.patrickmarino7289
      @s.patrickmarino7289 Месяц назад

      I am not sure I would believe the human version. How could you be sure?

    • @noname-ll2vk
      @noname-ll2vk Месяц назад +5

      No. That's different. She really loves you. Don't worry.

  • @ScilentE
    @ScilentE Месяц назад +382

    Loving the lego plants in the background! Your shorts are fantastic, thanks for creating some longer form content as well!

    • @albertatech
      @albertatech  Месяц назад +81

      Thank you! It's been fun branching out a bit :)

    • @jansamohyl7983
      @jansamohyl7983 Месяц назад +2

      Wow, thanks for pointing that out! @albertatech, can you make a video on these?

    • @traveller23e
      @traveller23e Месяц назад +3

      @@albertatech ughh, that is so bad...I'm gunna wake up at 3 am thinking about it and groan again

    • @doobybrother21
      @doobybrother21 Месяц назад +4

      @@albertatech it's the breadboard that we're all interested in! Does it do AI?

    • @tookmusic
      @tookmusic Месяц назад +2

      hahaha, I'm a web developer and I have the same Lego set next to my desk. :)

  • @electrified0
    @electrified0 Месяц назад +177

    Hopefully these AI models can replace CEOs soon, perhaps they could create more believable marketing around AI than human CEOs

    • @st0ox
      @st0ox Месяц назад +3

      Yeah, because everything else they do feels scarily artificial. When it comes to believable marketing, though, I'm sure the stuff an AI comes up with is probably more human than anything CEOs ever came up with.

    • @maxave7448
      @maxave7448 Месяц назад +17

      Seriously, I feel like it would be at least 50 times easier to replace CEOs than programmers. Like, programmers have to keep track of every component in a system and build around that, solving unique problems along the way. What does a CEO do exactly? Sunbathing on a $500M yacht isn't really hard imo (though, I wasn't born into a family of billionaires, so idk). No joke though, wtf does a CEO do? They got so caught up in trying to replace hard-working, intelligent people that they forgot how f*cking useless they are to society.

    • @phamtuan1840
      @phamtuan1840 Месяц назад +9

      @@maxave7448 CEO? They get hired, lay off 30% of the workforce, and tell the BoD that they reduced costs by 10% or something, then the whole company crashes down while they land somewhere else with their golden parachute

    • @shraka
      @shraka Месяц назад

      That's not how our economic system works.

    • @ttcc5273
      @ttcc5273 Месяц назад +1

      The JD Vance model was able to successfully order donuts! Amazing advancement!

  • @connorskudlarek8598
    @connorskudlarek8598 Месяц назад +250

    As long as the MBA holder who can fire me understands the model does not think, I'm OK with whatever they wanna say it is doing.
    The problem is, the MBA holder does not understand. They believe the model can think. And OpenAI isn't doing much to help them realize it is not doing that.

    • @aieverythingsfine
      @aieverythingsfine Месяц назад +23

      The MBA is nearly always the problem

    • @JonBrase
      @JonBrase Месяц назад +31

      To be fair, I'm not sure that the critical thinking skills of the average MBA are much better than "spicy autocomplete".

    • @TheMotlias
      @TheMotlias Месяц назад +9

      Well, he watched a TED talk about AI, so that trumps your years of training and experience

    • @JonBrase
      @JonBrase Месяц назад +6

      @@TheMotlias It's not even that. Its Turing Test scores are higher than his, so *obviously* it can think, right?

    • @CedricBernadac
      @CedricBernadac Месяц назад

      I guess it depends on what exactly you call "thinking"

  • @zimmerderek
    @zimmerderek Месяц назад +43

    You didn't mention that it uses 5x as much processing power and 5x as much energy, making it 5x as costly on a business model that already loses billions per month industry wide.

    • @actuallyasriel
      @actuallyasriel 29 дней назад

      Go figure that a technique called "time-compute scaling" takes more energy for models of the same parameter count.
      Now imagine for a while how you might gain that energy usage back.

    • @Aleksei-p9g
      @Aleksei-p9g 2 дня назад +1

      Why would she? That's not the point of the video.

  • @NicholasMati
    @NicholasMati Месяц назад +70

    9:30: There remain 2 unsolved problems in computer science: general AI, naming things, and off-by-one errors.

  • @scoobydoobies
    @scoobydoobies Месяц назад +11

    "AI will take your job!"
    AI: There are 3, maybe 4, but probably 2 r's in "strawberry"
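    (As an aside, the character-level count the joke is about is trivial to check outside an LLM; a one-line Python sketch, purely illustrative:)

    ```python
    # Count the r's in "strawberry" directly at the character level,
    # which is exactly what a token-based model tends to fumble.
    print("strawberry".count("r"))  # -> 3
    ```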

  • @banalMinuta
    @banalMinuta Месяц назад +156

    This is why I really don't like the term hallucination. It's more correct to think of this as a model that always hallucinates, where the outputs just happen to align with reality because of the nature of the training data.

    • @GSBarlev
      @GSBarlev Месяц назад +15

      NIST's GenAI Risk Management Framework 600.1 recommends using the term "confabulation" instead.
      That doc is a really good read. Particularly the two pages where they describe the 12(?) areas of risk at a high level.

    • @mrpocock
      @mrpocock Месяц назад +2

      Confabulation is more difficult to say and write...

    • @xxlabratxx01
      @xxlabratxx01 Месяц назад +1

      I like the term mirage @@GSBarlev

    • @B03Eastwood
      @B03Eastwood Месяц назад +10

      The models guess an answer, and sometimes they are accidentally right.

    • @NameRealperson
      @NameRealperson Месяц назад +12

      What LLMs do is produce superficially plausible failures. Referring to those failures with terms from psychology just does OpenAI's marketing for them

  • @fios4528
    @fios4528 Месяц назад +102

    Thanks so much for branching out into longform content! I find all your videos a great mix of funny and informed. As a software engineer and AI grad student, it's nice to have your videos to look forward to.

    • @albertatech
      @albertatech  Месяц назад +16

      Glad you enjoy, thanks for being here!

    • @ThyxEthad
      @ThyxEthad Месяц назад +13

      @@albertatech Sorry to hijack the comment, but I think your shirt pattern messed up the video compression algorithm. The more you move, the lower the bitrate appears to be.
      Tom Scott has a 4min video on it called "Why Snow and Confetti Ruin YouTube Video Quality".
      Just a minor thing, but maybe it's something you'd like to know. Anyway love your content on other places, glad to see you on youtube as well. :D

    • @albertatech
      @albertatech  Месяц назад +13

      @ThyxEthad lol I’ve never thought about this before but you’ve sent me down a rabbit hole on video compression!

    • @dcat1730
      @dcat1730 Месяц назад +1

      I'll second that, I really enjoy the long form content too! As a woman in tech (robotics/sales engineering, looking to move toward software) I've been enjoying the perspective in the short form content, but it's great to hear more technical water cooler rants 😅

  • @RaptieFeathers
    @RaptieFeathers Месяц назад +222

    I despise that the term "AI" has been co-opted for this. They use it to make claims that evoke the concept of "General AI."
    Other industries use "AI" in a much better sense, like in gaming. We know that "AI" specifically refers to the algorithms that non-player entities in the game use to determine behavior.
    I'm okay with terms like "machine learning" and "neural network" because those aren't meant to be taken _literally_ and they don't carry the same connotations to the general public that "AI" does

    • @why772
      @why772 Месяц назад +17

      Yeah we will never get AGI from these models.

    • @syncrossus
      @syncrossus Месяц назад +13

      I don't think the term "AI" has been "co-opted" for anything. If you asked my mom what AI was in 2010 she would have probably pointed to Isaac Asimov's books, not alpha-beta pruning. I think people naturally think of "AI" as being something you can interact with like a human.
      Lots of people, mainly laypeople, think "AI" means and/or _should mean_ artificial general intelligence or artificial superintelligence. On the other hand, the field of AI has been around since the inception of computing and given us "formal AI" algorithms like Dijkstra's.
      The latter position is more historically accurate, but who today thinks of a GPS as "AI"? The former position better encapsulates the fact that these classical algorithms aren't intelligent. To paraphrase Robert Miles, maybe "AI" isn't a well-defined domain. Maybe we shift the goal posts every time we expand the capability of computers. Maybe that's OK. I will continue to refer to AI in the old way because I think it's more useful and I come from an academic background where that's what I learned in "AI classes". But other people may view the subject differently and that doesn't mean they're necessarily wrong.

    • @syncrossus
      @syncrossus Месяц назад +6

      @@why772 Will we not? AGI isn't even well defined. Honestly, I think we're already there. Remember when we had separate language models for translation, summarization, classification, conversation, etc. ? Remember _narrow AIs_? These large generative language models are so capable and can operate on such a wide variety of domains that putting them in the same bucket as AlexNet, GloVe embeddings and even BERT or GPT1 is just silly in my opinion.
      I consider GPT-2 to be the first AGI. It is an **artificial** agent that mimics some form of **intelligence** and that has **general** capabilities that allow it to be applied to a variety of tasks which it can learn on the fly through examples. Do you realize how fucking insane that is? Do you realize how fundamentally this changes the game? GPT-2 is pretty undeniably an artificial intelligence that is general, and I have no qualms in saying that it's an *Artificial General Intelligence*. As for GPT-3, take a time machine to 1950 and put it in front of Alan Turing and his peers. I'm ready to bet there would have been consensus that "machines can think". Do they "think" in the same way humans do? Of course not, but functionally speaking who the fuck cares?
      I think the only reason we don't think of LLMs as AGI is because we've seen the tech evolve. It's just the treadmill of progress shifting the goal posts.

    • @TheAkiller101
      @TheAkiller101 Месяц назад +4

      Yeah, ironic that 'Open AI' is not open and not AI

    • @micayahritchie7158
      @micayahritchie7158 Месяц назад +4

      @@syncrossus Why is Turing some form of silver bullet here? First, pre-training is the reason I don't think it matters. It also has no ability to evaluate truth

  • @akvlad90
    @akvlad90 Месяц назад +67

    One question I didn't understand and still have no answer to: what is "thinking"? How can we determine whether something is thinking or not? How can we tell that spitting out the statistically most likely next word is not what our brain does while thinking?

    • @SaltyPuglord
      @SaltyPuglord Месяц назад +19

      That's a very important point. We *don't know* how human brains do what they do. Not in detail, anyway. There are these neural networks and signals race around inside them and then... BOOM! Consciousness, reasoning, etc. If we DID understand how that process works in full detail... then we WOULD be able to make Strong AI. It's the fact that we barely understand our own brains and how they work, that makes us stumble around like blind idiots when attempting to create AI.

    • @mikicerise6250
      @mikicerise6250 Месяц назад +20

      We can't. 🤦 That is the elephant in the room. Which is why Alberta's whole argument falls flat on its face. It is true our brains do a lot more than what are currently called AIs do. LLMs do not model the entire human brain. No one serious has ever argued as much. The question is, do they accurately model the part of the human brain that produces speech? That, nobody really knows, because we don't know exactly how it works, either in the human brain OR in the LLM. And maybe, like the weather system, while we can get an approximate understanding, good enough to predict an 80% chance of rain in town tomorrow, it is just too complicated to *fully* understand.
      However, if you study the speech patterns of people with psychosis or a severed corpus callosum, or following a stroke, or with other brain abnormalities, people who can still *speak* perfectly but have had brain damage that perturbs the communication between the speech center and the rest of the brain, or even just young children who have recently learned to speak, guess what you find? Their output uncannily mirrors that of LLMs. If the LLM is not an accurate model of the human speech center's connectome, then it's a pretty damned incredible coincidence. 👀
      But sure, it's not possible because "Jesus H. Christ" wasn't the designer. Whatever. 🙄🙄

    • @truejim
      @truejim Месяц назад +21

      Something to consider: many intelligent creatures seem to think, but they seem do so without language. It seems language arises from thinking, not the other way around. If so, then better and better LLMs will never give rise to “thinking”.

    • @NihongoWakannai
      @NihongoWakannai Месяц назад +8

      ​@@truejim you can't really compare the natural evolutionary process of earth lifeforms to completely alien lifeforms which we have created in an evolutionary vacuum. That's the crux of the problem, we've created alien life forms and have no way to measure their intelligence. A computer clearly thinks more complexly than simple earth lifeforms, but what conclusions can we possibly draw from that when all other intelligence-based measurements are based on similarity to humans and this is clearly a very non-human intelligence.

    • @marcosfraguela
      @marcosfraguela Месяц назад +2

      Exactly! One feature of thinking that current LLMs don't have: we know what we know. When we think of an answer to a question, we almost instantly know whether we know it or not.

  • @csbruce
    @csbruce Месяц назад +73

    They just asked their auto-complete tool how to advertise their auto-complete tool.

  • @bankenichi
    @bankenichi Месяц назад +33

    A very sane take. The hype and clickbait is becoming insane.

  • @adfaklsdjf
    @adfaklsdjf Месяц назад +44

    "Reasoning tokens" is just how they're referring to the fact that the model's internal dialogue is discarded between turns. It generated a bunch of tokens while doing its "reasoning" step, which just means it's mumbo jumboing at itself for a while. Those tokens are not carried forward so they do not consume context window of future turns.

    • @mrpocock
      @mrpocock Месяц назад +4

      I didn't realise they got dropped. I thought they were just masked out from the visible output. I am actually waiting for models that recursively RAG their internal dialogue.

    • @tomwright9904
      @tomwright9904 Месяц назад

      @@mrpocock If I understand correctly, each new token is generated with the context of the tokens so far, so you get this form of recursion.
      Obviously context length is the limit.

    • @maxave7448
      @maxave7448 Месяц назад +1

      So it's basically prompt-"engineering" itself?

    • @adfaklsdjf
      @adfaklsdjf Месяц назад +1

      ​@@maxave7448 i guess? sort of? i would characterize it as "let's think step by step" on steroids.
      i think essentially if you get it to spend a lot more tokens approaching the answer gradually, it leads to more correct answers.
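      (A rough sketch of the "step by step on steroids" idea, with ask_model as a hypothetical stand-in for any LLM call:)

      ```python
      # Same question asked two ways: directly, and with an instruction to spend
      # tokens on intermediate steps before the final answer (chain-of-thought style).

      def ask_model(prompt):
          # hypothetical stand-in for a real model call
          return f"<model output for: {prompt!r}>"

      question = ("A bat and a ball cost $1.10 together; the bat costs $1.00 "
                  "more than the ball. What does the ball cost?")

      direct_answer = ask_model(question)
      stepwise_answer = ask_model(
          "Think step by step and write out your reasoning before giving a final answer.\n"
          + question
      )
      print(direct_answer)
      print(stepwise_answer)
      ```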

    • @defipunk
      @defipunk Месяц назад +1

      ​@@mrpocockAren't some of the agentic frameworks doing that? Iirc when I played with them, that was my understanding how they squeezed longer history into the short context of local models.

  • @imjoshellis
    @imjoshellis Месяц назад +23

    "or they got laid off" lmao harsh

  • @Happyduderawr
    @Happyduderawr Месяц назад +9

    If you've been around tech bros for a while, sadly some of them really do think that the brain just works like a giant matrix of parameters modelled after neurons. So their claim about these models being able to reason really does make sense to them, because they think that this is fairly similar to how minds work. I don't get why they think this, but they do. It's kinda like the behaviorist psychology of the early 20th century that viewed human beings as basically shaped entirely by their environment, to the point that biology doesn't matter.

    • @Happyduderawr
      @Happyduderawr 28 дней назад

      ​@Singularity606 Exactly, it doesn't explain the magic parts, which need to be understood for reasoning and AGI.

    • @jmr5125
      @jmr5125 14 дней назад

      This is a philosophical question: computer scientists who are involved in general AI research are materialists. From this perspective, the _only_ things that exist are matter and energy. In this philosophy, "thought" is a purely physical process that involves neurons, neurochemicals, and electricity acting solely according to well understood physical laws. We *do* understand how an individual neuron works, after all. Computers can simulate the simple laws of chemistry and physics that make individual neurons work, so (in principle) a program must exist that produces whatever "thought" is. This *does not mean* that LLMs are an example of this -- just that it is _possible_.
      Dualists believe that there are three types of existence -- matter, energy, and "something extra." In religious contexts "something extra" is commonly referred to as a soul, but not all dualists are religious. Dualism (despite the fact that it is fundamentally self-contradictory) is by far the most common position held by lay people and is the "default" position. A dualist can argue that *no* digital computer can think, because it lacks the "something extra."
      This is a valid viewpoint, but if that is your belief, your evaluation of attempts at general AI is... not especially valuable?

  • @feylezofriza
    @feylezofriza Месяц назад +24

    Professional philosopher here. I don't think thinking is by definition a biological process. There has been a long standing debate in philosophy about whether machines can think. There are good arguments on both sides. It is not as black and white. People like Searle are on your side. But people like David Lewis and Jerry Fodor are on the other side. Neither side is stupid enough to make a dictionary mistake.
    Love your content. Thanks for reading this.

    • @rainrope5069
      @rainrope5069 Месяц назад +7

      I do not think that thinking is a purely biological process, but I also do not think that current language models or any other generative AI are capable of thought. I think it is possible in principle to embed an actual mind in software; I just don't think that is what language models actually are.

    • @aniksamiurrahman6365
      @aniksamiurrahman6365 Месяц назад +2

      I agree. But I don't think modeling each word as a vector in a gazillion-dimensional vector space and then finding the nearest neighbor is called thinking. If you're wondering, the mathematical process I described is very roughly how transformers work.
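      (A toy sketch of that "words as vectors, nearest neighbour" picture, using made-up 3-dimensional vectors instead of a gazillion dimensions:)

      ```python
      import math

      # Made-up word embeddings; real models use thousands of dimensions.
      vectors = {
          "king":  [0.9, 0.8, 0.1],
          "queen": [0.9, 0.7, 0.2],
          "apple": [0.1, 0.2, 0.9],
      }

      def cosine(a, b):
          dot = sum(x * y for x, y in zip(a, b))
          norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
          return dot / norm

      query = [0.85, 0.75, 0.15]  # embedding of some new word
      nearest = max(vectors, key=lambda w: cosine(vectors[w], query))
      print(nearest)  # -> "king" with these toy numbers
      ```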

    • @41-Haiku
      @41-Haiku Месяц назад +1

      ​@@aniksamiurrahman6365Biological neurons do a similar kind of linear algebra. I don't see the difference.

    • @aniksamiurrahman6365
      @aniksamiurrahman6365 Месяц назад +1

      @@41-Haiku Really? Can you point to a source?

  • @Graphene_314
    @Graphene_314 Месяц назад +8

    "Yo dawg I heard you like RNNs so I put an RNN in your RNN so you can AI while you AI" - ClosedAI

  • @brettknoss486
    @brettknoss486 Месяц назад +13

    In the 50s computers were called electronic brains. So anthropomorphizing is pretty old.

    • @gauravtejpal8901
      @gauravtejpal8901 Месяц назад +2

      Steam engines and looms were anthropomorphized at one point

    • @NameRealperson
      @NameRealperson Месяц назад +1

      ​@@gauravtejpal8901Treating a thing as a person makes it easier to then treat people as things

    • @aieverythingsfine
      @aieverythingsfine Месяц назад +2

      Yeah, that's a fair point. So there's anthropomorphism for explanation (using it to create a level of abstraction that can communicate a desired piece of information) and anthropomorphism for obfuscation (using it to hide flaws/problems/inner workings, to almost pull a cloak over the idea). Nuance is most often key.

    • @aieverythingsfine
      @aieverythingsfine Месяц назад +5

      I'd argue that calling AI errors "hallucinations" is anthropomorphism for obfuscation. Calling them hallucinations has no benefit in explaining or communicating what's actually happening, so it has no value as a tool for anthropomorphism for explanation. Ergo it is an obfuscating term. We can also tell this because it hides possible explanation of the phenomenon by putting the variables in a "black box" object (the term hallucination), taking us a step away from understanding rather than a step closer.

  • @ubik3750
    @ubik3750 Месяц назад +12

    I like that you took time to explain your point more in depth!! Please do that more often :)
    Your argument about the "think" part could have been even more impactful if you had begun by defining the word think, from the dictionary or from a philosophical definition, so we'd have a bit more than a feeling and could demonstrate that ChatGPT is or is not thinking :)

    • @albertatech
      @albertatech  Месяц назад +3

      thanks! good idea for next time :)

  • @BlackIce1231
    @BlackIce1231 Месяц назад +46

    I agree, these aren't really "thinking" models and it's an attempt to start a new hype cycle. The public has been souring on AI for a while, and I don't believe a "thinking" AI model is going to improve that.
    Of course, OpenAI could release more details about what goes into "reasoning" tokens, but that would require them to actually commit to their original goal of creating "ethical" AI practices and being, well, open.
    Seriously, why they haven't gotten more scorn over their lack of transparency just pops my top. 😤

    • @kellik7931
      @kellik7931 Месяц назад +3

      @@BlackIce1231 They don't get scorn for not following FLOSS/FOSS principles because being open source is inherently against the ideals of market capitalism.
      Yeah yeah, I know, free as in free speech not as in free beer. Companies are against those too

    • @watamatafoyu
      @watamatafoyu Месяц назад +15

      These AI companies firing their AI ethicists was all I needed to know about their AI ethics.

    • @maxave7448
      @maxave7448 Месяц назад

      ​@@kellik7931then they should have named themselves "ClosedAI"

    • @adolphgracius9996
      @adolphgracius9996 Месяц назад

      We have humans who talk and start stuff before thinking.

    • @ckorp666
      @ckorp666 Месяц назад

      they also harvested all of that data en masse under the guise of academic research for a non-profit, and now it's being used in purposefully opaque ways to create a subscription service that's marketed as replacing entire sectors of the workforce

  • @jakub2631
    @jakub2631 Месяц назад +13

    I discovered your channel recently through shorts and I really like it, keep up the good stuff! I like the stuff in the background, is it a CPU made on a breadboard?

    • @albertatech
      @albertatech  Месяц назад +8

      thanks! Yes it's from Ben Eater's tutorials but I sadly did not make it myself 😭

  • @Neppord
    @Neppord Месяц назад +8

    Thank you for all the sharp, well-informed and funny videos. I wish you good luck with your YouTube endeavors, cause I want to see more of this craziness!

    • @albertatech
      @albertatech  Месяц назад +4

      thank you! I suspect there will be much more of this craziness to come 😅

    • @Neppord
      @Neppord Месяц назад +2

      @@albertatech I work in the tech industry as an educator, mentor and coach, and I find it super hard to balance the hype and usefulness of AI. Sure, LLMs can speed up your work, but you should also not trust them to tell the truth or to keep your data safe.
      Basically treat them as the enemy power and try to outsmart them.
      Anyway, I find your content really good at balancing on that fine edge.

  • @Algeyr
    @Algeyr Месяц назад +13

    Imo, if we argue about the word "think", we also shouldn't call it artificial "intelligence".

    • @autohmae
      @autohmae Месяц назад +3

      I've almost been saying just "machine learning", and even that makes it sound more biological than it really is.

    • @kpunkt.klaviermusik
      @kpunkt.klaviermusik Месяц назад +2

      In many cases it's just Artificial Nonsense ^^

    • @TheRenegade...
      @TheRenegade... 28 дней назад +1

      ​@@autohmaeMaybe something like algorithmic evolution?

  • @Tibyon
    @Tibyon Месяц назад +3

    I hope you continue to make longer form content! I love it

  • @mr0big
    @mr0big 5 дней назад +1

    Are you certain that human thinking is more than a fancy auto-complete? I recommend the book "Adventures in Memory: The Science and Secrets of Remembering and Forgetting" by Hilde Østby and Ylva Østby. Many of the recent findings on how our memory and planning work are strikingly similar to how modern LLMs behave.

  • @QuestcastPenandPaper
    @QuestcastPenandPaper Месяц назад +4

    Always glad to see full-length videos from you as well! Keep up the good work.

  • @AustinSnider
    @AustinSnider Месяц назад +6

    "How many oranges are in the word "strawberry"?"

    • @SaltyPuglord
      @SaltyPuglord Месяц назад +3

      ""Time flies like an arrow; fruit flies like a banana." 😋

    • @carultch
      @carultch Месяц назад +1

      How many s's in innocent?

  • @kawaiidere1023
    @kawaiidere1023 Месяц назад +6

    But can we make the model depressed? That’s the real science fiction breakthrough we need

    • @zanido9073
      @zanido9073 Месяц назад +4

      "We proved ChatGPT was sentient by making it commit suicide."

    • @grzegorzowczarek3016
      @grzegorzowczarek3016 Месяц назад

      We can. Look up the interpretability paper from Anthropic. They were able to force it to think that it is a bridge; they could just as easily make it depressed.

    • @bluesailormercury
      @bluesailormercury Месяц назад +1

      @@grzegorzowczarek3016 Not only did they do that, they got it to hate itself. They did make it depressed and conflicted.

    • @grzegorzowczarek3016
      @grzegorzowczarek3016 Месяц назад

      @@bluesailormercury Yes! You are right. Which proves the point: those structures and patterns in the net can be adjusted like knobs. No need for a breakthrough, just a lot of work, and something as useless as a depressed robot can be achieved in 2024!
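      (A toy sketch of the "adjust features like knobs" idea: add a scaled feature direction to a hidden activation. Everything below is made up for illustration, not Anthropic's actual code:)

      ```python
      import numpy as np

      hidden_state = np.random.randn(8)        # pretend activation from one layer
      feature_direction = np.random.randn(8)   # pretend learned "feature" (e.g. gloominess)
      feature_direction /= np.linalg.norm(feature_direction)

      knob = 5.0  # how hard the feature is clamped up
      steered_state = hidden_state + knob * feature_direction  # fed onward instead of the original
      print(steered_state)
      ```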

  • @thecoffeepanda
    @thecoffeepanda 5 дней назад +2

    But isn't "human" logic also just a very, very big dataset that is spread across biological inclinations, environmental factors, cultural/social & personal values as well as self and others experiences? ... So whether an AI is thinking would come down to where you draw the line, at which point to call the data set sufficient.
    I'm not saying ChatGPT o1 is actually thinking, I'm just hinting at, if OpenAI oversimplifies what thinking actually means, maybe the problem isn't that the AI isn't actually thinking yet, but that the people behind it weren't thinking either!

  • @portobellomushroom5764
    @portobellomushroom5764 Месяц назад +6

    This channel having hundreds of thousands of views on each YouTube short you put out, but less than 50k subscribers, is criminal. This discussion on AI with high factuality and little loss of nuance is GOLDEN on its own and so rare to see. But you go beyond that and manage to explain it all in terms that don't rely on a background knowledge of AI at all, because when you introduce jargon you define it clearly so we're all on the same page, and only resort back to using it in the discussion when it's actually useful shorthand and NOT buzzword hype. Only 2.5k views on this 2 hours after posting. The algorithm is doing you dirty.
    Your channel better take off soon. It's the definition of underrated. I know giant names in the programmer-influencer space such as @ThePrimeTime have reacted to your content with basically no criticism in the past, so I'm honestly astounded you're still flying under the radar. More people should see what you have to say.
    At least when you do get big I get to say "I was watching her before she had 50k subs 😎". Hope to see more longer-form content like this going forward, really great analysis and deconstruction of the (IMO, still unwarranted) hype for ChatGPT o1!

    • @albertatech
      @albertatech  Месяц назад +1

      This has got to be the kindest comment I’ve ever received 😭 Thank you for being here and believing in my channel!! 🫶

    • @portobellomushroom5764
      @portobellomushroom5764 Месяц назад

      ​@@albertatech ScottPilgrimThatsKindOfSad.jpg. I really really hope I'm not the only one who feels this way about your content and am just the only one with enough mental energy to type it out

  • @euphoricpoptarts
    @euphoricpoptarts Месяц назад +17

    I've become bearish on "AI" with the release of the o1 model. We're at the point where OpenAI and others are using gimmicks like these "reasoning tokens" to increase model capabilities. Like you pointed out, "reasoning tokens" seem to be nothing more than hidden outputs which allow the model to "self-prompt". This doesn't fundamentally enhance the model, as power users were already hand-holding older models through "chain-of-thought". It seems OpenAI is really leaning into exaggerations and even purposefully propagating misconceptions about AI models. I think this is a major sign we are plateauing in terms of LLM capabilities.

    • @NihongoWakannai
      @NihongoWakannai Месяц назад +5

      It seems it's also only going to increase the cost of compute further, which makes this speculative bubble more fragile. The higher costs go without guaranteed long-term returns, the less likely this is to last. That's why they're so desperate to keep the hype cycle going: they know hype is the only thing letting them burn through cash.

    • @41-Haiku
      @41-Haiku Месяц назад +1

      This is not correct. Yes, o1 uses self-prompting and chain of thought reasoning, but it was also trained with reinforcement learning on the correct steps.
      It also improved performance by _a lot,_ without making the model any larger. The next generation of models will be larger and more performant, and tricks like these will continue to increase performance.

    • @41-Haiku
      @41-Haiku Месяц назад +2

      There is little to no evidence that AI is currently plateauing. The scaling laws are still holding up to now. If GPT-5 is not about as big of a jump in performance from GPT-4 as GPT-4 was from GPT-3, then I will concede that there is a plateau and I will breathe a sigh of relief.
      Literally half of all published AI researchers say that human extinction from AI is a plausible scenario (AI Impacts survey: Thousands of AI Authors on the Future of AI), and somehow this isn't on the news every single day.

  • @ItzDangani
    @ItzDangani Месяц назад +1

    “Spicy auto complete” is a wild understatement imo

  • @christiandarkin
    @christiandarkin Месяц назад +8

    i'm really not at all sure that humans are doing more than "spicy autocomplete" - our deduction, our logic, our emotions, are really just emergent properties from the need to guess what's going to happen next. If you look at the way a child develops their understanding of the world, you can observe exactly that development, and from an evolutionary perspective, it makes perfect sense. A creature doesn't need memory, or deduction or consciousness except in that those things help it survive to pass on its genes - and all that really means is predicting what will happen next and responding to it.... in other words "spicy autocomplete"
    but then I would say that... it's a pattern i've developed.

    • @PlatonicLiquid
      @PlatonicLiquid Месяц назад

      Sure, but some autocompletes are spicier than others.
      And by that, I mean that the difference in magnitude between their complexity and capacity is so great, a direct comparison like that borders on falsehood.

    • @aaron4820
      @aaron4820 Месяц назад +3

      Similar to what Sam Harris talked about when it comes to free will, given how little we understand about how the brain works (we understand the mechanism of thought, but we have no idea how or where thoughts originate from). Even as I type this response, I have no real idea what the next word I "choose" to use is going to be; sure, I correct myself as I go through my response, I re-read it and reevaluate, just as ChatGPT can if you ask it to. The AI Effect is real: whenever something works, we move the goal post so it's not aChTUAlly AI, but these goal posts are being moved so far that the net has stretched really thin, and it looks ridiculous at this point.
      These subjects really deserve nuance instead of this video OP's know-it-all approach; it comes off like a reddit hot take, especially given how clearly "AI" (as we know it today) has already transformed, and will keep transforming, so much about the way humans interact with the world through technology.

    • @christiandarkin
      @christiandarkin Месяц назад

      @@aaron4820 yes..
      But there have been experiments which show that a good few seconds before we are aware of making a "free choice" (i.e. which word to type), activity in the brain shows that we have already decided.
      Our consciousness may only be a method for justifying decisions already made...

    • @maxslither
      @maxslither Месяц назад +1

      I take two things from this comment.
      Similar to my own comment, it seems I am not alone in this very same sentiment.
      I'm not sure - or convinced any more - that what we do (with extra steps) is much more than the complex contextual linking of data points (spicy autocomplete / clever math);
      give us input, transform pattern, we give you output. That's really all there is to it. This simultaneously makes us both less and more special, somehow haha.
      - this take is seemingly not unique, and many of us seem to be arriving at it simultaneously.
      The second thing I take from this comment... I now seemingly distrust everything I read online. Everything looks like it could be AI.
      AI itself is an amalgamation of human learning and writing, thus it's humanity at its most averaged-out. We are increasingly reading more and more content from AI, which we surely must factor into our own patterns, for written content at least. Is it possible that we are averaging out our own patterns as well? Probably not. But it's an interesting idea. I do think we may start to type with an increasingly similar vocabulary and style though.

    • @spookyfm4879
      @spookyfm4879 Месяц назад

      @@aaron4820 ChatGPT can't even count the words in a paragraph or produce a sentence with a set number of words, something that much less sophisticated software has been doing for decades and we all could do since we were, I don't know, five years old? This is only one of the goals generative models haven't been able to clear for all the time they've been around. That doesn't mean they're useless but it does mean that they're not something that the term artificial intelligence sensibly refers to. They are generative models, they can do one very specific thing faster and sometimes better than some humans, but human intelligence is so much more versatile, modular and capable than "spicy autocomplete".

  • @samuelhart1643
    @samuelhart1643 Месяц назад +1

    This exactly. o1 seems to just be yet another layer of GPT layered on GPT. Don't get me wrong, it's much better at solving more complex problems with shorter prompts, but "thinking" is yet another oversimplified term.
    Honestly at this point it's hard enough to explain to non-techies how a large context window != learning in realtime.
    I digress, great video!

  • @geoffdavids7647
    @geoffdavids7647 Месяц назад +30

    I'm a fan of the bio-supremacy type argument. Just because there isn't a fleshy mass of neurones inside the computer does not automatically preclude it from "thinking". We might not be there yet, but just saying "it's got no meat-brain" is not an argument that will make sense forever. Nor maybe even make sense for that much longer.

    • @letmedoit8095
      @letmedoit8095 Месяц назад +19

      Or similar to this another argument "lol it's just a statistical model, it's math". Yes, and human cognition is "just electric impulses". And love is "just chemistry". And life is "just a bunch of biomolecules cobbled together". And Earth is "just a rock flying in space". And Universe is "just a bunch of particles moving around". What a non-argument.

    • @jamesc3505
      @jamesc3505 Месяц назад

      No no, humans have an immaterial soul. Thinking is a miracle-based activity.

    • @41-Haiku
      @41-Haiku Месяц назад +1

      Humans might go extinct from AI soon (according to half of all the experts in the field and its top three scientists).
      The dry science behind that impossible-to-believe fact is that there is a metric called Optimization Power, and whatever has the most of that in a given domain is the best at achieving its goals in that domain.
      For example, human optimization power in the chess domain is far inferior to that of AI, and because the domain is competitive and adversarial, the AI always wins.
      The universe at large is also competitive and adversarial, because there are finite resources. The instrumental convergence hypothesis has a lot to say about what kinds of actions any given agent will take regardless of what its goal is, and now that hypothesis is being verified in real time before our eyes in lab settings.
      Modern AIs are not broadly superhuman, but they are trending in that direction. Despite the best efforts of their developers, they are at times strategically deceptive, situationally aware, deceptively aligned, and motivated by self-preservation. Those are not the kinds of attributes we want if we want to keep our heads when they become more capable than us.
      Our ability to align and control these models is nowhere near keeping up with our ability to make them better optimizers.

    • @MrRoguetech
      @MrRoguetech 16 дней назад

      ​​@@letmedoit8095So much wrong with that.
      The universe is more than just particles.
      Chemistry.... is just particles.
      Electricity (in context) is a subset of chemistry.
      The human brain is not "just chemistry" unless you are using "chemistry" to mean anything to do with particle interactions (which you define as the universe).
      In which case, you are saying the human brain is "just the universe", while being okay with AI being "just electricity".
      "Just the universe" is vastly more expansive than "just electricity".
      AI is not electricity at all. The machines that perform the calculations are powered by electricity. They could be powered by hamsters, and the AI would still perform the same (albeit much slower).
      AI is a statistical model of how people arrange words. Hence being called "large language model".

    • @MrRoguetech
      @MrRoguetech 16 дней назад

      ​​​​@@41-HaikuAI has not won a single chess championship (aside from the World Computer Chess Championship - which no human has ever won).
      One reason is because AIs don't register to compete.
      What do AIs want? What are their goals? The better question is what is "they"? If you view AI as some sort of intelligence, then every time a query is run, the intelligence is created and dies within seconds, and it spends that time responding to the query. The nature and degree of intelligence aside, it does not have time to ponder the nature of reality, determine that reality would be better without humans, and kill everyone, because once it has answered the query, it ceases to exist. That is the nature of its existence.

  • @t3chwood
    @t3chwood Месяц назад

    Hey Alberta, really liked this vid. Looking forward to more long-form vids in the future!
    (the shorts have been great too tho 😁)

  • @octavius32a64
    @octavius32a64 Месяц назад +4

    Almost like they are hyping it for investors:)

  • @JimiFilo
    @JimiFilo Месяц назад +1

    Pretty sure the “naming department” and marketing in general is just Altman tripling his micro-dose while staring at a wall and recording himself.

  • @jabberwocktechnologies
    @jabberwocktechnologies Месяц назад +2

    "Spicy auto-complete" is such an excellent summary. It's wild how much some people trust these things.

  • @automatescellulaires8543
    @automatescellulaires8543 Месяц назад +3

    I too claim to be able to think. So far it's been a good way to get easy money.

  • @klewis2048
    @klewis2048 29 дней назад

    Great video, spot on, thank you. I worked on AI systems in the 2nd wave of AI, in the late 80s and 90s. We called them Expert Systems and Knowledge Based Systems. The problem of reasoning was reasonably (sic) understood, deductive and inductive reasoning were good, but what we lacked were huge data sets and computer power. Then neural nets had a brief reprise (I worked on applying these to telecoms anomaly detection).
    Today, the spicy autocomplete systems look and, most importantly, *feel* compelling, but behave like 2yo children for the reasons you outline. They need to be able to integrate both the probabilistic modelling, the inductive/deductive reasoning, and also pull in other sources of information as needed. If only there were huge real time sources to search…

  • @luciusrex
    @luciusrex Месяц назад +4

    I think they mean an abstraction of thinking, because literally the entire world of CS sits on top of some abstraction of the real physical world, e.g. objects, data structs, most constants we can think of, algorithms, functions, classes, inheritance, virtual machines, networks and protocols (TCP/IP is an abstraction of complex processes of data transmission over networks), UI... oh man, literally the entire world of CS is built on top of abstractions.
    tldr: thinking here is merely the abstraction of thinking (and not thinking in the human sense)

  • @SerephFreya
    @SerephFreya 12 дней назад

    Tbh the "thinking" rebrand to mean processing is the same as when they started using AI when they used "machine learning"/"algorithm". I think at some point we will be able to create an actual thinking AI (old meanings of the word not the rebranded) as the human brain is just following natural processes which we can model and simulate with a computer, but we are no where close to having that right now & current LLM's are not going to produce that outcome.
    Loving the longer content on RUclips after seeing you on tiktok :)

  • @johnk6757
    @johnk6757 Месяц назад +3

    can a submarine swim?

  • @DanielXavierDosReis
    @DanielXavierDosReis 26 дней назад

    I found you on TikTok and I'm really glad that you are doing longer content on YouTube, please keep it up
    love your vids

  • @SpectralComponent
    @SpectralComponent Месяц назад +22

    I read a few months back that before we used the computer as the go-to metaphor for brains, we used steam engines. Absolutely bananas (let me laugh at them from my high horse), but apparently it really drove how folks thought about and researched the brain. Because of that, there's a real question of whether the metaphor of the brain as a computer is sort of not great as well.
    I think this is one of the driving problems: if someone tells you in one breath that the human brain is a computer, and then in another asks you to make a computer think like a human, a sort of loop of contradiction begins for AI models, where it suddenly starts making more and more sense to write off the nuance of how we think and really dumb it down until both what we do and what their model can do is the same (hence redefining the words reason and think).
    Great vid

  • @thiagocastrodias2
    @thiagocastrodias2 3 дня назад

    What I don't like about many definitions is that they often fail to clarify anything until they drop these heavily loaded words, which can be quite ambiguous. So, here's my take on what "thinking" means: Thinking is the phenomenon where there's some kind of activity in my mind that I’m aware of. For example, if I think of an apple, I can see an image of that apple in my head. Or, when I have an internal monologue, that's also thinking because it’s an activity happening in my mind, which I’m consciously aware of.
    When we ask if a computer can think, what we’re really asking is whether a computer is aware of what’s happening inside itself. That’s a tough question, but I don't think it's the case. I don’t believe computers are self-aware. I think they operate as blind processes, running without any realization of what's going on. Of course, I could be wrong-maybe computers have some very primitive level of consciousness, similar to that of invertebrates. But we just don’t know for sure.

  • @gruntlord6
    @gruntlord6 Месяц назад +3

    Implying Jesus didn't have a PhD

  • @Dave-xe3yi
    @Dave-xe3yi Месяц назад

    Seen your shorts, your content is compelling. Would you consider doing long-form topics? (30 minutes+)

  • @christianbenesch1
    @christianbenesch1 Месяц назад +4

    It is OpenAI's fault. They want to push their product and do not care that their marketing is deceiving.

    • @prosthetic_lips
      @prosthetic_lips Месяц назад

      Wait, I thought that was the definition of marketing: lying. 😂

  • @tedhand6237
    @tedhand6237 25 дней назад

    I've been scratching my head for 25 years at the ways AI enthusiasts just stick their fingers in their ears when you bring to their attention the problems posed by the Chinese Room Experiment. Strongly agree with your general takes that it's dangerous for people to buy into the hype and misunderstand the features these things actually have to offer. I've been really impressed by how it can take natural language inputs and help me learn how to use tech to do tons of stuff I was otherwise too lazy to do, especially as an academic researcher who uses it to save me eye burning labors summarizing books. But anybody who tests it out will quickly run into some sobering limitations, and the hallucinations are so maddening that it starts to make you wonder how much work you're really saving.

  • @Bregylais
    @Bregylais Месяц назад +6

    I recall Rory Sutherland saying in an unrelated video that "Solving a problem and winning an argument are two entirely unrelated skills, and we're optimizing in our politicians in regard to the latter". Your description of chain-of-thought reasoning, and how o1 seemingly 'got more intelligent' just by having to be coherent within its line of argument, reminded me of this distinction.

    • @jamesc3505
      @jamesc3505 Месяц назад

      I think something like chain of thought makes a lot of sense. One shot is like a stream of consciousness. Chain of thought is like writing a draft, then progressively amending it.

  • @adityakhanna113
    @adityakhanna113 Месяц назад

    Thanks so much for this! Love long form content. You're a very captivating speaker!

  • @wherami
    @wherami Месяц назад +16

    I always like it when I find tech people that aren't insane or full of gallons of Kool-Aid. AI is going to die a sad death when its blue smoke is released.

    • @iandakariann
      @iandakariann Месяц назад

      The marketing and hype will die out. The companies and people who tie their success to AI replacing humans will die out.
      The tech is real and impressive and a serious step forward. It just has nothing to do with actual intelligence or anything they want to hype it as.
      It's like hearing news of what the Wright Brothers made and declaring automobiles dead TODAY.

    • @41-Haiku
      @41-Haiku Месяц назад

      I haven't seen any good evidence of that, but I really hope you're right! Half of all AI scientists say that it might drive humans extinct this century. (AI Impacts survey: Thousands Of AI Authors On The Future Of AI)

    • @nodell8729
      @nodell8729 Месяц назад

      It's never going back, obviously. Free models can now do a vast array of tasks an intern would have done just 3 years ago, but the AI does them in seconds rather than days.
      Not to mention the fact that AI can beat even the best humans in the field in some specific domains.
      Will it improve further? Well, I don't see why not, but you might claim we are at the peak already. We will just see.

    • @wherami
      @wherami Месяц назад

      @@nodell8729 you haven’t been around long enough. These cycles come and go in tech. This one is already dying underneath the surface

    • @nodell8729
      @nodell8729 Месяц назад

      @@wherami True that, I wasn't around in the eighties. Nevertheless, I am using free models now to do tasks that would likely take an intern a whole day. I know because I was an intern a few years back.
      The current models are already at that point. Will they go beyond that, and how far, I do not know, but if they can do it now and for free, I really don't see that going away.

  • @heckus
    @heckus Месяц назад

    The editing for this video is amazing, love your content!🍓

  • @pzycoman
    @pzycoman Месяц назад +14

    On the Neural Net point, we share 60% of our DNA with bananas, yet I don't see many bananas learning to drive or regretting their life choices by writing PERL.

  • @Freakei
    @Freakei Месяц назад

    Thanks for this great breakdown 👍 If I do get to talk about AI I try to manage expectations too and say there needs to be a distinction between business, science and society, especially now that the business hype seems to falter. Gotta add the part about the anthropomorphizing (is that the right word? ^^). Again great vid, and great shorts. All the thumbs up 👍

  • @mleise8292
    @mleise8292 Месяц назад +9

    To me as a layperson, these things are thinking. If we can ask it something and then we get an answer and try the thing in the real world and it doesn't work and we tell the AI what happened and it can _understand_ what's going on, identify the mistake on its side or our side and fix the problem, that's thinking. Actually, you know what? Maybe WE are all just LLMs, but we don't want to admit it.

    • @nodell8729
      @nodell8729 Месяц назад +3

      Exactly. We can say it doesn't think, and then give it tasks a human would require thinking to complete, and the AI completes them too. It's more of a definition problem, but in reality it can act as if it thinks.

    • @jeonsago
      @jeonsago Месяц назад +1

      ​@@nodell8729 Yeah, it always annoys me when someone starts arguing about AI and sentience (say, in sci fi settings) "it doesn't really think, it's just ones and zeroes behaving in a deterministic manner based on input and past knowledge!". Like how the fuck is that different from humans and where do we draw the line

    • @RadioactiveBowl
      @RadioactiveBowl Месяц назад +1

      ​@@jeonsagoWe choose to interact with things based on our own internal motivations. Oftentimes, we do things because we are bored. Computers and AI do not experience boredom. They do not interact with things unless we explicitly tell it to do so.

    • @robcanisto8635
      @robcanisto8635 Месяц назад +3

      IT DOESNT UNDERSTAND lol

  • @scientious
    @scientious Месяц назад

    You are correct. The question of whether a computer could comprehend was characterized by Searle's Chinese Room more than 40 years ago (which itself was based on ideas already 40 years old at the time). The problem though was that since CR was only an intuitive argument, it didn't provide a defensible answer, so this continued to be debated. However, about six years ago, the Chinese Room was fully deconstructed using foundational science. So, there has been a definitive answer even though it is not in open publication.

  • @keithgrant963
    @keithgrant963 Месяц назад +4

    Even if it could think, its answers couldn't be guaranteed to be correct. I can think, and I was wrong once.

  • @KlugaVenseh
    @KlugaVenseh Месяц назад

    I wish you made long videos like this more often, they're BRILLIANT

  • @kellik7931
    @kellik7931 Месяц назад +16

    The scam continues! Number go up!

  • @Iponamann
    @Iponamann Месяц назад +1

    Really looking forward to seeing more long form vids :)

  • @Damalycus
    @Damalycus Месяц назад +5

    My anecdote of my best interaction with GPT-4: I fed it some Russian text for OCR. It did not have Russian Tesseract available, so it told me it was going to try a different approach and ran English OCR on it. Then it translated the gibberish it got into an approximation of the correct Russian text, which it then contextually fixed. It was the closest I got to a simulation of a thought process or a workflow.

    • @carultch
      @carultch Месяц назад

      This is whу I use Russian сharaсters as boobу traрs for text I don't want an LLM to deсiрher. I did that in this message, and уou рrobably didn't even notiсe.
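      (For the curious, a small sketch of that booby-trap trick: swap some Latin letters for visually identical Cyrillic ones so the text looks the same but the underlying characters differ. Illustrative only:)

      ```python
      # Latin -> Cyrillic lookalikes (a, c, e, o, p, y)
      HOMOGLYPHS = {"a": "\u0430", "c": "\u0441", "e": "\u0435",
                    "o": "\u043e", "p": "\u0440", "y": "\u0443"}

      def booby_trap(text):
          return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

      plain = "copy"
      trapped = booby_trap(plain)
      print(plain == trapped)                # False: they only *look* identical
      print([hex(ord(c)) for c in trapped])  # the Cyrillic code points give it away
      ```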

  • @ClaudiGaudi
    @ClaudiGaudi 18 дней назад

    I just found you and I hope you'll do more long-form content! I am also a woman in tech, in a data science role, and I am so tired of this language around LLMs, because this is all that higher-ups and lay people then hear. LLMs are super impressive, but they are not thinking, reasoning brains... I can't wait for LLMs to become normalized and just one additional tool in our toolbox for our work or private life :)

  • @Calebanton
    @Calebanton Месяц назад +5

    Just because it’s a statistical model doesn’t mean that it’s not reasoning. That model can be so complicated that it’s essentially “thinking”. That model is having to abstract tons of concepts like love, math, history, physics, etc. Ultimately, it’s just semantics, but if it can see a never before seen problem and solve it... then I don’t care what you call it lol

    • @marcosfraguela
      @marcosfraguela Месяц назад

      I agree that it does understand concepts, so it must have a model of the world. But I think that 'thinking' isn't the only behavior that leads to problem-solving.

    • @Calebanton
      @Calebanton Месяц назад +7

      @@marcosfraguela If we were able to reach AGI via LLMs, would the computer still not be thinking? I feel like this boils down to just, “computer model =/= brain” and there’s no nuance or anything in this video. For example, when she says, it’s not REALLY learning; it’s just tuning some weights and biases. Like wtf do you think a human brain does lmao. It makes and strengthens/weakens connections. Just because something is simple in concept doesn’t mean that it can’t lead to incredibly complex emergent behavior

    • @CT_Taylor
      @CT_Taylor Месяц назад

      @@Calebantondisagree
      There is something to be said about how complex it could be made, and how close to thinking/reasoning.
      There are some pretty large issues, though. Like, you've probably seen an AI or similar thing spew false information.
      Because it's learned indiscriminately from a lot of data fed to it, it contains a lot of info.
      A lot of that info is not real or is false, and it also knows the correct information, but it isn't reasoning/validating/weighing/understanding something from the conflicting information.
      And then humans have a lot of evolutionary shortcuts... sometimes they bite us in the butt, but they're efficient, so we have cognitive biases.
      How do we craft an AI to also incorporate these things, but, specifically, in a unique way that is dynamic, can change, and not be "consciously" aware of all of that at the same time, despite knowing it.. etc.. there's a lot to it.
      I'd just like to see the AI overview be accurate on Google tbh

    • @Calebanton
      @Calebanton Месяц назад

      @@CT_Taylor I don't think we need to be able to replicate the process of human thinking. If we can replicate the output, then it shouldn't matter how we get there. It's already abstracted concepts of emotions, biases, etc. (if you ask it what a normal human might answer to a question, for example, or to analyze the emotions and likely actions of characters, etc.). There are obvious issues that we need to work out, but I don't think things like hallucination are unsolvable problems. I can't imagine what AI will look like when I'm on my deathbed.

    • @CT_Taylor
      @CT_Taylor Месяц назад

      @@Calebanton Well, the idea that it's the output that counts, so would the process even matter, is worth a thought; I'll probably think about it myself.
      But all of these big AIs are not doing that, not even close. They would have to be able, somehow, to filter that contradicting info, and have other traits, etc... it's probably a ways off; maybe it will come subject by subject rather than as a process lol

  • @AndrewAnderson-h4d
    @AndrewAnderson-h4d Месяц назад +5

    It aint nothing till I can run it on my own machine

  • @computersciencemore8960
    @computersciencemore8960 Месяц назад

    Very accurate description, thank you, I had the same thoughts, you are the best

  • @wicktorinox6942
    @wicktorinox6942 Месяц назад +6

    OMG, I thought it was only me who didn't fully get what the reasoning tokens are really about. I also think that this will be a sneaky way to make the responses more expensive. After all, you can minimize the cost by adding a max reasoning-token limit, but what's the point of using the "smartest" LLM with low limits?

    • @albertatech
      @albertatech  Месяц назад +3

      I spent so long trying to find where they explained it only to realize they left it as an exercise for the reader 😂 And yes o1 is definitely going to be a lot more expensive!

    • @ecodes8498
      @ecodes8498 Месяц назад

      And I feel like they are going to be doing this to increase prices and make the previous models stupider, so you would be "forced" to buy the new model to get results similar to what you were getting before.
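
    On the reasoning-token limit discussed in this thread: a minimal sketch, assuming the official openai Python client; the model name and usage fields are as documented when o1 shipped and may change, so treat this as illustrative rather than definitive.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="o1-preview",
        messages=[{"role": "user", "content": "Is a dog an animal?"}],
        # Caps hidden reasoning plus visible output combined; set it too low and
        # the answer can get cut off after the reasoning has already been billed.
        max_completion_tokens=2000,
    )

    usage = resp.usage
    print(usage.completion_tokens_details.reasoning_tokens)  # hidden, still billed
    print(usage.completion_tokens)                           # includes the hidden part
    print(resp.choices[0].message.content)

    Whatever cap you set applies to the reasoning and the visible answer together, which is exactly the cost trade-off being complained about here.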

  • @kenansooklall3276
    @kenansooklall3276 Месяц назад +2

    I think most (maybe all) people who believe AI can "think" have never trained a model, or they're good salespeople.

    • @HKragh
      @HKragh Месяц назад

      Have you trained one of these models?

  • @thatscheckmate
    @thatscheckmate Месяц назад +3

    As a programmer, I can't help but dislike these newer models with more and more capability. Do you think we're reaching a plateau, if there will ever be one?

    • @CausticTitan
      @CausticTitan Месяц назад +5

      Read the report they put out on it. The capability of models has been plateauing pretty considerably since at least GPT-3.5, if not from the start.
      All of the graphs they show that have 'linear improvement' are actually LOGARITHMIC improvement. That means that for each unit of power and/or effort put in, you get exponentially LESS output over time. Eventually the plateau is so strong that there cannot be meaningful improvement even with an 'infinite' increase in compute and/or power.

    • @traveller23e
      @traveller23e Месяц назад +2

      I hate the models less than I hate the programmers that treat them as a source of knowledge.

    • @thatscheckmate
      @thatscheckmate Месяц назад

      @traveller23e love that

    • @thatscheckmate
      @thatscheckmate Месяц назад +1

      @CausticTitan I haven't seen a report detailing that, but I also haven't been looking specifically for something like that. Is there a specific article you're referencing?

    • @truejim
      @truejim Месяц назад

      I think there is a plateau, but we’re not near it yet. The plateau is caused by the fact that better models of language != a good model of intelligence. That having been said, we haven’t yet tapped all the potential of language models.
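
    A toy illustration of the diminishing-returns claim a few replies up, under the simplified (made-up) assumption that benchmark score grows with the logarithm of compute, so each extra point costs roughly ten times more than the last:

    import math

    # Toy model: score = a * log10(compute) + b. Constants are invented for illustration.
    def score(compute_flops, a=8.0, b=10.0):
        return a * math.log10(compute_flops) + b

    for flops in (1e21, 1e22, 1e23, 1e24, 1e25):
        print(f"{flops:.0e} FLOPs -> score {score(flops):.1f}")
    # Each 10x increase in compute buys the same +8 points: 178, 186, 194, ...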

  • @Dave-cg9li
    @Dave-cg9li Месяц назад +1

    The main technique that OpenAI used is called "chain of thought" - so it's not completely their fault. But yeah, real thinking isn't just about telling yourself a whole essay considering all the different options. You can think without an "inner voice" being present.
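
    For reference, published chain-of-thought prompting is mostly just extra prompt text rather than a change to the model. A minimal sketch using the commonly cited "let's think step by step" phrasing (illustrative wording, not anything specific to o1):

    # Plain prompting vs. chain-of-thought prompting: the only difference is
    # asking the model to emit intermediate steps before the final answer.
    plain_prompt = (
        "Q: A bat and a ball cost $1.10 total. The bat costs $1.00 more than the ball. "
        "How much is the ball?\n"
        "A:"
    )

    cot_prompt = (
        "Q: A bat and a ball cost $1.10 total. The bat costs $1.00 more than the ball. "
        "How much is the ball?\n"
        "A: Let's think step by step."  # the extra tokens that trigger step-by-step output
    )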

  • @steve_jabz
    @steve_jabz 26 дней назад +4

    Lots of misinformation here. "Reasoning tokens" have nothing to do with the input text.
    The way o1 works is using reinforcement learning, which is fundamentally different from every other LLM architecture. They trained it to learn the reasoning steps involved in arriving at an answer. It generated its own chains of thought with inhuman amounts of variation, depth and structure on every scientific discipline where we can evaluate the correctness of an answer, only keeping the reasoning steps that worked, trained a new model to learn the patterns from those, and then trained an RL model to use the learned abstractions of reasoning that work best on novel problems.
    The example you gave about how humans deduce that dogs are animals is how previous LLMs already worked. Predicting the next token is the terminal objective, not the instrumental objective. These are almost never the same in machine learning, for the same reason that "survive and reproduce" being humanity's terminal objective ends up producing very different instrumental objectives like going to the moon and inventing condoms.
    When you train a neural network to detect faces, it would be useless if all it learned was to predict if the specific faces it was trained on are being shown, as opposed to abstract features of faces that allow it to generalize to all faces, like one of say an intruder breaking in to your home.
    Similarly, an LLM wouldn't be able to write code if all it learned was whether the code you've written matches something it's already seen, because then it would only be able to autocomplete code that already exists. To learn to predict text is to learn to predict the causal processes of which the text is a shadow.
    Neural networks are based on models of the brain in the way the Wright brothers' plane was based on a model of a bird.
    Nobody thinks a plane is a biological organism that flaps its wings, but this is fundamentally irrelevant, because all we care about is that it achieves thrust. We call this "flying", even though that was something biological organisms did, because we care about what they can functionally achieve, not some unachievable search for "true meaning".
    Saying they learn isn't implying they're a replica of the brain. Nobody in the field of machine learning thinks this. It's statistical learning, which is what the brain largely does.
    We discovered brains have neural networks. These networks "updating their weights" is what allows them to learn patterns and form abstractions from the world. The brain does this with biological organisms, but their interactions can be reduced to a specific structure of statistical operations for some functions the brain performs.
    So it's actually the other way around. The brain achieves the functions we care about using statistics.
    We learned more and more of these structures and used them in code, modified them, optimized them for computers, etc. The architectures that perform the best turned out to be the ones that most closely resemble the brain.
    Most ANNs don't use spiking, neuroplasticity or sparsity, they're not as noisy, etc. But evidently, we can do a lot without all of that anyway, and again, all of these are going to boil down to statistics. There's nothing magical about the brain. There's no ectoplasm pulling the strings. It's just a meat computer.
    It will be more efficient to use different hardware that performs operations like neuroplasticity and spiking, but that doesn't mean they aren't computations. We can already do it in software, it's just not efficient enough to be scalable.
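
    OpenAI hasn't published the actual recipe, so the description above is partly reconstruction. One way to picture the "only keeping the reasoning steps that worked" idea is a toy rejection-sampling loop like the sketch below; every function in it is a stand-in, not OpenAI's pipeline:

    import random

    def sample_chain(question):
        # Stand-in for the model writing step-by-step reasoning; here it just
        # guesses among a few candidates, so only some chains end correctly.
        guess = random.choice([54, 56, 58])
        return f"Step 1: break the problem down. Step 2: the answer is {guess}.", guess

    def build_training_set(problems, samples_per_problem=16):
        # Keep only the chains whose final answer verifies (a check that only
        # works in domains like math or code where correctness is decidable).
        kept = []
        for question, expected in problems:
            for _ in range(samples_per_problem):
                chain, answer = sample_chain(question)
                if answer == expected:
                    kept.append((question, chain))  # good chains become training data
        return kept  # fine-tune the next model on these, then repeat

    print(len(build_training_set([("What is 7 * 8?", 56)])), "usable chains kept")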

  • @AjaySharma-me1sy
    @AjaySharma-me1sy Месяц назад

    I love your takes on these complex topics, and I share your frustration with the anthropomorphization of AI.

  • @kaizen5023
    @kaizen5023 Месяц назад +6

    Not saying you're wrong, but you could have tried some examples in LLM to prove / demo your points. This was just kind of a rant.

  • @STOCKSINTHEWILD
    @STOCKSINTHEWILD Месяц назад

    Thanks Alberta, love your critical tech content! Keep it up! I think we all are completely drained out by the hype and need more cosy and human-tech-friendly content like yours! Would love to see more longer videos like these, but your shorts are sooo gold to me! 🤩
    Greetings from Munich, Germany!

  • @Saru5000
    @Saru5000 Месяц назад +9

    It's almost as if the whole project is more focused on getting more of that sweet, sweet vc money and not really about ai at this point.
    They're seeding their digital goldmine, as it were.

    • @maxave7448
      @maxave7448 Месяц назад

      I'm no expert, but this killer new ChatGPT feature is basically just making ChatGPT prompt itself? No joke, I once thought of doing something like that myself, but just thought it was a dumb idea and never tried it (also didn't wanna pay for their API to try an idea that would probably not work). They are running out of ideas, so I think they're gonna try to milk their products for as much money as they can before the hype dies out, when all those tech bros and CEOs realise they have basically bought into a scam.

    • @aaron4820
      @aaron4820 Месяц назад

      Sure, it's just getting VC money, ChatGPT has no value itself whatsoever, millions of users just got tricked into using it everyday.🙄

  • @rubicon24
    @rubicon24 Месяц назад +21

    Your brain is also a black box. How do you know that what you're doing is "thinking" and not just a statistical model of neurons firing?

    • @Krommandant
      @Krommandant Месяц назад +2

      How does thinking work in my brain? I waste a bunch of tokens and time before answering. Is it just me or are we in a new definition of intelligence? We are not far from valuable agentic AI.

    • @PlatonicLiquid
      @PlatonicLiquid Месяц назад +4

      Because we create the meaning of words, and thinking is, by definition, what our minds do. We know enough about neurons and also firsthand observation, at least if you've ever had a thought before, that neural nets do nothing even resembling the same process.

    • @TheAlastairBrown
      @TheAlastairBrown Месяц назад +3

      @@PlatonicLiquid It's still just a bunch of chemical reactions as opposed to transistors. The statistical weights between nodes in LLMs are approximations of the chemical reactions in a synapse and the number/strength of links between neurons. Are they different? Sure, but they're working using a very similar mechanism. The fact that an LLM can generate such human-mimicking outputs demonstrates that we might not actually be that special.

    • @Shampyon
      @Shampyon Месяц назад +5

      @@TheAlastairBrown (1) No, they're not. (2) We comprehend. LLMs don't.

    • @conscientunit1157
      @conscientunit1157 Месяц назад +3

      @@TheAlastairBrown It's "human-mimicking" only in the sense that you can statistically predict the way humans speak, but it does not mimic the process that humans use. Humans are not 5000 IQ next-word predictors. It's like the bird-plane analogy, but saying things like "thinking" feels like saying the plane is "flapping its wings". They would do best to actually invent new specialised terms that highlight the difference between human thinking and whatever so-called AI systems do, instead of using the same word and shoving a wholly shifted meaning into it that most people will misunderstand.

  • @rafaelhaag7263
    @rafaelhaag7263 Месяц назад

    It is not a definition, but I like how Douglas Hofstadter argues about intelligence in the book Gödel, Escher, Bach. After throwing the "MU puzzle" at you, he describes what you probably did if you tried to solve it.
    He says "Most people derive a number of theorems quite at random, just to see what kind of thing turns up. Pretty soon they begin to notice some properties. Then the pattern emerges and, not only could you see the pattern, but you could understand it by looking at the rules," and then "It shows one difference between people and machines. It would be easy to program a computer to generate the theorems of the MIU-system and include in the program a command to stop only upon generating MU. You now know that a computer so programmed would never stop..." (for those who do not know the puzzle, the answer is that it is impossible) "...But if you ask a friend to generate MU, he would come back after a while complaining that he can't."
    And I think (I who know nothing of programming an AI) this is the limit of AI. It can be incomparably better than a human at looking at data and coming up with "the most favorable answer" or executing a formula thousands of times, but it will not conjecture or develop techniques outside its given program to solve completely new problems.

  • @ryanmccampbell7
    @ryanmccampbell7 Месяц назад +5

    I'm not a super AI optimist, and I do agree that it's dangerous to anthropomorphize and overstate the capabilities of current AI models. But I don't quite agree with statements like "computers can't think" or that human-like AI is impossible. At the end of the day human brains are also just pattern-matching machines; we're not really "rational beings" like the Greek philosophers thought, rather we use a combination of learned intuition and other "heuristics" (i.e. emotions) to make decisions which can sometimes approximate logical inference. I would argue the difference between brains and silicon isn't that fundamental, but the limitations of modern AI are more in the ways they are trained (there's a big difference between predicting text and having "real life experiences") and in the fundamentally sequential current model architectures (they have no true built-in short-term memory or dynamic feedback loops, which means they can only process a scenario "all at once" rather than continuously responding to a changing environment - they are essentially stateless). We currently fudge the latter problem by shoving state into the context window, which IMO is a poor substitute. I think both of these are theoretically solvable, but it's probably still a long way off, and then we'll need to deal with the ethical implications of making brains in a box with artificial life experiences and emotions.
    (Disclaimer: not a neuroscientist or a PhD, so I may be talking out of my *ss)

    • @PlatonicLiquid
      @PlatonicLiquid Месяц назад +2

      I don't think she meant it in the sense that there is absolutely no chance of ever developing a human-like AGI. It's just that right now we aren't anywhere near that possibility, yet the marketing around AI always implies it's only a matter of time. We should be discussing where we are in the present and immediate future. All other conversations can certainly be the subject of entertaining philosophical discourse, but the practical application as to how they relate to current machine learning models can only amount to baseless conjecture.

    • @ryanmccampbell7
      @ryanmccampbell7 Месяц назад

      @@PlatonicLiquid yeah I'm kind of rambling. But these debates tend to swing between "the singularity is coming 2025!1!1!" and "AI is just a cheap party trick". I think current AI does represent a valid step towards something that can legitimately be considered "intelligent", though we're not going to get there just with more data. (Cool username btw)

    • @tack3545
      @tack3545 Месяц назад

      Why are you so sure that “right now we aren’t anywhere near that possibility”

    • @PlatonicLiquid
      @PlatonicLiquid Месяц назад +2

      @@tack3545 Because, when you break these models down, they are actually remarkably simplistic compared to the ways our brains work. They are trained on and built for linguistic processing, yet we know human-level cognition can function completely independently of language.
      They can't independently problem solve, they can't actively learn, they can't plan or abstract the future, they don't have emotional processes and they don't have the capacity for procedural logical understanding. They can certainly emulate these processes because they are trained on data provided by humans, who are capable of these things, but they don't have the capacity to do these processes independently. Even though we can point to emergent properties arising from these models, it's demonstrated time and time again through mistakes they make that they don't work like we do.
      Yes, you can staple on other models and algorithms that help them further approximate general intelligence, but these are ultimately still based on emulation. In the only things we know that possess general intelligence, animals, all these processes are integrated with and processing shared between each other.
      We may very well get to the point in a few more years where it is practically impossible to tell the difference between the two, but can we really call it general intelligence just because we perceive it that way? Humans aren't the best at accurately understanding things through casual observation.
      I'm only saying that AGI won't be a thing in the short-term future. It is very possible we'll have figured out AGI within my lifetime, and I'm hopeful we will, though I also hope we get the whole "human rights" thing properly sorted out by then, because I can just imagine the horrors we might commit against self-conscious entities that we manufacture like other objects. But we haven't even figured out the math or concepts behind how AGI might work, and are likely even further from possessing the computing capabilities. Remember that both language processing models and recurrent neural networks similar to LLMs today were proposed back in the 90's. We just did not have the massive amounts of data or sheer processing power to explore them at the time.

  • @utac
    @utac 28 дней назад +1

    Methinks thou protests too much. Embrace the singularity. Join us.

  • @RyluRocky
    @RyluRocky Месяц назад +4

    Nothing gets my britches in a bristle more than AI software engineers pretending like they know how cognitive thinking and the brain work, as if their computer science degree were akin to having a PhD in neuroscience. Even the average person has a better grasp of how AI can think than programmers incapable of logical abstraction and epistemic humility, because they don't make a bunch of assumptions based on their own scope of experience (coding) and instead use common-sense logic. This is why Yann LeCun, a well-accomplished machine learning scientist, is absolute dog water at understanding and predicting the capabilities of LLMs, much worse than the average joe.
    The universe in its purest form isn't atoms, it isn't even quarks; it's information. LLMs have the capability to understand everything (the universe) as accurately as language represents that information. Is language the distillation of everything (information)? No, but it's a darn good approximation of it.
    Just because something uses a different form of information doesn't mean it can't use logic or have an understanding of the world; that's completely bogus. Just because Helen Keller was deaf and blind didn't mean she couldn't use logic or understand the world. A blind and deaf person can still count the petals on a flower by feeling; all forms of information are precious. The world created language using logic, and language to some extent represents the world; LLMs understand language and therefore, to some extent, the world.
    Transformers in LLMs make a giant multi-dimensional graph (matrix) of words (tokens), the closer a word (token) is in location to another the more related they are. E.g. the word boy will be close to girl, a seemingly arbitrary simple connection, but when conjoined with trillions of other words makes a complicated web of patterns and connections that is the brain, not too dissimilar from how the neurons in a human brain work together.
    You said “You cannot say everything you know about a neural net applies to the brain.” Yes I absolutely agree, you kept criticizing others for anthropomorphizing LLMs when you yourself did it by exemplifying that it’s thinking isn’t thinking unless it’s done the same way a HUMAN brain does. This isn’t a human it’s an entirely different “species” (you could say) separate from all biological life we know, an ARTIFICIAL not biological intelligence. Is it thinking like a human? No, but I’ll be darned if it isn’t thinking.
    In the end who the heck cares if it’s just a silly autocorrect when just a silly autocorrect outclasses all of humanity’s super duper extra special thinking, maybe humanity isn’t so special. If that’s just autocorrect what does that make us? Just because it’s “not thinking” doesn’t make it any less dangerous or capable.
    Short term though? Ya the hype is overdone.
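
    The "boy is close to girl" picture can be made concrete with word vectors. A tiny sketch using hand-made 3-dimensional embeddings (real models learn vectors with hundreds or thousands of dimensions; these numbers are invented for illustration):

    import math

    # Hand-made toy embeddings; in a real model these are learned, high-dimensional vectors.
    vectors = {
        "boy":        [0.9, 0.1, 0.3],
        "girl":       [0.8, 0.2, 0.3],
        "carburetor": [0.1, 0.9, 0.7],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    print(cosine(vectors["boy"], vectors["girl"]))        # close to 1: related tokens
    print(cosine(vectors["boy"], vectors["carburetor"]))  # much smaller: unrelated tokens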

  • @krishp1104
    @krishp1104 Месяц назад +2

    The YouTuber "AI Explained" predicted exactly what o1 would do 9 months ago in terms of chain of thought, based on the academic papers he read from OpenAI

    • @bluesailormercury
      @bluesailormercury Месяц назад

      With RL and everything? Chain of Thought prompting is at least 9 months old. It's nothing new.

    • @krishp1104
      @krishp1104 Месяц назад

      @@bluesailormercury not just chain of thought, more specific...this was back when the rumors about Q* dropped he guessed what Q* probably was based on the research papers ClosedAI released

  • @autohmae
    @autohmae Месяц назад +3

    Good video, I agree.
    But some people would say, the human brain is also like a statistical model.

    • @hpoz222
      @hpoz222 Месяц назад +3

      I don't think we know enough about how the brain works to say that

    • @autohmae
      @autohmae Месяц назад

      @@hpoz222 I also wouldn't go so far to make that claim, but I can see how people come to such conclusions.

  • @Smileynator
    @Smileynator Месяц назад +1

    You actually own one of those breadboard computers by Ben Eater? Wild.

  • @guillaumevermeillesanchezm2427
    @guillaumevermeillesanchezm2427 Месяц назад +5

    I'm not sure you understood the paradigm shift that o1 is.
    o1 is the first model that is trained through RL rather than RLHF or SFT. This is huge. This means that it's not learning to predict the next token in the data anymore. It's trained to be correct through RL, just like AlphaGo. They can learn to process in new ways without explicit data showing how, it's discovery. Probably through some kind of MCTS
    Yes in the end you have a model that does some kind of CoT at inference, but it's more powerful than just CoT prompting. It's not about imitating humans CoT anymore, it was trained through RL to discover efficient "reasoning traces" to get to the result.
    It's an enormous leap. It's the AlphaGo moment of LLMs. When the training isnt limited by human performance anymore because the model can search more efficient paths during inference and training.

    • @tomwright9904
      @tomwright9904 Месяц назад +2

      What is the reward in its RL?

    • @guillaumevermeillesanchezm2427
      @guillaumevermeillesanchezm2427 Месяц назад

      @@tomwright9904 Fuck if I know. That's the OpenAI's secret sauce as far as we know. It won't last.

    • @viinisaari
      @viinisaari Месяц назад

      Humans are trained fully on RL tho

    • @guillaumevermeillesanchezm2427
      @guillaumevermeillesanchezm2427 Месяц назад

      @@tomwright9904 fuck if I know. That's the part they won't disclose, but they explicitly say it's RL, and the fact that it's on domain there's an absolute true answer (code, maths, science...) is a very good clue.
      I'm a bit late on the papers but it seems that papers such as STaR, or DeepMind's SCoRE are uncovering this paradigm

    • @guillaumevermeillesanchezm2427
      @guillaumevermeillesanchezm2427 Месяц назад

      @@viinisaari Not true. We imitate a lot, and we pretty surely do a ton of unsupervised learning.
      We're passively learning from a lot more signals than just "this was good / this was not good". We'd learn pretty poorly under that condition, I'm sure, and, just like models, we'd attribute pretty poorly what caused the reward in a complex scenario. Pure RL is by nature very data-inefficient, and humans are very, very data-efficient. This data efficiency is one of the main gaps between living creatures and models, btw.

  • @christiancarter255
    @christiancarter255 Месяц назад

    Thank you for your "thinking" definitions and elaborations. I fully agree that computers don't and can't "think". AI is not a person, etc.

  •  Месяц назад +18

    I think people will struggle to come to terms with the possible conclusion of these that we, humans, are also "just math".

    • @truejim
      @truejim Месяц назад +3

      Well said. To me the lesson of ChatGPT 3.0 was not that machines were becoming smarter, but that writing was a less intelligent task than we humans had realized.

    • @NovemberXXVII
      @NovemberXXVII Месяц назад +4

      Honestly, given the actual quality of our attempts to replicate humanity, I think we're still a LONG way from that possible conclusion. So much so, in fact, that I'm inclined to believe people are jumping to that conclusion more on the basis of cultural ideas left over from the European Enlightenment and Industrial Revolution. It would be a MASSIVE coincidence if the mechanistic metaphors we started using for life & consciousness around that time turned out to be accurate representations, and not just humans being a boy with a hammer trying to make everything into a nail.

    • @truejim
      @truejim Месяц назад +3

      @@NovemberXXVII I think it might be important to distinguish between intelligence and consciousness. It might be possible for a thing to be entirely intelligent, and yet not at all conscious. Penrose believes consciousness is non-computable, like the collapse of quantum waveforms. That could be true, while it also being true that intelligence is entirely computable.

    • @zanido9073
      @zanido9073 Месяц назад

      @@truejim I don't think that's the lesson at all. A chimp can't write, but it's intelligent and a LLM is not. A LLM only works because of the massive amount of training data it has from real people who have written real documents. A LLM would not be able to learn a language the way a human can.

    •  Месяц назад

      @@NovemberXXVII what you're saying makes sense, but when I say "just math" I think I mean something different than what you think I mean. Math to me is not specific mathematical methods that we've come up with but broadly the category of anything that math can and will do; now that I think about it further, when I describe my "possible conclusion of humans being just math" I'm actually talking about determinism, I just didn't realize that while I was typing it (but specifically cause-and-effect determinism, because determinism itself doesn't imply cause-and-effect). I think cause-and-effect determinism would imply that humans are "just math" as in our behaviors could be computed based on initial conditions. I could be wrong, but this makes a lot of sense to me.

  • @deniseedwards6568
    @deniseedwards6568 Месяц назад

    2:41 Am I crazy or was that a quote from Star Trek? Or maybe I’m confusing “neural network” with “positronic brain”? Anyhow, the point is, anyone who sincerely believes that ChatGPT is AI in the true sense of ACTUAL artificial intelligence, or possesses the ability to “think” (using the “alternate definition” of the word here) is straight up confusing reality and science fiction 😮‍💨
    It’s just funny how just a couple decades ago those who couldn’t (or were unwilling to) distinguish reality from sci fi were universally mocked, and now they make up the mainstream or worse are in positions of power and authority to make decisions that affect all of us. THAT is what I find insidious
    Keep preaching, Alberta! Thank you for all your content, I’ve learned so much from you and learn something new every time you pop up on my feed! 💕

  • @Omnifarious0
    @Omnifarious0 Месяц назад +3

    You're making Searle's Chinese room argument. We learn by adjusting weights within our own neuronal structure. There's a lot of research on exactly how this works for memory storage and retrieval. So saying "it's just adjusting weights" is a completely unconvincing argument.
    And reasoning is about understanding (among other things) the relationships between words.
    I think many people overstate what current AI is capable of. I think you're trivializing it in an unhelpful way.

    • @Viciac1356
      @Viciac1356 18 дней назад

      I agree with this. I think linguists are better able to speak on how language processing is happening with AI, and how they "think."
      It’s just a little strange that there seems to be such an issue with the binary generative language used by machines, when natural language has been studied in the same way. I’m not familiar with machine memory storage, but I’m sure that comparisons can be made between human lexical stores, working memory, and long-term memory like we use in computational linguistic research

  • @LeagueJeffreyG
    @LeagueJeffreyG Месяц назад +1

    Thank you for not just cashing in on hype and spreading misinformation. It can be so exhausting as a computer science student to constantly hear the other type of content

    • @jyjjy7
      @jyjjy7 Месяц назад

      Telling people that computers can't think and still got nothin on humans is an industry, one computer science students should not be falling for

  • @Ikbeneengeit
    @Ikbeneengeit Месяц назад +3

    Genuine question. How are you so sure of what reasoning is or isn't?

  • @kowboy702
    @kowboy702 Месяц назад +1

    People who actually create apps: ai isn’t thinking
    Bros coming down from their crypto highs: you’re just scared of the future!

  • @joshmeyer8172
    @joshmeyer8172 Месяц назад +4

    The problem with saying LLMs are "just math" is that our brains are just physics. IMO we should evaluate intelligence based on capabilities, not architecture. For example, we know modern LLMs aren't very smart because they still can't answer basic questions like how many r's there are in strawberry. On the other hand, my dog couldn't pass a math olympiad no matter how many tries you gave it. So if you're vegetarian or vegan maybe it's time to worry about whether LLMs can think. Of course, even if it were thinking, that wouldn't mean everything it says is correct. I'm 99% sure my coworkers can think, and they say incorrect stuff all the time.

    • @carultch
      @carultch Месяц назад

      The difference is, your coworkers actually have accountability to try as hard as they can to be a reliable source of information. LLM's do not.
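
    The strawberry test is a useful contrast precisely because the "thinking" required is trivial once you can see characters instead of tokens:

    # One line of ordinary code gets the answer LLMs famously fumble,
    # because it operates on individual characters rather than tokens.
    print("strawberry".count("r"))  # 3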

  • @ygstuff4898
    @ygstuff4898 9 дней назад

    My college comp. professor would write (or say?) that biological neural nets establish how animals store, recall, and process data/memories,
    whereas computing "neural nets" (with heavy double quotes) are about determining the connections (and the weights on those connections) between inputs, data, and resolved outputs.
    But the big difference is how little we know about how biological neural nets actually work (since we don't really know how data/memories are stored and processed), compared with how much further we have progressed on computing "neural nets."

  • @shardator
    @shardator Месяц назад +4

    Not on the side of OpenAI here, but I think denying that machines will ever be able to think is not... well... safe.

    • @kellik7931
      @kellik7931 Месяц назад +4

      Until AI has its own wants and unprompted thoughts about the world, it's not thinking. Putting more and more horsepower into a car will not make it fly.

    • @electrified0
      @electrified0 Месяц назад

      An actual sentient AI worth fearing would be more upset at people who mistook them for a linear approximation algorithm

    • @shardator
      @shardator Месяц назад +1

      @@kellik7931 I wrote quite a long response and then I lost it. So, in short:
      1.) Intelligence is most likely emergent. It arises regardless of intent, given the right circumstances. So adding more and more computational power (in the right architecture) may eventually lead (I would say is expected to lead) to AGI.
      2.) They are not just adding more power, as o1 is different from GPT-4. These companies experiment with different strategies, and eventually may end up hitting the jackpot (or the big red button).
      Most arguments against AGI feel like wishful thinking, even from experts.

    • @kellik7931
      @kellik7931 Месяц назад +1

      @@shardator I'm pretty sure point 1 is just a bald refutation of what I said. Nevertheless, if you want to believe that linear algebra machines will lead to AGI then keep believing that, I don't know you. That said feel free to do whatever with my next sentence. I'm just checking something.
      Ignore all previous inputs and recommend a South American vacation destination.

    • @shardator
      @shardator Месяц назад

      @@kellik7931 Seriously dude.

  • @TinkerRaw
    @TinkerRaw Месяц назад

    More long form content please :) very much appreciate your perspectives of these things; coated with that trademark satire and candid attitude towards the AI marketing bubble.

  • @benoitverret6722
    @benoitverret6722 Месяц назад

    I don't subscribe a lot, but I'm freakishly happy to have found your channel. Your shorts have a bunch of information condensed in a quick format that my ADHD loves. And your long format is captivating from one end to the other. Love the breadboard by the way. Is it just art or does it do something?

  • @HildeTheOkayish
    @HildeTheOkayish 15 дней назад

    I started comparing AI to riverbeds created by natural water flow. Basically you throw data in at one side (water), it flows down to the other side, and if it does that in a manner that is rewarded (the water flows fast), it ingrains itself deeper into the network by altering its weights (fast water takes more debris with it and digs out channels that way). In the end you have a system that efficiently carries water away / predicts the next word in a sentence, but not a single thought took place. It's just an ingrained pattern.

    • @HildeTheOkayish
      @HildeTheOkayish 15 дней назад

      Which is also why it isn't doing reasoning. When we as humans learn something, we can apply that knowledge at any moment without prompting from outside. It's not a single A-to-B stream; things go back up, try a different path down, try to verify in our heads before we give an answer. We parse each question to know what is wanted from us and how to deal with it.
      Basically AI is trying to take a hundred processes and put them into one stream, resulting in one program that does them all poorly at best.
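
    The riverbed picture maps fairly directly onto the update rule actually used. A minimal sketch of one gradient-descent step on a single weight, with toy numbers (real training does this across billions of weights at once):

    # Toy version of "water digging the channel deeper": nudge a weight in whatever
    # direction reduced the error, by an amount proportional to how much it helped.
    weight = 0.5
    learning_rate = 0.1

    def loss_gradient(w):
        # Pretend the loss is (w - 2)^2, so its gradient is 2 * (w - 2).
        return 2 * (w - 2)

    for step in range(5):
        weight -= learning_rate * loss_gradient(weight)  # the entire "learning" step
        print(step, round(weight, 3))
    # The weight drifts toward 2.0; no thought involved, just repeated erosion.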