Zapier’s Mike Knoop launches ARC Prize to Jumpstart New Ideas for AGI | Training Data

  • Published: 18 Oct 2024

Comments • 20

  • @supercurioTube · 3 months ago +5

    I really appreciate the ARC Prize as a sanity and reality check on the unfounded marketing effort by some LLM companies promising that their next model or the one after will solve all problems, change the world in an instant and make human intelligence obsolete.
    For many, these predictions are a source of profound uncertainty and anxiety, fueling a growing, indiscriminate backlash against anything related to AI technologies, even though these near-future transformations, presented as certainties, actually range from speculation to delusion.
    The ARC Prize helps here by showing how far we are from AGI, by publishing a reasonable metric. I'm thankful for this insight.
    BTW that was a great interview! Well done.

  • @sdmarlow3926 · 3 months ago +3

    It's not that the challenges (games, etc.) are just easy to brute-force and we need more ARC-like milestones. It's that very few people are taking the long-term research view of these things. No one in a rush to get a product to market is trying to "solve" AI/AGI. The failures are just edge cases and long-tail problems to be solved later (for them; in their mind). Working on the hard problem of playing games and solving ARC at the architectural/cognitive level means there are no positive results you can use to get funding, but then, once you have results, something like ARC is the best way to set your methods apart from the rest of the AI field (which is 99% just deep learning).

    • @agiisahebbnnwithnoobjectiv228 · 3 months ago

      Yet you keep ignoring my efforts...

    • @drhxa · 3 months ago +1

      How would you get funding if your solution is required to be open source?

    • @sdmarlow3926 · 3 months ago

      @@drhxa That was why I had two other comments for this video. ARC needs to be a milestone for ALL, not just those hacking away at LLM prompts, and by making open source a requirement for ALL submissions, they have killed off something that could lead to actual progress.

  • @sdmarlow3926 · 3 months ago +1

    wait wait wait... the wording got a little messy, but it sounded like he was saying any efforts on the closed-source set will be made available, which is true of the actual prize aspect, but the field NEEDS to have access to the closed task set as a measure for proprietary systems (not part of the actual prize; not to be stored, reproduced, or examined by those running the benchmark).

  • @jmelco9407 · 3 months ago

    Why is it that I can't send your link? I won't be able to invite my friends because of that trouble sending your links.

  • @ParnianMotamedi · 3 months ago

    Is "sequoi" a digital currency?
    Is it listed on PancakeSwap?
    Please answer me soon, I'm in a hurry.

  • @ACienciadaEstatistica · 3 months ago

    "The longer it goes, the longer it goes." That is actually the memoryless property of the exponential distribution: P(X > t + k | X > k) = P(X > t).
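
    The memoryless property quoted above is easy to check numerically. A minimal sketch (the rate λ = 1 and the sample points are arbitrary choices for illustration):

    ```python
    import math

    def survival(t, lam=1.0):
        """Survival function of Exponential(lam): P(X > t) = exp(-lam * t)."""
        return math.exp(-lam * t)

    def conditional_survival(t, k, lam=1.0):
        """P(X > t + k | X > k) = P(X > t + k) / P(X > k)."""
        return survival(t + k, lam) / survival(k, lam)

    # Memorylessness: having already waited k, the chance of waiting
    # t longer is the same as the unconditional chance of waiting t.
    for t, k in [(0.5, 1.0), (2.0, 3.0), (1.0, 10.0)]:
        assert math.isclose(conditional_survival(t, k), survival(t))
    ```

    The identity holds for any t, k ≥ 0 because exp(-λ(t+k)) / exp(-λk) = exp(-λt); the exponential is the only continuous distribution with this property.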

  • @NandoPr1m3 · 3 months ago

    Just thinking out loud... the energy efficiency of human intelligence is not based on language but on an 'instinct' that our thoughts are heading in the right direction (then we can turn that into language and/or actions). We need a 'mental compass' to guide reasoning. Instead of Large 'Language' Models, we need Large 'Framework' Models. I don't know if that can be tokenized, but possibly the Transformer architecture can be used here as well.

    • @harshnigam3385 · 3 months ago

      Could you elaborate on Framework Models?

    • @NandoPr1m3 · 3 months ago

      @@harshnigam3385 For example, babies are born with the Bradycardic Response (instinctively holding their breath and opening their eyes) when they are underwater. These types of instincts (self preservation, fight or flight, etc.) for lack of a better word, are our Behavior Axioms.
      The same way we base Math on Axioms (which took a while to develop), we need a MODERN reinvention of Axiomatic Reasoning.
      A possible approach would be using LLMs to work backwards and find modern verified solutions first, then break the steps used to arrive at that solution into something that can be tokenized (let's call them Solution Blocks).
      If we did that with 1M+ solutions and the LLM found patterns of Solution Blocks, we could train the Large Framework Model to figure out how to go about reasoning using those Blocks. This would be done before selecting and sending instructions to its Mixture of Experts to present the solution.
      On a UI level, this may require chatbots to ask questions to the user first before responding (see Habit #5 in Stephen Covey's 7 Habits of Highly Effective People: Seek First to Understand, Then to be Understood).
      DISCLOSURES: Only put babies underwater in presence of Licensed Swim Instructors. Apologies for being verbose. I'm just a guy with an internet connection.

    • @MikeLee0 · 3 months ago

      When you say "..an 'instinct' that our thoughts are heading in the right direction.." could that be a 'confidence factor' ?

    • @NandoPr1m3 · 3 months ago

      @@MikeLee0 While a confidence factor can help to quantify the output, I feel like we need models that follow how a child grows and develops (that's how we develop human instincts). An example would be Ecological Systems Theory. Human reasoning isn't linear; it's layered and influenced by our environment. While it may be hard to build, I'm picturing a MicroSystem AI that feeds into a MesoSystem AI, then into ExoSystem, MacroSystem, and ChronoSystem. I'm curious whether this approach could result in the emergence of reasoning that humans experience. I see it happening in my 4- and 7-year-old boys. With enough compute you could 'grow' an expert equivalent to 100 years in 100 days, for example. I'm not knowledgeable enough to know if anyone has trained models like this. It always seems like we train them by stacking 100 encyclopedias on top of a 1-day-old baby.

    • @NandoPr1m3 · 3 months ago +1

      @@harshnigam3385 See my thoughts below on using an Ecological Systems Theory approach (my reply to Mike Lee). The 'world models' we build for ourselves guide our individual reasoning capabilities. Instead of predicting the next word, we need AI that can build the next layer of its own 'world model' (i.e. framework). A silly example: if you ask an AI, "how can a pig ever be able to fly?" it would need a framework to develop an accurate 'world model' to reason that the pig can be cargo in a plane. But if the AI is only trained to predict the next word, then statistically it could one day respond with "have the pig get a pilot's license". My 2 young boys have developed enough of a foundational framework (world model) to know which response is a hallucination. I don't know how this can be achieved, but maybe, the way Anthropic built its Constitutional AI, we need something bigger/more complex to build Reality Framework AI.
      Thank you for coming to my Ted Talk, lol.

  • @shawnryan3196 · 3 months ago

    I think LLMs with a hybrid architecture will get there. We need continuous learning with persistent memory, and the ability to make many predictions. Right now AI does a single forward pass, where a human may take 100s or 1000s of passes to answer the same question. The biggest need is the ability to update and build mental maps on the fly. I have no doubt we will reach AGI this decade; we just need to start putting lots of work into more than just scale.

  • @sdmarlow3926 · 3 months ago +1

    Full stop @ 49min. Even Pub leaderboard "holders" have to open source their results? That needs some clarification ASAP.

  • @jmelco9407 · 3 months ago

    Why is there no response to my comment?