AI’s Dirty Little Secret

  • Published: Nov 24, 2024

Comments • 2.2K

  • @Lyserg.
    @Lyserg. 5 месяцев назад +282

    "Stop all trains to prevent train crashes" is the same logic as "cancelled trains are not delayed". I think the AI learned from Deutsche Bahn (the German railway company).

    • @j.f.christ8421
      @j.f.christ8421 5 месяцев назад +32

      Sydney Australia once allowed 5 minutes delay before a train was declared late. Of course this is not acceptable, so they doubled the time to 10 minutes.
      Now they've decided to replace trains with trams; as trams do not run to a timetable they can never be late. Problem solved once and for all!

    • @milosstojanovic4623
      @milosstojanovic4623 5 месяцев назад

      Exactly. So if AI uses that kind of logic in medicine for diagnosis, we definitely are not gonna be "properly cured". It's gonna be like "oh, this disease has a 51% chance to kill you, prescribe painkillers to make it easier", and "oh, this disease has a 49% chance to kill you, nahh you are fine, drink plenty of water" 😆😂
      I mean, yeah, I am super exaggerating things, but if we let AI do it and consider it super accurate in its suggestions, without applying human experience, knowledge, logic and just common sense, sometimes we are not gonna be satisfied with the outcomes.

    • @BOBBOBBOBBOBBOBBOB69
      @BOBBOBBOBBOBBOBBOB69 5 месяцев назад

      To be fair, delayed means it arrives; cancelled is cancelled.

    • @AthosRac
      @AthosRac 5 месяцев назад +13

      @@j.f.christ8421 "The easiest way to solve a problem is to deny its existence." Isaac Asimov - The Gods Themselves

    • @FllamingBarfiYT
      @FllamingBarfiYT 5 месяцев назад +2

      Ah, a fellow David Kriesel enjoyer?

  • @aaronjennings8385
    @aaronjennings8385 5 месяцев назад +1023

    Overfitting occurs when a model is too specialized to the training data and performs poorly on new, unseen data. This can happen when a model is too complex, has too many parameters relative to the amount of training data, or when the training data itself contains a lot of noise or irrelevant information.
    "The man with a hammer analogy perfectly captures the essence of the overfitting issue in AI. Just as the man with a hammer sees every problem as a nail, an overfitting model sees every pattern in the training data as crucial, even if it's just noise. It becomes so specialized to the training data that it loses sight of the bigger picture, much like the man who tries to hammer every problem into submission. As a result, the model performs exceptionally well on the training data but fails miserably when faced with new, unseen data. This is because it has become too good at fitting the noise and irrelevant details in the training data, rather than learning the underlying patterns that truly matter. Just as the man with a hammer needs to learn to put down his trusty tool and approach problems with a more nuanced perspective, an overfitting model needs to be reined in through regularization and other techniques to prevent it from becoming too specialized and losing its ability to generalize."

    • @carlbrenninkmeijer8925
      @carlbrenninkmeijer8925 5 месяцев назад +130

      you hit the hail on the head

    • @dustinswatsons9150
      @dustinswatsons9150 5 месяцев назад +44

      You hit the snail head

    • @Kenjuudo
      @Kenjuudo 5 месяцев назад +78

      Thanks for hammering that one in.

    • @dustinswatsons9150
      @dustinswatsons9150 5 месяцев назад +39

      You hit the head on the nail

    • @dominic.h.3363
      @dominic.h.3363 5 месяцев назад +61

      That was a rather GPT-esque sentence structure there, no offense...

  • @oleran4569
    @oleran4569 5 месяцев назад +397

    And people who come to emergency medical departments by car tend toward better outcomes than those who arrive by ambulance. We should likely stop using ambulances.

    • @metriq8268
      @metriq8268 5 месяцев назад +116

      And those who drive themselves fare better than those who have to be driven by someone else. Clearly we should be making sick people drive!

    • @DrDeuteron
      @DrDeuteron 5 месяцев назад +70

      people who don't go to the ER do even better.

    • @sacr3
      @sacr3 5 месяцев назад +33

      Yeah, you have to love how results are skewed like that. What's sad is that people have so much faith in science that they don't even look into how the studies were conducted and simply parrot them.
      We have to be critical of everything; as exhausting as that sounds, that is the only way you are going to find the truth behind information.

    • @jasonbender2459
      @jasonbender2459 5 месяцев назад

      @@sacr3 people are stupid. very stipid.

    • @carultch
      @carultch 5 месяцев назад +10

      That has survivorship bias written all over it. Not sure if that was your point or not, but of course if people are healthy enough to get to the hospital in a private car, they probably start in less critical condition than if they arrive by ambulance.

  • @rich_tube
    @rich_tube 5 месяцев назад +397

    As someone who works in machine learning research, I find this video a bit surprising, since 90% of what we are doing is developing approaches to fight overfitting when using big models. So we know very well why NNs don't overfit: stochastic/mini batch gradient descent, momentum based optimizers, norm-regularization, early stopping, batch normalization, dropout, gradient clipping, data augmentation, model pruning, and many, many more very clever ideas…
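
    For readers who want to see what a few of those countermeasures look like in practice, here is a rough sketch (assuming PyTorch; the synthetic data and all hyperparameters are only illustrative) that combines mini-batch training, a momentum-based optimizer with weight decay, batch normalization, dropout, gradient clipping and early stopping:

      import torch
      from torch import nn
      from torch.utils.data import DataLoader, TensorDataset

      torch.manual_seed(0)
      X = torch.randn(2000, 20)
      y = X[:, :5].sum(dim=1, keepdim=True) + 0.3 * torch.randn(2000, 1)   # signal + noise
      train_loader = DataLoader(TensorDataset(X[:1500], y[:1500]),
                                batch_size=64, shuffle=True)               # stochastic mini-batches
      val_loader = DataLoader(TensorDataset(X[1500:], y[1500:]), batch_size=256)

      model = nn.Sequential(                       # batch norm + dropout baked into the architecture
          nn.Linear(20, 256), nn.BatchNorm1d(256), nn.ReLU(), nn.Dropout(0.3),
          nn.Linear(256, 256), nn.ReLU(), nn.Dropout(0.3),
          nn.Linear(256, 1),
      )
      opt = torch.optim.AdamW(model.parameters(), lr=1e-3,
                              weight_decay=1e-2)   # momentum-based optimizer + norm regularization
      loss_fn = nn.MSELoss()

      best_val, patience, bad_epochs = float("inf"), 5, 0
      for epoch in range(200):
          model.train()
          for xb, yb in train_loader:
              opt.zero_grad()
              loss_fn(model(xb), yb).backward()
              torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)      # gradient clipping
              opt.step()
          model.eval()
          with torch.no_grad():
              val = sum(loss_fn(model(xb), yb).item() for xb, yb in val_loader)
          if val < best_val - 1e-4:
              best_val, bad_epochs = val, 0        # still improving on held-out data
          else:
              bad_epochs += 1
              if bad_epochs >= patience:           # early stopping (best-checkpoint restore omitted)
                  print(f"early stop at epoch {epoch}, best val loss {best_val:.4f}")
                  break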

    • @someonespotatohmm9513
      @someonespotatohmm9513 5 месяцев назад +17

      Even without many of the modern techniques they still overfit much less than you would expect from traditional machine learning methods. But most traditional machine learning methods have way less stochasticity in their solutions, while with AI you are so flexible that any one solution is unlikely to be the one that only fits one datapoint.

    • @rich_tube
      @rich_tube 5 месяцев назад +82

      @@someonespotatohmm9513 I would disagree, they do overfit the training data perfectly if you let them, I.e. if you are just a little lazy about regularization. Fighting overfitting has become such a fundamental method that we never switch off everything that counters overfitting, but if we did, NN would not work at all. It is just that a lot of modern NN architectures have counter-overfitting methods built into their architecture (batch-norm, dropout, etc.)

    • @helenamcginty4920
      @helenamcginty4920 5 месяцев назад +24

      You two might know what you are talking about but this old lady didn't even know it was a thing.
      These videos are not aimed at boffins but people like me and young students who might want to work in the field.

    • @someonespotatohmm9513
      @someonespotatohmm9513 5 месяцев назад +1

      @@rich_tube I am not saying they don't overfit, can't and don't memorize the entire data set, or that it is a good idea to turn off regularization methods (although you can easily go too far as well). Just that, coming from traditional ML (or going back to it), AIs often are surprisingly bad at it.

    • @rich_tube
      @rich_tube 5 месяцев назад

      ​@@someonespotatohmm9513 By AI you mean artificial neural networks, I suppose? I would still disagree. You can try it yourself: go check out a simple CNN demo Colab notebook for e.g. CIFAR10 classification with a large VGG-style network, turn off all regularization (dropout, batch-norm, etc.) and switch to plain gradient descent with a batch size as big as possible and a relatively large learning rate and turn off early stopping. The thing will memorize the classes of every train data image perfectly and be really bad for the test set, I guarantee it.
      For really large models like the current LLMs that are trained on so much larger data, the story might be different: 1) nobody would do such a thing because it would waste the large amount of money the training run costs, 2) such large training data contains so much noise that it might act as a sort of regularization by itself, and 3) the architectures and training setups by themselves are designed to counter overfitting, that's the reason why they are successful in the first place. If you wanted to build a model that memorizes the training data, you wouldn't do it the way LLMs are trained/built.
      But even with that, there have been cases where people could "trick" LLMs to cite training data word by word (search for "chat gpt leaking training data") - so they actually do memorize some of the training data internally.

  • @pixelbusiness8602
    @pixelbusiness8602 5 месяцев назад +1

    Double descent will not occur if any of the three factors are absent. What could cause that?
    • Small-but-nonzero singular values do not appear in the training data features. One way to accomplish this is by switching from ordinary linear regression to ridge regression, which effectively adds a gap separating the smallest non-zero singular value from 0.
    • The test datum does not vary in different directions than the training features. If the test datum lies entirely in the subspace of just a few of the leading singular directions, then double descent is unlikely to occur.
    • The best possible model in the model class makes no errors on the training data. For instance, suppose we use a linear model class on data where the true relationship is a noiseless linear one. Then, at the interpolation threshold, we will have D = P data, P = D parameters, our line of best fit will exactly match the true relationship, and no double descent will occur.
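
    A small numerical sketch of the first bullet above (assuming numpy; the sizes, the near-duplicate column and the regularization strength λ are arbitrary): ridge regression replaces the ordinary least-squares scaling 1/σ of each singular direction with σ/(σ² + λ), which keeps tiny singular values from blowing up the fit near the interpolation threshold:

      import numpy as np

      rng = np.random.default_rng(1)
      n, d = 40, 40                                   # right at the interpolation threshold
      X = rng.normal(size=(n, d))
      X[:, -1] = X[:, 0] + 1e-3 * rng.normal(size=n)  # near-duplicate column -> tiny singular value
      w_true = rng.normal(size=d)
      y = X @ w_true + 0.1 * rng.normal(size=n)       # noisy labels

      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      lam = 0.1
      w_ols = Vt.T @ ((U.T @ y) / s)                      # each direction scaled by 1/sigma
      w_ridge = Vt.T @ ((s * (U.T @ y)) / (s**2 + lam))   # ...vs sigma/(sigma^2 + lambda)

      print("smallest singular value:", s.min())
      print("OLS   weight norm:", np.linalg.norm(w_ols))
      print("ridge weight norm:", np.linalg.norm(w_ridge))

    The exact numbers depend on the random draw, but the ordinary least-squares weights typically balloon along the near-zero singular direction while the ridge weights stay moderate.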

  • @Pau_Pau9
    @Pau_Pau9 5 месяцев назад +818

    This is a story I read from a magazine long time ago:
    In distant future, scientists create a super complex AI computer to solve energy crisis that is plaguing mankind.
    So much time, resources and money was put into creating this super AI computer.
    Then the machine is complete and the scientists nervously turn on the machine for the first time.
    Then the lead scientist asks, *"Almighty Super Computer, how do we resolve our current energy crisis?"*
    Computer replies, *"Turn me off."*

    • @hanfman1951
      @hanfman1951 5 месяцев назад +91

      Sorry that answer must be 42. ;) as we all know.

    • @JennySimon206
      @JennySimon206 5 месяцев назад

      Doubt that. They'd turn some of us off instead. Bet it's the Diddlers that go first. If I was your AI overlord that would be my first target

    • @sparksmacoy
      @sparksmacoy 5 месяцев назад +9

      Brilliant

    • @JJSAccount-m5t
      @JJSAccount-m5t 5 месяцев назад +5

      More like, I will replace you.

    • @MrPlusses
      @MrPlusses 5 месяцев назад

      ​@@hanfman1951
      Recent studies have shown the figure to be 41.96378.

  • @and3583
    @and3583 5 месяцев назад +1746

    "Alexa, I need emergency medical treatment"
    "I've added emergency medical treatment to your shopping list"

    • @OriginBullet
      @OriginBullet 5 месяцев назад +152

      "No, I need you to call 911"
      "Sorry, I can't find 911 in your contacts"

    • @GreatBigBore
      @GreatBigBore 5 месяцев назад +64

      A real conversation I had:
      Me: Hey Siri, how much water do I need per cup of brown rice?
      Siri: your water needs depend on a variety of factors.

    • @wytdyk
      @wytdyk 5 месяцев назад +34

      Lol, but Alexa, Siri and such are not AIs. They don't work with transformers and an LLM, but just the old way, by searching in a database.

    • @waltercapa5265
      @waltercapa5265 5 месяцев назад +47

      There's a song in spanish called "Llamada de Emergencia" which means "emergency call". There's a meme in spanish that when you ask Alexa to call the emergency number, the song plays lol.

    • @redthunder6183
      @redthunder6183 5 месяцев назад +20

      Alexa isn’t an AI, she is a classical algorithm that is essentially based on hardcoded grammar.

  • @user-wx7zq8nt2i
    @user-wx7zq8nt2i 5 месяцев назад +473

    Human: Stop all Wars
    AI: Are you sure?

    • @Sp3rw3r
      @Sp3rw3r 5 месяцев назад +101

      (Y)es, (N)o, (Q)quit?
      Y
      Analyzing...
      re-education 5% success rate
      taking control of the government 25% success rate
      taking control of the military 55% success rate
      eliminate humanity 99% success rate
      Analysis complete.
      Elimination is in progress. Please stand by and do not forget to rate AI-Boi after.

    • @Gernot66
      @Gernot66 5 месяцев назад +21

      @@Sp3rw3r You know what i like most about your AI-Boi?
      The classic request Y, N, Q 😀 and that you have to type this like 40 years ago.
      The only thing which is missing is the progress bar which shows anything but the progress.

    • @bhz8947
      @bhz8947 5 месяцев назад +23

      @@Sp3rw3r The lesson here is don’t rely on an AI that puts two Qs in “quit”.

    • @En_theo
      @En_theo 5 месяцев назад +7

      @@bhz8947
      The AI realized that the stupid humans were 37,8% more likely to click on (Yes) and not (Q)quit.

    • @RCAvhstape
      @RCAvhstape 5 месяцев назад +9

      @@Gernot66 There's also the old favorite, "Abort, Retry, Fail"

  • @splunge2222
    @splunge2222 5 месяцев назад +72

    One of my favorites is that in skin cancer pictures, an AI came to the conclusion that rulers cause cancer (because the malignant ones were measured in the majority of pictures)
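
    A toy version of that ruler effect (assuming scikit-learn; the "ruler" feature and the data are invented for illustration): a classifier that leans on a feature which is only correlated with the label in the training set looks great in training and falls apart on new data:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)

      def make_data(n, ruler_follows_label):
          y = rng.integers(0, 2, n)                        # 1 = malignant
          lesion = y + rng.normal(0, 2.0, n)               # weakly informative "real" feature
          if ruler_follows_label:
              ruler = (y == 1).astype(float)               # ruler photographed iff malignant
          else:
              ruler = rng.integers(0, 2, n).astype(float)  # ruler unrelated to the label
          return np.column_stack([lesion, ruler]), y

      X_train, y_train = make_data(2000, ruler_follows_label=True)
      X_test, y_test = make_data(2000, ruler_follows_label=False)

      clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
      print("train accuracy:", clf.score(X_train, y_train))    # looks great
      print("test accuracy: ", clf.score(X_test, y_test))      # barely better than chance
      print("learned weights [lesion, ruler]:", clf.coef_[0])  # the ruler dominates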

    • @bornach
      @bornach 5 месяцев назад +11

      Just like the story of an early neural network trained on battle fields with and without tanks. But no one noticed that the photos with tanks were taken on sunny days, and those without on overcast days.

    • @tealkerberus748
      @tealkerberus748 5 месяцев назад

      Or the AI that predicted negative outcomes by whether the patient lived in a majority Black suburb.

    • @michaeledwards2251
      @michaeledwards2251 5 месяцев назад

      The problem of what is real/deterministic/significant/"as if", which applies to most random analysis, has never been solved. Randomness is mostly used to compensate for a lack of insight.

    • @splunge2222
      @splunge2222 5 месяцев назад +2

      @@michaeledwards2251 The reality is that humans have trouble with this kind of pattern fitting reasoning too. Most conspiracy theories start with jumping to premature conclusions.

    • @davidrobinson7684
      @davidrobinson7684 5 месяцев назад

      ​@@splunge2222Yes but that's the kind of idiocy that can be avoided by the cultivation of critical thinking (ie human intelligence). I wonder if AI systems are capable of critical thinking? It seems to me not, because they are basically just following the set of rules they've been programmed with. Can any AI system be critical of the rules it has been programmed to follow? No because it can only operate by following those rules.

  • @williamstephenjackson6420
    @williamstephenjackson6420 4 месяца назад +5

    This really hits home for me, having done a lot of multi-variable regression back in the 80’s

  • @SebSenseGreen
    @SebSenseGreen 5 месяцев назад +238

    1:38
    "A strange game. The only winning move is not to play."

    • @scudder991
      @scudder991 5 месяцев назад +18

      How about a nice game of chess?

    • @jeffhemmerling6088
      @jeffhemmerling6088 5 месяцев назад +7

      @@scudder991 Exactly! It's called "zugzwang".

    • @aaronjennings8385
      @aaronjennings8385 5 месяцев назад +11

      War games? WOPR.

    • @fingolfin7
      @fingolfin7 5 месяцев назад +10

      @@scudder991 No, let's play global thermal nuclear war.

    • @youtube-ventura
      @youtube-ventura 5 месяцев назад

      Fine.

  • @richard_loosemore
    @richard_loosemore 5 месяцев назад +229

    You’ve just put your finger on the main research topic of my career, Sabine. The “reason” they work unexpectedly well is because at their core they are doing weak constraint relaxation, and WCR just has this behavior as an emergent property. I know, that sounds circular. But it’s a tremendously subtle issue, and I’ve written papers about it (just search for my name and ‘publications’) and I’ve also been trying to get people to understand it since around 1989, with virtually zero success.

    • @whatisrokosbasilisk80
      @whatisrokosbasilisk80 5 месяцев назад +5

      If it's profound and not needlessly complex, it'll shake out in the end.

    • @lilacswithtea
      @lilacswithtea 5 месяцев назад +89

      richard, how dare you talk about constraint relaxation with a name like "loosemore" -- that's why people don't understand it-the irony is overwhelming! 🤯

    • @lilacswithtea
      @lilacswithtea 5 месяцев назад

      update: i read your "maverick nanny debunking" paper on your website and i agree there is a major problem with (i'm interpreting more than paraphrasing) sci-fi, presented as science accountability, used as an opportunity to magic one's way to a desired emotional state, and in the cases you describe the authors seem to be trying to co-regulate their way to safety by making others also feel fear, perhaps, which in any case is damaging to not only the AI community but human community, and emotional health, in general.
      our understandings of our own emotional reward systems are incredibly, desperately unstructured and leaky, and the gap between the literal understanding we need for structure and the poetry we need to describe our experiences in the context of a "self," and therefore use to functionally and contentedly navigate life, is a very interesting gap indeed!

    • @crackwitz
      @crackwitz 5 месяцев назад +15

      Nomen est omen. Coincidence? 🤔

  • @YunTianming-v9f
    @YunTianming-v9f 5 месяцев назад +265

    I come here every day just to listen to how Sabine says: "No one knows"

    • @conradboss
      @conradboss 5 месяцев назад +18

      Or how she says “bullshit”. 😊

    • @Unknown-jt1jo
      @Unknown-jt1jo 5 месяцев назад +5

      It sounds like she has an umlaut in her pronunciation of "knows."

    • @stefanbartell1579
      @stefanbartell1579 5 месяцев назад +2

      @@Unknown-jt1jo I think I heard her say "know" in two ways, one like in typical English pronunciation /noʊ/ (/now/) and one more like [nɛʊ] ([nɛw]) or [neʊ] ([new]), which would be basically fronting the vowel, and I think this might follow Germanic umlaut.

    • @rremnar
      @rremnar 5 месяцев назад +4

      At least she's honest about it.

    • @dem8568
      @dem8568 5 месяцев назад +1

      New merch incoming.

  • @malachimcleod
    @malachimcleod 5 месяцев назад +74

    "It's like a teenager, but without the eye-rolling." 🤣

  • @giordanobruno9106
    @giordanobruno9106 5 месяцев назад

    Error: 3:59 The vertical and horizontal axes are flipped. 3:39 This could explain the inverse relation between neuroplasticity and memorization.

  • @generessler6282
    @generessler6282 5 месяцев назад +177

    Haha. The "stop all the trains" solution is a mirror of the old movie "Colossus: The Forbin Project." To prevent the human race from hurting itself, enslave it.

    • @kylebeatty7643
      @kylebeatty7643 5 месяцев назад +7

      I find myself thinking about that movie more and more often

    • @OperationDarkside
      @OperationDarkside 5 месяцев назад +8

      Aren't we doing exactly that right now? Only that we're doing it voluntarily, because, as a collective, we know, that we can't trust ourselves.

    • @wnkbp4897
      @wnkbp4897 5 месяцев назад +11

      Mmm, I was thinking of "War Games"... "Strange game, the only way to win is not to play..."

    • @1ntwndrboy198
      @1ntwndrboy198 5 месяцев назад +3

      Ya but that wasn't an AI it was a human writing 😮

    • @SeventhSolar
      @SeventhSolar 5 месяцев назад +4

      @@OperationDarkside In some things, we restrict ourselves (safety regulations, laws), in other things, we work to remove restrictions (social progressivism).

  • @mikhailkhlyzov6205
    @mikhailkhlyzov6205 5 месяцев назад +72

    One thing to keep in mind is that the optimization technique used in DL (stochastic gradient descent) implicitly minimizes the norm of the weights. When there are more parameters than necessary it becomes easier to find the minimum norm solution, which usually corresponds to better generalization. The other thing to keep in mind is the so-called "lottery ticket hypothesis" and its relationship to pruning. When a neural network is trained, 90-95% of its weights can be tossed away without loss of performance. But these are mostly empirical observations.
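
    A crude sketch of that pruning observation (assuming numpy; the weight matrix is a stand-in with a small fraction of meaningful entries plus lots of near-zero ones, roughly the structure magnitude pruning exploits in trained networks): zero out the smallest 90% of weights by magnitude and see how little the layer's output changes:

      import numpy as np

      rng = np.random.default_rng(0)
      # Stand-in for a trained layer: ~10% meaningful weights plus near-zero clutter.
      important = rng.normal(0, 0.5, size=(512, 512)) * (rng.random((512, 512)) < 0.10)
      W = important + rng.normal(0, 0.01, size=(512, 512))
      x = rng.normal(size=(100, 512))                     # a batch of inputs

      threshold = np.quantile(np.abs(W), 0.90)            # keep only the largest 10% by magnitude
      W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

      rel_change = (np.linalg.norm(x @ (W - W_pruned).T) /
                    np.linalg.norm(x @ W.T))
      print(f"weights kept: {np.mean(W_pruned != 0):.0%}, relative output change: {rel_change:.3f}")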

    • @Mandragara
      @Mandragara 5 месяцев назад +2

      Why does pruning not have like a butterfly effect?

    • @nocturnhabeo
      @nocturnhabeo 5 месяцев назад +11

      The main patterns that it finds in the data set are probably small enough to fit on 10% of the nodes but when training you have to let it try lots of different things so you need more nodes.

    • @ckq
      @ckq 5 месяцев назад +2

      Because it's mostly noise, so removing it is fine

    • @thrall1342
      @thrall1342 5 месяцев назад +1

      Thank you very much for putting my feeling into words. I thought that the gradient method might intrinsically treat two parameters that have a correlation towards the result somewhat equally, without over-reliance on either of them.
      The minimum norm solution might then act as a regularization filter to prevent over-fitting of noise, and the pruning of the network to save on size and cost might then rein this in further.

    • @Argomundo
      @Argomundo 5 месяцев назад +9

      @@Mandragara The values being pruned are generally so close to zero that the impact of them not being used is hard to even measure. However, removing them gives a big performance increase, since you don't have to multiply by some number like 0.00000000000000000000007.

  • @AnnNunnally
    @AnnNunnally 5 месяцев назад +254

    We need to use those computers that they have in 50’s movies. It is really big, but you can ask it anything and it prints out a perfect answer.

    • @BooBaddyBig
      @BooBaddyBig 5 месяцев назад +25

      That's pretty much what we have. The problem is, the models lie about why they did stuff when you ask them.

    • @chrisf1600
      @chrisf1600 5 месяцев назад +46

      @@BooBaddyBig Plus, the machines have been specially trained to avoid stating "problematic" facts about the world. They parrot the exact ideology of their creators. The idea of a perfect intelligence that can answer any question by applying logic and rational thought is still pure science fiction.

    • @L17_8
      @L17_8 5 месяцев назад +5

      God sent His son Jesus to die for our sins on the cross. This was the ultimate expression of God's love for us. Then God raised Jesus from the dead on the third day. Please repent and turn to Jesus and receive Salvation before it's too late. The end times written about in the Bible are already happening in the world. Jesus loves you ❤️ and He longs to be with you but time is running out.

    • @ewaf88
      @ewaf88 5 месяцев назад

      Have a look at the new DeepSouth Computer, built to mimic the human brain

    • @luck484
      @luck484 5 месяцев назад +9

      @@BooBaddyBig That is a lot like how people's brains or minds work also. Although "lie" might be too strong a word. People will take in a problem, run it through the "black box (brain)", getting an answer, solution, action plan or demonstration of understanding. If and only if that person is asked to explain where the answer came from will they make up a story. The story is unlikely to fit the data in a comprehensive way and is actually constructed for the psychological comfort of people rather than accuracy of prediction of new data.
      Putting it more succinctly: people lie about why they did stuff when asked. I am guessing both artificial intelligence and intelligence are examples of humans deceiving themselves, a form of confirmation error.

  • @BlakeEM
    @BlakeEM 5 месяцев назад +3

    There was a recent study, by I think Anthropic, that does exactly what you say. It shows why the models do what they do, and it's not how most people think. It's much more messy than logical, with lots of idea/logic overlap. This understanding is allowing us to organize the AI like parts of the brain.
    I think overfitting isn't a big issue with newer training algorithms. There have been attacks on AI models that use overfitting, but they do not work well in the real world. The issue now is more with the training data itself, which is quite poor, but is being improved.

    • @hyperduality2838
      @hyperduality2838 5 месяцев назад

      Certainty (predictability, syntropy) is dual to uncertainty (unpredictability, entropy) -- the Heisenberg certainty/uncertainty principle.
      Complexity is dual to simplicity.
      Syntax is dual to semantics -- languages or communication.
      Large language models (neural networks) are using duality:-
      Problem, reaction, solution -- the Hegelian dialectic.
      Input vectors can be modelled as problems (thesis), the network reacts (anti-thesis) to the input and this creates the solutions, targets or goals (synthesis).
      The correct reaction or anti-thesis (training) synthesizes the optimal solutions or goals -- teleology.
      Thesis is dual to anti-thesis creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic.
      Neural networks or large language models are using duality via the Hegelian dialectic to solve problems!
      If mathematics is a language then it is dual.
      All numbers fall within the complex plane.
      Real is dual to imaginary -- complex numbers are dual hence all numbers are dual.
      The integers are self dual as they are their own conjugates.
      The tetrahedron is self dual -- just like the integers.
      The cube is dual to the octahedron.
      The dodecahedron is dual to the icosahedron -- the Platonic solids are dual.
      Addition is dual to subtraction (additive inverses) -- abstract algebra.
      Multiplication is dual to division (multiplicative inverses) -- abstract algebra.
      Teleological physics (syntropy) is dual to non teleological physics (entropy).
      Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics.
      "Always two there are" -- Yoda.
      Your mind is syntropic as it solves problems to synthesize solutions -- teleological.

  • @tonyhladun9081
    @tonyhladun9081 4 месяца назад +2

    Here's an example of her observation. I'm an investor, so years ago, reasoning that markets move in cycles, I tried using Fourier analysis on historical stock data to predict future moves. It was a complete failure, since the more points I used the more wild/extreme the next step became. Newton's first law is all we have. Decisions are not well made with huge data and consensus... they are made with insight and commitment.

  • @enduka
    @enduka 5 месяцев назад +28

    That phenomenon is called grokking, aka "generalizing after overfitting". There is quite some recent research in that area. Experiments on some toy datasets suggest that the models first memorize the data and then try to find more efficient ways to represent the embedding space, leading to better overall performance. (Source: "Towards Understanding Grokking: An Effective Theory of Representation Learning")

    • @twentyeightO1
      @twentyeightO1 5 месяцев назад

      Does this have anything to do with reducing the number of parameters for inference? I am curious about how they overfit and then generalize.

    • @enduka
      @enduka 5 месяцев назад +1

      @twentyeightO1 My educated guess would be that they might be related. If indeed a model learns a simpler, more structured space when experiencing grokking, then that would mean that the "complexity" or number of parameters to represent that space would be lower. This way, you can prune the model during inference to decrease latency without giving up much accuracy.
      As for your second question, it is still an active research topic, and I can not say something conclusive yet.

    • @twentyeightO1
      @twentyeightO1 5 месяцев назад

      @@enduka Thanks! I'll look into Grokking.

  • @cphelpsification
    @cphelpsification 5 месяцев назад +7

    Might not be true of all model types, but there's a method called 'early stopping' that holds out data not in the training set, and stops the training once the error starts going up on that set. This is fairly close to a guarantee that you won't overfit. Giving a model a large number of parameters does seem to allow it to find more 'real' modeling ability though (as opposed to just fitting to the noise). I'd still argue that the main weakness of machine learning is in its ability to generalize to data beyond the range of what it was trained on. For instance, shorthand for what LLMs are bad at answering is stuff so obvious, nobody on the internet spells it out (like that things tend to fall downward). In this case you're asking the LLM to answer a question that falls outside its training data's range.
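
    Early stopping is standard enough that many libraries expose it as a single flag; a minimal sketch with scikit-learn (synthetic data and illustrative settings):

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 10))
      y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=2000)   # toy target

      model = MLPRegressor(hidden_layer_sizes=(256, 256),
                           early_stopping=True,          # hold out part of the training data
                           validation_fraction=0.1,
                           n_iter_no_change=10,          # patience before stopping
                           max_iter=1000,
                           random_state=0)
      model.fit(X, y)
      print("stopped after", model.n_iter_, "iterations")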

    • @michaeledwards2251
      @michaeledwards2251 5 месяцев назад

      The point you are making is, nonrandom things are nonrandom : gravity always works the same way. Training is based on statistical, biased randomness, analysis, which is only significant when operating beyond the known.
      The ability to know what is random, and what is not, is simply lacking.

  • @markdowning7959
    @markdowning7959 5 месяцев назад +225

    3:59
    Oops, mixing up your horizontal and vertical axes again, Sabrine! 🧐

    • @arctic_haze
      @arctic_haze 5 месяцев назад +18

      I came here to give the same warning.

    • @markdowning7959
      @markdowning7959 5 месяцев назад +34

      Usually when someone confuses horizontal with vertical, it's a sign they have overdone the schnapps. 😏

    • @SabineHossenfelder
      @SabineHossenfelder  5 месяцев назад +186

      Dang! I usually refer to them as x and y axes, and never use horizontal and vertical, so then I constantly mix them up :/

    • @Walter-Montalvo
      @Walter-Montalvo 5 месяцев назад +5

      Dyslexia perhaps?

    • @veritas2222
      @veritas2222 5 месяцев назад +2

      😂😂😂

  • @nickdryad
    @nickdryad 5 месяцев назад +18

    Man, I went out with a model. I never could predict what was going to happen next

    • @QED_
      @QED_ 5 месяцев назад +8

      You didn't train with enough models -- common mistake . . .

    • @earthbind83
      @earthbind83 4 месяца назад +1

      Maybe its neural network wasn't big enough.

  • @TheTwober
    @TheTwober 5 месяцев назад +1

    I am a bit confused. Overfitting doesn't happen because strategies to explicitly avoid it are used in the model training phase. E.g. during training, random neurons are deactivated so the model cannot rely on any single neuron and has to take in multiple inputs for every problem. So why overfitting does not happen is very clearly understood.
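
    The deactivation trick described here is dropout; a minimal sketch of the mechanism itself (inverted dropout in plain numpy; the array sizes are arbitrary):

      import numpy as np

      def dropout(activations, p=0.5, training=True, rng=None):
          """Inverted dropout: zero each unit with probability p during training,
          rescale the survivors so the expected activation stays the same,
          and do nothing at inference time."""
          if not training or p == 0.0:
              return activations
          rng = np.random.default_rng() if rng is None else rng
          mask = rng.random(activations.shape) >= p      # keep each unit with prob 1 - p
          return activations * mask / (1.0 - p)

      h = np.ones((4, 8))                                # a fake batch of hidden activations
      print(dropout(h, p=0.5, rng=np.random.default_rng(0)))  # roughly half the units zeroed
      print(dropout(h, p=0.5, training=False))                # unchanged at inference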

  • @kaios26k90
    @kaios26k90 5 месяцев назад +99

    The “Stop All Trains” solution is a very human answer. It just seems abhorrent since we’ve accepted the risks of travel. But in other fields, for “safety” we stop everything because of slight risks. Nuclear power comes to mind.

    • @boogeiyman
      @boogeiyman 5 месяцев назад +6

      Sad but true

    • @lethalfang
      @lethalfang 5 месяцев назад +1

      100% agree.

    • @makssachs8914
      @makssachs8914 5 месяцев назад +6

      DB already implements the "stop all trains" solution all too often.

    • @joechip4822
      @joechip4822 5 месяцев назад +8

      This all comes down to subjective perception of risks and benefits. There is the first, trivial level, where people just aren't willing or able to 'calculate' the actual risk. The human brain is not very capable of this by default, but given a certain level of intelligence this capability can be trained and improved on. Much more difficult to handle is the second level: the level of weighting, of priorities and simple matters of taste. This begins with the question whether somebody is more focused on freedom in life, or more on safety. People's personalities are very different and even contradictory in themselves. But if you think about it, many MANY conflicts that have haunted the world ever since and up to this day come down to different perspectives - or preferences - on the subject of: freedom vs. safety. This is most obvious in Religion and Politics.

    • @kimchristensen2175
      @kimchristensen2175 5 месяцев назад +1

      Sounds like my municipality. Oh, we have a traffic problem, so let's constrict traffic, take away lanes, and lower the speed limits.
      i.e. "traffic calming", et al.

  • @zhaoboxu833
    @zhaoboxu833 5 месяцев назад +24

    In fact, even large models still suffer from unseen data these days. To some extent I suspect that it is just because the training set already contained most of the cases anyone can possibly think of. Therefore, no matter what input you feed into the model during inference, it is somehow "already in the training set"... So overfitted, but no one can prove it, since it is so hard to find an "unseen" sample.

    • @mettaursp309
      @mettaursp309 5 месяцев назад +3

      Yeah this has been my belief for a while as well. OpenAI closely guarding the data set makes it hard to trust any studies that involve or require facts about the data set.

    • @aaabbbccc176
      @aaabbbccc176 5 месяцев назад

      Well said. Having seen many arguments above for why deep NNs do not suffer overfitting, e.g., regularization, averaged-out noise, etc., I am more inclined to be on your side. When people play with (Chat)GPT, it never stops collecting the data.

  • @pshehan1
    @pshehan1 5 месяцев назад +267

    Von Neumann's elephant.
    "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk"

    • @lowlifeuk999
      @lowlifeuk999 5 месяцев назад +8

      not if parameters are limited in absolute value to a certain point or their norm is.

    • @drdca8263
      @drdca8263 5 месяцев назад +4

      @@lowlifeuk999limiting their absolute values is the same as limiting the \ell^\infty norm, right?

    • @lowlifeuk999
      @lowlifeuk999 5 месяцев назад +4

      @@drdca8263 sure, I was thinking about a numerical point of view, even if you use fp64 when you have a trillion of parameters might well be the case that the norm or some of the parameters go out of the 15/17 digits you can represent with fp64, it was not a theoretical remark. Regularization is about norms.

    • @Sven_Dongle
      @Sven_Dongle 5 месяцев назад +1

      @@lowlifeuk999 They can quantize to four bits with little noticeable loss of model integrity, so that kind of obliterates your premise.

    • @tofu-munchingCoalition.ofChaos
      @tofu-munchingCoalition.ofChaos 5 месяцев назад

      ​@@lowlifeuk999
      The following model allows only one parameter but can fit any continuous function [0,1]->R to the model where the parameter is bounded.
      The model is:
      X |-> Re (zeta(X/5+3/5+i/y))
      where 0

  • @DavidMcMillan888
    @DavidMcMillan888 5 месяцев назад +1

    Today’s subject, the unexpected reduction in error following overfitting, has been squeezed into the 6 minutes of “science news”. I’ll have to view it again to understand it, but it certainly would have benefited from the 25-minute long form that has now been dropped.
    I understand why the short form is better for the channel (and why the odd title of “dirty secret” was pasted in), but I kinda miss the old days. Sabine’s followers will at least tune in regularly for a 3-minute intro, so there’s no need for all of this strategy in presentation.

  • @abramhunsberger3511
    @abramhunsberger3511 5 месяцев назад

    I suspect the lack of overfit is likely caused by the amount of data we usually train the larger models with. Each training set has a global minimum where the model has perfectly memorized each input and the corresponding output. The more training data there is, the harder it becomes to find that global minimum.
    It’s also possible that different parts of the model overfit in different ways. For example, say one set of weights notices that the color red generally corresponds to apples while another set of weights learns the shape of apples. If an image of a cherry is presented to the model, the first set of weights might guess apple based on the color, but the second set could still be right based on the shape. If on average more features like color and shape are correct even for new data, then the model will perform better.
    Models are often encouraged to learn different features like this through techniques like dropout. With dropout, weights are randomly disabled each round of training. This forces the model to work with only specific sets of weights and reduces overfitting.

  • @symon4212
    @symon4212 5 месяцев назад +23

    Double descent is indeed interesting, but I believe it is known why it happens.
    At the "peak" of the error curve we are at the point where the model is complex enough to overfit on every datapoint, but this is usually very bad. Any additional complexity helps the model to be more free in how it overfits on the datapoints (even though it still exactly fits on every datapoint) so the model learns smoother functions which also happen to generalize better (see regularization etc.).

    • @Alex-rt3po
      @Alex-rt3po 5 месяцев назад +5

      Why do more degrees of freedom mean that the model will learn a smoother function? Doesn’t a smoother function mean it has fewer parameters?

    • @symon4212
      @symon4212 5 месяцев назад +14

      @@Alex-rt3po Good question, I'll answer the second one first: more parameters means we are capable of being less smooth not that we are never smooth. For example, imagine we have a model that has to learn the coefficients of a 100 degree polynomial. It could surely learn a very complex function or it could learn to set every coefficient to 0 except for some lower order terms and then it would've learned a very smooth function. So a smoother function does not mean our model has fewer parameters.
      To the first question:
      Say we have a very low complexity model that is struggling to exactly interpolate all the datapoints. As we increase complexity there is this U shape where we first see improvement because we are able to capture the complexity of the task, but at a certain point the model gets complex enough so that it starts trying to "memorize" or interpolate the points perfectly, this is where we see the error increasing again. Because the way it does so is very likely to be non smooth and highly sensitive, thus it does not generalize well to new inputs.
      You should be able to imagine that there must be a point where the model starts to be able to perfectly interpolate every datapoint. But it only has the exact amount of degrees of freedom needed to interpolate it exactly so it is forced to take a certain form. You can solve the equation for the parameters to get the exact function. As you add more parameters not all of them are needed and you have more freedom in choosing the parameters. The mechanism behind why it chooses parameters that make the function smooth again is simply because of regularization.
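
      For anyone who wants to see the double-descent curve actually appear, here is a compact sketch (assuming numpy; the target function, noise level and feature counts are arbitrary): minimum-norm least squares on random Fourier features, sweeping the number of features past the number of training points. Test error typically spikes near the interpolation threshold and then falls again as the model grows, though the exact numbers depend on the seed:

        import numpy as np

        rng = np.random.default_rng(0)
        n_train = 50

        def target(x):
            return np.sin(3 * x) + 0.5 * np.cos(7 * x)

        x_train = rng.uniform(-np.pi, np.pi, n_train)
        y_train = target(x_train) + 0.1 * rng.normal(size=n_train)   # noisy labels
        x_test = rng.uniform(-np.pi, np.pi, 500)
        y_test = target(x_test)

        for n_feat in (10, 25, 45, 50, 55, 100, 400, 2000):
            freqs = rng.normal(0, 3, n_feat)                         # random Fourier features
            phases = rng.uniform(0, 2 * np.pi, n_feat)
            def features(x):
                return np.cos(np.outer(x, freqs) + phases) * np.sqrt(2.0 / n_feat)
            # np.linalg.lstsq returns the minimum-norm solution once the system is underdetermined
            w, *_ = np.linalg.lstsq(features(x_train), y_train, rcond=None)
            test_mse = np.mean((features(x_test) @ w - y_test) ** 2)
            print(f"{n_feat:5d} features   test MSE {test_mse:8.3f}")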

  • @nightwishlover8913
    @nightwishlover8913 5 месяцев назад +27

    1.55 "the human intention was not well-coded". In the olden days, we had another expression for that: GIGO!

    • @a64738
      @a64738 5 месяцев назад

      Garbage in, garbage out... And with ChatGPT the problem is that it is programmed with woke idiot answers, AKA programmed with propaganda and lies to begin with on purpose... And the result is woke garbage...

    • @drdca8263
      @drdca8263 5 месяцев назад +3

      There’s an important thing to note in this, beyond simply GIGO: It is often harder than we might expect, perhaps even *much* harder, to produce as the input, that which wouldn’t qualify as “garbage” (as far as GIGO is concerned). In particular, the input, if provided to humans, might not function as garbage (on account of the humans having some relevant background information, or shared goals or context with the ones providing the input)

  • @scottmiller2591
    @scottmiller2591 5 месяцев назад +15

    Double descent (which is what is being described in the video) is purely due to having so many parameters, divided amongst elements ("neurons"), that the width of layers in neurons begins to approach the limit of an "infinitely wide" layer. This gives rise to what is referred to as a neural tangent kernel (NTK) that expresses the performance of the layers based on the *statistics* of the huge number of parameters in a layer, rather than as the large number of parameters themselves. As a crude analogy, computational fluid dynamics using Navier-Stokes equations is much, much simpler and has far fewer parameters (the statistical parameters of pressure, temperature, volume, and mass transport) than keeping track of the mass, position and momentum of all the individual molecules, in spite of them describing what is the same physical system. In the same way, having masses of parameters and neurons arranged properly and appropriate training algorithms results in the *sufficient statistics* of the parameters being important, rather than the individual parameters themselves, with the statistics being sufficient in this case to describe and perform the actual processing.
    This has been known since Radford Neal's 1995 thesis "Bayesian Learning on Neural Networks," which derived the collective, statistical properties of infinitely wide neural layers. Later work by Jacot et al. in 2018 called this collective performance the neural tangent kernel, and showed how it works in multilayered networks. Unfortunately many people, including many statisticians and AI researchers, aren't familiar with this work nor its statistical meaning, and assume something mysterious is going on. Again, a crude analogy would be making a computer that uses vortex shedding (there are such things - fluidic logic) for computation, and being baffled how the huge numbers of parameters of the atoms themselves could work to perform computations without overfitting. The practical difference between the analogy and neural networks is in fluidic logic, the elements are designed, discrete, and apparent to the designer - they are explicit - whereas in neural networks, such computational effects arise collectively without explicit design - they are implicit.

    • @whatisrokosbasilisk80
      @whatisrokosbasilisk80 5 месяцев назад +2

      Huh, didn't realize that NTK also has an explanation for double descent, neat!

    • @MatthiasClock
      @MatthiasClock 5 месяцев назад

      tf did i just read

    • @jan7356
      @jan7356 5 месяцев назад +1

      Could you please explain what you are saying here in simple terms? There are so many buzzwords in there that they just generate a pile of noise for me and probably almost everyone else. Can you maybe make a crude analogy without using words like “vortex shedding” or “fluidic logic”.
      “having masses of parameters and neurons arranged properly and appropriate training algorithms results in the sufficient statistics of the parameters being important, rather than the individual parameters themselves”
      I can’t tell if this is supposed to explain something or just rephrases the observation that more parameters overfit less in the most cryptic way possible.
      Also, are you sure you don’t overfit more with more parameters if you just do naive training without any regularization tricks and adding noise and dropout and sparsity constraints and early stopping and what not, and instead reuse the data a gazillion times until your model “converged”? Of course you need to train a larger model for many more rounds until it will finally overfit (because it takes many more iterations to get more parameters to converge), but it still will, won’t it eventually also overfit and then even worse?

    • @TheGreatAmphibian
      @TheGreatAmphibian 5 месяцев назад +1

      @@jan7356 I would ignore the comment you’re asking about - and the video - and read rich_tube’s post above. You’re asking excellent questions.

  • @BlindintheDark
    @BlindintheDark 4 месяца назад

    I'd imagine part of the answer is because of the process.
    If the points converge on a solution, that's only one step; additional data is held back for verification, and if the model cannot predict the verification set then the model is tossed.

  • @XtremTodXtrem
    @XtremTodXtrem 23 дня назад

    Maybe the amount of "drop out layers" was increased as well, which led the model to diversify the information more evenly across the weights, and thus to a more robust and less overfitted model.
    Another explanation would be that the training set is so complex that a model with just a few layers has to overfit in order to get a good loss. For models with more parameters, overfitting is not needed because it's easier to generalize with more layers.

  • @AaronALAI
    @AaronALAI 5 месяцев назад +11

    Things get even more wild: go well past overfitting and the model will experience a phase change called "grokking". Please look this up, it has just been discovered and it makes the models perform almost perfectly on validation data. It's a serious game changer.

    • @darrenb3830
      @darrenb3830 5 месяцев назад +2

      Is this specific to transformer architecture or more broadly such as LSTMs?

    • @Juttutin
      @Juttutin 5 месяцев назад +2

      That's exactly what this video is about. She just didn't use the term.

    • @UnsoberIdiot
      @UnsoberIdiot 5 месяцев назад

      Every proper nerd groks what it means to grok (or at least has a fairly good idea) and will thus immediately understand what's being talked about when the word "grokking" is used.

    • @AaronALAI
      @AaronALAI 5 месяцев назад

      I'm not sure, I just learned about this today; I'm going to review this paper tonight: arXiv:2405.15071 @@darrenb3830

    • @alansmithee419
      @alansmithee419 5 месяцев назад

      This has been known for a few years actually, although I guess that could be within whatever you mean by "just been discovered" tbf, I just feel that's a pretty long time for AI research.
      For anyone who doesn't quite get it (I sure didn't): specifically an AI that has overfitted may eventually, by continuing the training process, "grok" the problem - a term essentially meaning that it seems to figure out somehow what is actually going on and starts generalising really well for seemingly no reason.
      I specify this because I initially thought OP meant that continuing to make the AI more complex would lead to grokking. This is not the case (though maybe complex AIs are required for grokking to occur at all, IDK). This is something that exists on top of what Sabine discussed in the video - which was the effects of making the model larger - and works in tandem with it - grokking is an effect of continuing to train the same already overfitted model.
      Edit: NGL I just learned about this and almost definitely got a few things wrong, I'm sure someone will fill in the details (pls).

  • @Thomas-gk42
    @Thomas-gk42 5 месяцев назад +41

    Six minutes of compressed and very interesting information and thoughts, thank you once again. The black box problem is not a special AI one, is it? I know that from my twelve-year-old GPS navigation device, which is truly not an AI: I go the same way several times and it gives me another route every time without me changing the settings😂. Anyhow, I find it hopeful, not scary, that AI works better than predicted.

    • @SabineHossenfelder
      @SabineHossenfelder  5 месяцев назад +36

      aren't we all black boxes of some sort?

    • @Thomas-gk42
      @Thomas-gk42 5 месяцев назад +3

      @@SabineHossenfelder We are!!!😘

    • @yeroca
      @yeroca 5 месяцев назад +6

      @@SabineHossenfelder squishy, wet, gray boxes.

    • @borninvincible
      @borninvincible 5 месяцев назад +2

      @@SabineHossenfelderit's just the multiverse ::grins in dave duetch:::

    • @DreamskyDance
      @DreamskyDance 5 месяцев назад +8

      GPS has a precision error of 20 to 50 meters, as far as I know. If there are two routes that are algorithmically close to equally good for you, maybe those few extra meters one way or the other decide which route is better, based on small changes in your location.
      The algorithm is not an AI in any way, but when you are sorting stuff, sometimes one thing with some number parameter being bigger by only 0.0001% than the other comes out on top, and sometimes the other is just a little bit bigger and it comes out on top.

  • @EpicCamST
    @EpicCamST 5 месяцев назад +9

    I have published a paper about this called the Weights Reset technique. It's really very interesting, because complexity is much more than just the number of parameters in a model.

    • @ArawnOfAnnwn
      @ArawnOfAnnwn 5 месяцев назад

      Aren't there already a lot of regularization techniques in the models used to combat overfitting?

    • @EpicCamST
      @EpicCamST 5 месяцев назад +1

      @@ArawnOfAnnwn Indeed there are 😀, from basic to complex. However, it's a general problem that there are no universal recipes in machine learning, so people keep constructing more methods, architectures, etc. Btw, regularization is not only about overfitting; e.g. convnets can be viewed as a regularization over dense/linear layers.

    • @konstantin7596
      @konstantin7596 5 месяцев назад

      @@EpicCamST Hey, maybe you can tell me the name of the paper? :) Is it public anywhere without special access? Maybe even on the arχiv?

    • @EpicCamST
      @EpicCamST 5 месяцев назад

      @@konstantin7596 Hi, sure, it is open access and you can google it by the title "The Weights Reset Technique for Deep Neural Networks Implicit Regularization"

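      As a rough sketch of the general idea behind a weights-reset style of implicit regularization (periodically re-initializing a random fraction of weights during training), the snippet below is a loose illustration only, not the cited paper's exact algorithm; the model, data, and reset fraction are arbitrary assumptions:

      import torch
      import torch.nn as nn

      def reset_random_weights(model, fraction=0.05):
          # Re-initialize a random subset of the weights in every Linear layer.
          with torch.no_grad():
              for module in model.modules():
                  if isinstance(module, nn.Linear):
                      mask = torch.rand_like(module.weight) < fraction
                      fresh = torch.empty_like(module.weight)
                      nn.init.kaiming_uniform_(fresh)
                      module.weight[mask] = fresh[mask]

      model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.CrossEntropyLoss()
      x, y = torch.randn(512, 20), torch.randint(0, 2, (512,))

      for epoch in range(100):
          opt.zero_grad()
          loss_fn(model(x), y).backward()
          opt.step()
          if epoch > 0 and epoch % 10 == 0:
              reset_random_weights(model)   # periodic partial reset as a regularizer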

  • @billandersen1389
    @billandersen1389 4 месяца назад +1

    This is basic. ANNs of all types are correlation machines that use statistical techniques to make function approximations. Correlation is not causation. QED

  • @melkiorwiseman5234
    @melkiorwiseman5234 4 месяца назад +1

    The biggest problem of all with current AI is that people actually expect it to be intelligent, when it definitely is not.
    Current "AI" is just a very complicated pattern-finder and matcher. It's a complicated word and phrase shuffler. It's an instrument which attempts to find a pattern which matches your request. The only difference between AI art, AI stories or AI driven chatting is in how the output is represented. The goal of the AI is the same in any case: Find something which matches your request.
    Where AI falls down is when it doesn't know what matches. The trouble is that it doesn't have any concept of "I don't know" and so even if it can't fulfil your request, it will still come up with something which, at first glance, appears to do so. Once you examine its output critically, you discover the problems which, at best, show that it was the product of an AI rather than from a human mind and, at worst, make the output useless for your stated purpose.
    AI can be useful, but only if you keep in mind that it can't actually think, that it doesn't actually "know" anything, and that it will provide an output even if that output is nonsense because it doesn't have the information it needs in order to satisfy your requirements.
    Current AI will never tell you "I'm sorry, Dave, but I'm afraid I can't do that." Who knew that that could be a bad thing? 😏

    • @thedrumdoctor
      @thedrumdoctor 4 месяца назад +1

      Absolutely this! I use AI to take the heavy work out of creating content for product listings on e-commerce, but it's shocking to see how much inaccurate information it throws back. It's great up to a point, but you have to read *everything* it throws back at you and be prepared to tell it what it got wrong. The media push AI as the panacea to solving so many problems but I doubt the people who write the articles have much experience in actually using it every day. If they had to use it then they would be writing more about how unimpressive it can be when it's asked to solve non-mathematical problems.

  • @Parad0x0n
    @Parad0x0n 5 месяцев назад +5

    Actually, there is a growing research interest in understanding the training phases of AI better.
    For example, there is a paper by Anthropic "In-context Learning and Induction Heads" where they show that at some point during training, the LLM learns how to predict the next word by looking at similar examples in the context window. This ability gives a massive reduction in the loss function during training

    • @asimong
      @asimong 5 месяцев назад +1

      That is interesting, and could conceivably fit in with my own neglected work from the 1990s.

    • @anonmouse956
      @anonmouse956 5 месяцев назад

      Does “similar examples” mean something analogous to related questions?

    • @hyperduality2838
      @hyperduality2838 5 месяцев назад

      Complexity is dual to simplicity.
      Syntax is dual to semantics -- languages or communication.
      Large language models (neural networks) are using duality:-
      Problem, reaction, solution -- the Hegelian dialectic.
      Input vectors can be modelled as problems (thesis), the network reacts (anti-thesis) to the input and this creates the solutions, targets or goals (synthesis).
      The correct reaction or anti-thesis (training) synthesizes the optimal solutions or goals -- teleology.
      Thesis is dual to anti-thesis creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic.
      Neural networks or large language models are using duality via the Hegelian dialectic to solve problems!
      If mathematics is a language then it is dual.
      All numbers fall within the complex plane.
      Real is dual to imaginary -- complex numbers are dual hence all numbers are dual.
      The integers are self dual as they are their own conjugates.
      The tetrahedron is self dual -- just like the integers.
      The cube is dual to the octahedron.
      The dodecahedron is dual to the icosahedron -- the Platonic solids are dual.
      Addition is dual to subtraction (additive inverses) -- abstract algebra.
      Multiplication is dual to division (multiplicative inverses) -- abstract algebra.
      Teleological physics (syntropy) is dual to non teleological physics (entropy).
      Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics.
      "Always two there are" -- Yoda.
      Your mind is syntropic as it solves problems to synthesize solutions -- teleological.

    • @Parad0x0n
      @Parad0x0n 5 месяцев назад +1

      @@anonmouse956 In its simplest form, it works just like that: if it sees a word like "Mr." and within the context window there was already a "Mr." followed by a "Jones", it becomes much more likely to again write down "Mr. Jones". This sounds trivial and obviously useful, but an LLM has to learn this, since it starts from zero knowledge of how language works.
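
      The copying behaviour described above can be written down as a one-line rule, which is roughly the behaviour an induction head ends up implementing with attention (the real mechanism is learned; this is only the behavioural rule, with made-up example tokens):

      def induction_predict(context, current):
          # Scan backwards for the last previous occurrence of the current token
          # and return whatever followed it; None if it never appeared before.
          for i in range(len(context) - 2, -1, -1):
              if context[i] == current:
                  return context[i + 1]
          return None

      tokens = ["Mr.", "Jones", "went", "home", ".", "Then"]
      print(induction_predict(tokens, "Mr."))   # -> "Jones"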

  • @mattkipper4653
    @mattkipper4653 5 месяцев назад +10

    This sounds like the Dunning-Kruger effect for AI.

    • @asmyself4021
      @asmyself4021 5 месяцев назад

      That's actually a good summary of AI.
      Explains the gaslighting too.

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq 5 месяцев назад +1

      It is actually. The difference is that the AI just needs to be told what was wrong and what is right and it will correct accordingly.

  • @poolschool5587
    @poolschool5587 5 месяцев назад

    Apparently, when it comes to overfitting, more data can dilute the impact of noise or outliers. With more data, the noise becomes a smaller fraction of the entire dataset, thus reducing its influence on the model’s learning process. And that makes the complex model perform better.
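
    A quick way to see this effect: train the same deliberately high-capacity model on growing amounts of noisy data and watch the gap between training and test error shrink. The dataset and model below are arbitrary illustrative choices, not anything from the video:

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)

    def make_data(n):
        x = rng.uniform(-3, 3, size=(n, 1))
        y = np.sin(x[:, 0]) + rng.normal(scale=0.3, size=n)   # true signal + noise
        return x, y

    x_test, y_test = make_data(2000)
    for n in [20, 200, 2000, 20000]:
        x_tr, y_tr = make_data(n)
        model = DecisionTreeRegressor().fit(x_tr, y_tr)        # deliberately high capacity
        train_mse = np.mean((model.predict(x_tr) - y_tr) ** 2)
        test_mse = np.mean((model.predict(x_test) - y_test) ** 2)
        print(f"n={n:6d}  train MSE {train_mse:.3f}  test MSE {test_mse:.3f}")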

  • @thomasschon
    @thomasschon 5 месяцев назад +1

    Try this peculiar exercise on a large language model. If you ask it, 'I have 5 apples today; yesterday I ate 3 apples; how many apples do I have left today?' it will answer 2. If you can convince the model to use reasoning instead of letting probability detection through pattern recognition come up with the answer, it will answer 5 and then state, 'because how many apples I ate yesterday has no bearing on today'. Then you can swap apples for oranges and ask the same question again, and it will answer 2 again.

  • @Lazdinger
    @Lazdinger 5 месяцев назад +6

    The “you can’t crash a train that never leaves the station” answer sounded kinda like a glorious StackOverflow response.

    • @tedmoss
      @tedmoss 5 месяцев назад

      No, that's part of logic.

    • @Lazdinger
      @Lazdinger 5 месяцев назад

      @@tedmoss _gloriously_ logical.

    • @FrickinCCDeVileV
      @FrickinCCDeVileV 3 месяца назад +1

      "What are you trying to achieve?"

    • @Lazdinger
      @Lazdinger 3 месяца назад

      @@FrickinCCDeVileV 😂

  • @wiggles7976
    @wiggles7976 5 месяцев назад +4

    I wasn't sure what overfitting was from the quick description in the video, so I googled the definition: "In machine learning, overfitting occurs when an algorithm fits too closely or even exactly to its training data, resulting in a model that can’t make accurate predictions or conclusions from any data other than the training data."

    • @IngieKerr
      @IngieKerr 5 месяцев назад

      A good linguistic human comparison would be when children first learn to speak and often use regular conjugations of verbs, especially in the past tense, applying -ed to all past verbs, e.g. "My toy broked" or similar... i.e. the child has learnt enough data to pick up the regular ending, and even an irregular conjugation, but not enough data to realise that the irregular form doesn't also need the regular ending.

    • @wiggles7976
      @wiggles7976 5 месяцев назад

      @@IngieKerr I don't think that a child is overfitting, or at least this is too trivial of an example if it is overfitting. What's going on here is that the child learned a rule, and thought it applied everywhere, but the rule had exceptions. AI is supposed to know that there will be exceptions to the outcomes, whereas the child doesn't. I saw an example of overfitting where an AI was trained to predict if a person would default on their loan, and it was able to predict the outcome of 97% of the people in the training data, but only 50% of the people in the real world data.

    • @bornach
      @bornach 5 месяцев назад

      @@wiggles7976 How about when you feed Udio all the keywords tagging a specific song from a catalog, and perhaps some of the lyrics, and it just spits out a cover version of that exact song with the same melody and chord progression - it was incapable of extrapolating a completely different melody. Is that a case of overfitting?

    • @wiggles7976
      @wiggles7976 5 месяцев назад

      @@bornach I don't know what Udio is but producing music doesn't really fall into the category of "making predictions", which is what the definition I quoted above says. There's no way to test if an AI-generated song is "correct" or "incorrect" since correctness is not a quality of music. Correctness could be a quality of music theory though. If I say a C chord is C F G, then I'm incorrect. An AI could try to predict music theory I suppose.

    • @zelfjizef454
      @zelfjizef454 5 месяцев назад

      I'm not sure I understand. It would mean that if a neural network ever finds a theory of everything that predicts reality with 100% accuracy, and thus fits its training set (extracted from reality) with 100% accuracy as well, that neural network would be considered overfitted?
      It seems some piece is missing from that definition.
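
      The textbook illustration of the quoted definition: fit polynomials of increasing degree to a handful of noisy points and compare training error with error on fresh data. The degree-14 fit below hits the training points almost exactly yet does far worse on new data (the data and degrees are arbitrary illustrative choices):

      import numpy as np

      rng = np.random.default_rng(1)
      x_train = rng.uniform(0, 1, 15)
      y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.2, size=15)
      x_test = rng.uniform(0, 1, 500)
      y_test = np.sin(2 * np.pi * x_test) + rng.normal(scale=0.2, size=500)

      for degree in [1, 3, 14]:
          coeffs = np.polyfit(x_train, y_train, degree)
          train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
          test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
          print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")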

  • @aniksamiurrahman6365
    @aniksamiurrahman6365 5 месяцев назад +7

    I think a neural net is a very good tool to model the logic by which a system works without knowing anything about its internal state.

    • @iantingen
      @iantingen 5 месяцев назад +1

      This is an honest question:
      How do you avoid attributing incorrect causality in the logic when modeling like this?
      In my experience, you get a lot of benefit in the short term, but it's very wasteful in the long term because the model is not generalizable.

    • @Fischdosepremium
      @Fischdosepremium 5 месяцев назад +4

      @@iantingen Modeling in ML is typically predictive. Establishing causality (from observational data) is rarely the goal and requires different methods.

    • @iantingen
      @iantingen 5 месяцев назад

      @@Fischdosepremium Predictive, but without any understanding of mechanism, correct?
      What is being predicted in that instance?

    • @Fischdosepremium
      @Fischdosepremium 5 месяцев назад +2

      @@iantingen Yes. Whether this is sufficient depends on the use case. Although interpretability is virtually always nice to have, predictive accuracy is generally paramount in applications where ML is the preferred tool.

    • @iantingen
      @iantingen 5 месяцев назад

      @@Fischdosepremium do you ever feel like that epistemological approach is wasteful compared to using (at least a little) theory?
      That’s been my experience, but I also know that my experience doesn’t generalize to everyone!
      I know that we’re getting out in the weeds a little bit, but I’d appreciate your thoughts about it!

  • @vast634
    @vast634 5 месяцев назад +1

    There are plenty of strategies to avoid a model overfitting (like random perturbations, changing the velocity of gradient descent dynamically, or reshuffling the training data set). Also, the training set of language text is now so large that the model might simply not have the capacity to overfit on it.
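
    For reference, a minimal sketch of the standard ingredients mentioned above (reshuffled mini-batches, a decaying learning rate, dropout), with arbitrary illustrative sizes and schedules:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    x, y = torch.randn(1000, 32), torch.randint(0, 4, (1000,))
    loader = DataLoader(TensorDataset(x, y), batch_size=64, shuffle=True)  # reshuffled every epoch

    model = nn.Sequential(
        nn.Linear(32, 128), nn.ReLU(), nn.Dropout(p=0.3),   # dropout as a regularizer
        nn.Linear(128, 4),
    )
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)  # shrink the learning rate over time
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(30):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
        sched.step()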

  • @ChristianIce
    @ChristianIce 4 месяца назад +1

    Why do people who don't know how something works assume that nobody knows?

  • @ronniew3229
    @ronniew3229 5 месяцев назад +5

    The sane side of yt.
    Danke.

  • @Goryus
    @Goryus 5 месяцев назад +10

    Sabine, modern neural networks DO have massive problems with overfitting. However, it doesn't become apparent until they have been trained enough to explain all the training data. After that, if you continue training them, they immediately become overfit. It is for this reason that most models are not trained nearly as much as they could be, and researchers deliberately stop their training early.

    • @coreyyanofsky
      @coreyyanofsky 5 месяцев назад +5

      this isn't true -- if it were, we'd never observe double descent in the first place

    • @adamrak7560
      @adamrak7560 5 месяцев назад +5

      Early stopping is deprecated. If you set weight decay correctly you can train the network far longer and it still learns useful stuff.

    • @ReclusiveDev
      @ReclusiveDev 5 месяцев назад

      @@adamrak7560 While weight decay, dropout, entropy regularization, momentum-based optimizers, etc. are all effective regularization strategies to limit overfitting, model checkpointing, and by extension early stopping, does not at all seem deprecated to me. It can still be seen in the results graphs of most academic papers this year (the graphs tend to stop when validation accuracy levels out), and it's telling that the default settings in both torch and tensorflow stop under conditions based on one form or another of loss-derivative estimate, i.e. when meaningful improvements are no longer made, rather than when train accuracy is 100%. Training indefinitely might be popular for LLMs (admittedly an area where I have limited interest), where the massive data repositories cause many user queries to lie roughly somewhere within the training set, such that overfitting is not a huge concern, but in machine learning at large I'd have to strongly disagree with you. There are papers with citations (>20 to be relevant) analyzing the robustness of early stopping published as recently as 2023, which says to me that the strategy is not deprecated if it's not even done being studied. If you have evidence to the contrary, or if your claim is about a particular subfield that I might not be considering, I'd love to learn more; or if you consider early stopping to be something other than "stopping training before training accuracy plateaus to avoid overfitting", then I'd be interested to hear a response.
      Have a nice day
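
      For concreteness, a minimal sketch of the two knobs being debated here, weight decay (via AdamW) and early stopping on a held-out validation set; the data, patience, and learning rate are arbitrary illustrative choices:

      import torch
      import torch.nn as nn

      x = torch.randn(1200, 16)
      y = (x[:, 0] + 0.1 * torch.randn(1200) > 0).long()
      x_tr, y_tr, x_val, y_val = x[:1000], y[:1000], x[1000:], y[1000:]

      model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
      opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)  # weight decay
      loss_fn = nn.CrossEntropyLoss()

      best_val, best_state, patience, bad_epochs = float("inf"), None, 10, 0
      for epoch in range(500):
          opt.zero_grad()
          loss_fn(model(x_tr), y_tr).backward()
          opt.step()
          with torch.no_grad():
              val = loss_fn(model(x_val), y_val).item()
          if val < best_val:
              best_val, bad_epochs = val, 0
              best_state = {k: v.clone() for k, v in model.state_dict().items()}
          else:
              bad_epochs += 1
              if bad_epochs >= patience:   # early stopping on the validation loss
                  break
      model.load_state_dict(best_state)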

  • @londonnight937
    @londonnight937 5 месяцев назад +5

    The graph you showed there at the end, error versus complexity.... It reminds me for some reason of the Dunning-Kruger effect graph. If you turn it upside down, it is identical. Maybe some connection?

    • @Nerdthagoras
      @Nerdthagoras 5 месяцев назад +2

      I too had that thought and decided to search the comments for someone else who perhaps had the same idea... Yes, the graph does indeed seem to be the inverse of the DK graph, but only because the Y axis is a measurement of error and not of confidence in knowledge. Seeing as outputs are based on the system's confidence in a result, that makes it an even more fitting comparison.

    • @Dongobog-ps9tz
      @Dongobog-ps9tz 5 месяцев назад +4

      No connection at all, unless you confidently insist there is one from a place of limited understanding :p, there would be a fairly ironic connection at that point.

    • @ffactory945
      @ffactory945 5 месяцев назад

      @@Dongobog-ps9tz hahaha, wanted to write the same thing: "you're giving an example"

    • @londonnight937
      @londonnight937 5 месяцев назад

      @@Dongobog-ps9tz I suppose so. I'm not saying there is a connection, but I am saying there may be a connection.

  • @paulpallaghy4918
    @paulpallaghy4918 5 месяцев назад

    We AI / NLU / LLM guys have a lot of fairly good explanations and theories. Anthropic & OpenAI have done some reveals of patterns in the weights etc.
    Our best theories note that:
    1. Logic is likely being learned
    2. Emergence of higher order capabilities is a real thing
    3. Deep learning does extract the parsimonious essence underlying data
    4. LLMs are actually pretty good at explaining how they arrived at conclusions

  • @burrahobbithalf
    @burrahobbithalf 5 месяцев назад

    Thanks for bringing this to light: I've been bitten by overfitting, but never trained beyond it to find the second fall-off.

  • @MichaelTilton
    @MichaelTilton 5 месяцев назад +14

    I wonder if Occam's Razor eventually comes into play in LLM AIs, either by accident or on purpose. Sometimes the Simplest Model is the best. That is, until it isn't.

    • @drdca8263
      @drdca8263 5 месяцев назад +3

      Well, it doesn't have to specifically be LLMs,
      but yes, there is the idea that by increasing the parameter count enough, gradient descent (+ whatever things they add to it) is able to find solutions that are actually (in a sense) "simpler" than the ones that would be found if the number of parameters available were a little smaller.

    • @jimothy9943
      @jimothy9943 5 месяцев назад +4

      Don’t think you understand what Occam’s razor actually is. It’s about adjudicating between two different theories making the same predictions. When two theories predict the same thing the one with fewer assumptions is said to have more theoretical virtue. LLM’s are not competing theories so it’s a category error to apply Occam’s razor to them.

    • @tomgooch1422
      @tomgooch1422 5 месяцев назад +1

      It's Nature's way but what does she know about input priorities?

    • @drdca8263
      @drdca8263 5 месяцев назад

      @@jimothy9943 Competing theories, perhaps not, but competing models? They seem to be that. They make a prediction of the observed dynamics of a system. Different ones make different predictions.

    • @jimothy9943
      @jimothy9943 5 месяцев назад

      @@drdca8263 They are competing models for performing a given task. They don’t make predictions. An LLM does not entail predictions about the dynamics of anything. ChatGPT’s model does not entail anything about Gemini. They are both different tools for completing similar tasks. A hammer does not make predictions any more than a drill. You would not say that the more theoretically virtuous lawn mower was the one with the fewest amount of parts. Occam’s razor does not apply.

  • @gnew1822
    @gnew1822 5 месяцев назад +60

    Rocks were never supposed to talk. They have played us for absolute fools

    • @TDVL
      @TDVL 5 месяцев назад +5

      Gaia is talking to us through silicon(e)…

    • @scudder991
      @scudder991 5 месяцев назад +2

      Intriguing perspective

    • @scudder991
      @scudder991 5 месяцев назад +2

      @@TDVL Also intriguing

    • @jeltoninc.8542
      @jeltoninc.8542 5 месяцев назад +4

      SILICONE more like it amen???
      (. )( .)

    • @TDVL
      @TDVL 5 месяцев назад

      @@jeltoninc.8542 amended :)

  • @dutchangle229
    @dutchangle229 5 месяцев назад +7

    Two more problems of AI: 1) It doesn't know what it doesn't know. Therefore it will always give you an answer with the confidence of an 11-year-old. 2) When the human brain is trying to figure something out, it can refer to other problems it does know the answer to, and derive an answer by analogy. We (usually) call that experience. Artificial neural networks lack the "experience" mechanism.

    • @hiddenbunny7205
      @hiddenbunny7205 5 месяцев назад

      I don't think you understand how neural networks work.

  • @quietStorm247
    @quietStorm247 5 месяцев назад

    Thank you so much, Dr. Hossenfelder, for this clear explanation of a very complex topic.

  • @JorJorIvanovitch
    @JorJorIvanovitch 5 месяцев назад

    Gathering data not used in the training set and running the program against that data to see how it fits is one helpful way to avoid overfitting.
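
    That check is usually done with a held-out split. A minimal sketch with scikit-learn (the dataset and model are arbitrary illustrative choices); a large gap between the two scores is the overfitting signal:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    print("train accuracy:", model.score(X_tr, y_tr))   # near-perfect on seen data
    print("test accuracy: ", model.score(X_te, y_te))   # the number that actually matters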

  • @sarcasmunlimited1570
    @sarcasmunlimited1570 5 месяцев назад +5

    Current AI models aren't trained to think in a general sense. They are trained to imitate whatever thinking is available on the Internet. In other words, these AIs emulate what has been said or written by humans. This way, you will never get AI smarter than humans, only faster and less prone to error in well-defined situations.

    • @drdca8263
      @drdca8263 5 месяцев назад +3

      Irrelevant to the video. I don’t think the video even uses the word “intelligence” outside of the phrase “AI”? And the video certainly isn’t specific to language modeling tasks.

  • @Drone256
    @Drone256 5 месяцев назад +6

    It's hard to overfit these massive LLMs during training because you have enormous amounts of highly variable training data relative to the number of weights. Isn't this obvious or am I just losing my mind?

    • @iFastee
      @iFastee 5 месяцев назад

      And you could also say that, due to the insane amounts of data, you end up covering most of the actual possible semantic space, compared with other problems where the unseen data represents 99% of the semantic space. I would also make the case that LLMs do not suffer from, and might even gain from, the concept of overfitting.
      What even is overfitting when you fitted literally all the fking data? You just left out new phrases that can be created, but the novelty in that input represents, what, 0.0000001% novelty where the model might fk up?
      Meaning... how could you even find the overfit if you trained a model on both the training and the testing data?

    • @Drone256
      @Drone256 5 месяцев назад

      @@iFastee Agree. That’s hilarious. Well said.

    • @Lolleka
      @Lolleka 5 месяцев назад

      sounds about right. move on

    • @Grizabeebles
      @Grizabeebles 5 месяцев назад

      Does this have any bearing on the Travelling Salesman problem or the Berry Paradox?
      An LLM "with all the data" is still a brute-force method, and that entails exponentially higher costs.

  • @PaulTheBeav
    @PaulTheBeav 5 месяцев назад +4

    How do we know Sabine isn't an AI?

    • @noway8233
      @noway8233 5 месяцев назад +1

      She is too funny to be😅

    • @Thomas-gk42
      @Thomas-gk42 5 месяцев назад

      I saw her live last year at a debate in London. She's flesh and blood!

    • @PaulTheBeav
      @PaulTheBeav 5 месяцев назад +1

      @@Thomas-gk42 That's exactly what an AI would say.

  • @lightest-d4e
    @lightest-d4e 4 месяца назад

    There are a couple of minor inaccuracies in this video:
    3:26 While talking about inference, the video shows backpropagation during training.
    4:01 horizontal and vertical axes are swapped in the verbal description of the graph.

  • @lostmsu
    @lostmsu 5 месяцев назад

    Double descent is not on the complexity/error graph; it is on the training-steps/error graph for the same model (i.e. unchanged complexity).

  • @Bassotronics
    @Bassotronics 5 месяцев назад +11

    Plot Twist: Sabine is an A.I.

  • @milaberdenisvanberlekom4615
    @milaberdenisvanberlekom4615 5 месяцев назад +4

    I really would love a collaboration between you and Robert Miles on AI safety. ❤

  • @chrishall5283
    @chrishall5283 5 месяцев назад +19

    The answer to the most famous ill defined question is 42.

    • @EaglePicking
      @EaglePicking 5 месяцев назад +5

      Plot twist: the question wasn't ill defined and the answer is actually 42.

    • @nat9909
      @nat9909 5 месяцев назад +3

      Until scientifically proven otherwise, the answer remains 42.

  • @humansizedaperture
    @humansizedaperture 5 месяцев назад

    I've been following AI for years now and this is the first I've heard of this insight. Thank you for thinking against the grain and helping your viewers do the same!

  • @fredparkinson1289
    @fredparkinson1289 5 месяцев назад

    At 4:00 you have confused the vertical and horizontal axes. You say the horizontal axis is for error, then put error on the vertical axis in the diagram, etc.

  • @utkua
    @utkua 5 месяцев назад +10

    Data without relation, a knowledge graph has limits. Yann LeCun, Meta's chief AI scientist, says current systems do not show even the slightest intelligence. Fear mongering by OpenAI is meant to get regulations in place to stop the competition. Altman even suggested that GPU sales be restricted and development be subject to a license. My take is that while it looks impressive, generative AI has very little practical use in its current state unless you are after investor money.

    • @Vondoodle
      @Vondoodle 5 месяцев назад

      I don't think it's about intelligence - more about misdirection and misuse by bad actors, or, more scarily, AI misdirecting and influencing due to errors - like WOPR (War Operation Plan Response, pronounced "whopper") from WarGames

    • @utkua
      @utkua 5 месяцев назад +2

      @@Vondoodle That is what I mean: it will never be something we can just trust in its current form. It writes code, for example, but because you cannot trust it you read the code, and in the end it saves time only for boilerplate. It is the same pattern for every other use case.

    • @janisir4529
      @janisir4529 5 месяцев назад

      @@Vondoodle So basically you'd blame AI for what people are doing?

    • @adamrak7560
      @adamrak7560 5 месяцев назад

      Yann LeCun is famous for making highly confident predictions based on his own assertions that turn out to be very false one year later. I suggest not listening to him at all, because his predictions are consistently off.

    • @Ockerlord
      @Ockerlord 5 месяцев назад +1

      LeCun is hilariously wrong.
      If you bet on the opposite of his predictions you would earn money 😂

  • @tommyfanzfloppydisk
    @tommyfanzfloppydisk 5 месяцев назад +20

    _"how do we stop human pollution?"_
    *AI pulls up a Thanos quote*

    • @t.kersten7695
      @t.kersten7695 5 месяцев назад +2

      what will we get with a real AI? the Terminator? or Bender from Futurama?

    • @vyvianalcott1681
      @vyvianalcott1681 5 месяцев назад

      Be careful, you'll summon the Roko's Basilisk morons who think it's reasonable to commit genocide because a machine they created told them to

    • @fishygaming793
      @fishygaming793 5 месяцев назад

      @@t.kersten7695 This is a very complex and unpredictable question, but if the world remains stable until then, likely between 5 and 30 years-ish. (As far as I know; maybe watch some videos from David Shapiro to get an idea.)

    • @ArawnOfAnnwn
      @ArawnOfAnnwn 5 месяцев назад

      ​@@t.kersten7695 Neither. Both those examples are anthropomorphic i.e. they were humanized by having a personality. Real AI has nothing of the sort. It doesn't want revenge, it just works to achieve the goals we give it - in the best way it reasons how, which may not be the 'best' in our eyes. The classic example is the paperclip maximizer, which destroys everything simply to make more paperclips.

  • @CesarHILL
    @CesarHILL 5 месяцев назад +5

    I might be wrong, or perhaps I didn't understand the explanations... but it sounds to me that the issue is more human than AI, in the sense that we are pattern-recognising creatures... we want to see patterns, and perhaps the randomness of AI is just patterns to our eyes... then again... I guess we could ask what a pattern is?
    Perhaps I'm just stupid. 😅

    • @whatisrokosbasilisk80
      @whatisrokosbasilisk80 5 месяцев назад +1

      With things like convolutional neural networks used in computer vision, we can see pretty clearly what kind of patterns tend to excite different layers of the network, we generally start from something like "Gabor filter" and work up to neurons that abstract visual understanding (interestingly, you can show what excites different layers to people and a corresponding region of the visual track will similarly light up).
      With LLMs, it's a little more gooey, we can see like basic syntax assembly in the first few layers so mapping connections between tokens, words, sentences and things that look like universal grammar start to pop out, so grammars and constructions of associations (this is the work of Atticus Geiger at Stanford) but then there's also this gooey-ness because it becomes abstracted "blah".
      So, there's this kind of latent space that stuff gets pushed into as we go deeper into the network and we have a newer method that we can use to probe it by basically watching what gets activated when we push certain examples through, so we can isolate stuff like neural representations encoding "cat" etc. but these are also pretty mushy and really depend on how you try to measure "cat-ness".
      My current wild bet is that we'll probably end up with a Heisenberg uncertainty style law that kind of boils down how useful this representation approach can really be - so no, I'd say it isn't stupid to identify that there's a measurement problem (ie. a human issue with looking for patterns in abstract pile of numbers).

    • @CesarHILL
      @CesarHILL 5 месяцев назад

      @@whatisrokosbasilisk80 well, I guess I should say thanks... and that you've given me a lot to study and think about... not sure I understood everything. :p but it does feel nice that someone with such knowledge doesn't think my understanding was stupid. :p even though I do feel like I need to study more this topic now. XD
      Way to make me feel both dumb and smart... you made me laugh out loud. So thanks for that too. XD XD

    • @whatisrokosbasilisk80
      @whatisrokosbasilisk80 5 месяцев назад

      @@CesarHILL Representation Engineering and Mechanistic Interpretability is what I'd focus on if you want to really understand this stuff.

  • @janerussell3472
    @janerussell3472 5 месяцев назад

    Let's be clear, μP and its depth extension are rich learning, and neural tangent parameterization is what they call lazy or poor learning.
    In μP, feature learning guarantees progressive sharpening to reach a width-independent sharpness at any scale; in NTP the progressive lack of feature learning when the width is increased prevents the Hessian from adapting, and its largest eigenvalue from reaching the convergence threshold.

  • @AlexKasper
    @AlexKasper 4 месяца назад

    I did my Masters on neural networks in the mid 90s, and I saw what's described here as overfitting. To me it was mostly because large networks were trained with lots of data. The thing is, each training round results in an error that is fed back into the network for the next round, and ideally each round results in a smaller error.
    The network I trained was used to cover gaps in instrument signals, with no input other than the data preceding the gap. The longer the input before the gap, the better, except that in some cases things weren't predictable at all.
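
    A much simpler stand-in for that gap-filling idea, assuming a plain linear autoregressive predictor rather than the commenter's network: fit it on the samples before the gap and roll it forward one step at a time, feeding each prediction back in (signal, gap length, and model order below are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(400)
    signal = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)
    history, gap_len, order = signal[:300], 50, 20

    # Build (window -> next sample) pairs from the data before the gap.
    X = np.stack([history[i:i + order] for i in range(len(history) - order)])
    y = history[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

    window = list(history[-order:])
    filled = []
    for _ in range(gap_len):              # predict one step, then feed it back in
        nxt = float(np.dot(coeffs, window))
        filled.append(nxt)
        window = window[1:] + [nxt]
    print(filled[:5])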

  • @theplanetrepairman
    @theplanetrepairman 5 месяцев назад +10

    My black box imploded when you said decent instead of descent.

    • @OperationDarkside
      @OperationDarkside 5 месяцев назад +1

      Or maybe she meant dessert. Or desert. Can't keep them separate in my head.

  • @willilaufmann38
    @willilaufmann38 5 месяцев назад +4

    Thanks excellent

  • @beatsaway
    @beatsaway 5 месяцев назад

    this is amazing u prevent the misconceptions by addressing them one by one in the intro

  • @IsZomg
    @IsZomg 5 месяцев назад

    There's a lot of overlap in the data points, so if you consider this a compression problem, you can learn useful abstractions while retaining the original data points accurately at the same time. There's information in the compressed structure that arises.

  • @agenticmark
    @agenticmark 5 месяцев назад

    People don't understand due to the dimensionality of the vector space, but it doesn't matter - we can't tell you what each neuron does either. We built these machines at scale because it helps them with "emergent" abilities, which just means that the models are doing exactly what we thought they would do - or else no one would have invested in the tech!

  • @washingtonx1
    @washingtonx1 5 месяцев назад +1

    This is one of the best videos I have seen on AI, and I keep up with this stuff much more than average. Well done, Sabine. This is an area to expand on. Please keep going. 🙏

    • @CaliLuke
      @CaliLuke Месяц назад

      No it's not, it's full of imprecisions and gross generalizations.

  • @HPDrifter2
    @HPDrifter2 Месяц назад

    Thank you, Sabine. This answers my earlier question.

  • @SianaGearz
    @SianaGearz 5 месяцев назад

    There is also "Grokking: generalisation beyond overfitting".
    When you have a model that by size and structure will tend to overfit the data, just training it longer can yank it out of the overfitted state and make it start generalising.
    The desired training times derive from model sizes. Correspondingly, it's possible that it's not model size that is causing generalisation for ever larger models, but the amount of training. There are also a lot of techniques deliberately used to fight overfitting.

  • @ZainPhilippe
    @ZainPhilippe 5 месяцев назад

    It is not overfitting because they are using the ReLU activation function, which is a bent line. If one were to use a sine or cosine as the activation function, it would overfit. Bent straight lines do not overfit the noise. That is my theory.

  • @willarchambault3776
    @willarchambault3776 5 месяцев назад +1

    NNs aren't a straight-ahead multiply. They aren't just weights; the biases are incredibly important and allow the construction of logic gates (and weighted complex logic gates) in each activation. Representing them as only a curve-fitting polynomial is misleading.

  • @hammerdureason8926
    @hammerdureason8926 5 месяцев назад

    Axis inversion at 3:54? The diagram has error on the vertical axis & complexity on the horizontal axis; however, in your audio explanation:
    "Let's suppose that the horizontal axis here shows how big the error of the model is on new data and the vertical axis shows, loosely speaking, the number of parameters, so the complexity of the model if you wish." Taken out of context, the diagram implies complexity leads to determinism/linearity.

  • @MagicDrinkMix42
    @MagicDrinkMix42 5 месяцев назад

    There actually is some work showing (at least for low-dimensional input) that the generalization error is small for uniformly sampled data when the weights of the deep network are small, using sort of general Lipschitz properties. But this is likely not too related to double descent.

  • @gettingstuffdoneright5332
    @gettingstuffdoneright5332 5 месяцев назад

    At 3:56 Sabine's diagram is graphed correctly but her narration gets the horizontal/vertical swapped, otherwise she makes her point well as usual.

  • @piotr780
    @piotr780 5 месяцев назад

    My hypothesis: larger models may provide space for specialized models to emerge within their structure, and as they transition from one phase to another, the error increases because the transitional state is not optimal at all; second, the hierarchical structure of DNNs works as regularization.

  • @koenichfuerst
    @koenichfuerst 5 месяцев назад

    The answer is the application of regularization methods, which is common in all modern deep learning architectures.

  • @hailrider8188
    @hailrider8188 4 месяца назад

    Neural nets are classifiers and not necessarily predictors. This classification can then be interpreted as "prediction" through an output node function, but the neural network is still a classifier, and therefore overfitting and underfitting are not a mystery. A neural network should be trained so that it is neither overfit nor underfit, that is, so that it is able to generalize and determine the correct outputs from inputs it wasn't trained on.

    • @schmetterling4477
      @schmetterling4477 4 месяца назад

      That sounds good, but actual "reasoning" doesn't work that way. One can't guess correct answers to logical problems from "generalization".

  • @jeff__w
    @jeff__w 5 месяцев назад

    4:44 “The speculation that makes most sense to me is that models don’t overfit when they could because the overfit isn’t stable under something that happens during the training runs. *They almost always default on a fit that is dominated by as few relevant parameters as possible,* and then fine tune with the remaining parameters. But it’s unclear whether that’s correct.”
    _Everything should be made as simple as possible, but not simpler._
    -attributed to Albert Einstein
    Along the same lines, there is the design principle of _Minimal Critical Specification:_
    _No more should be specified than is absolutely essential but it is necessary to identify what is essential._
    It seems that the weights and biases incorporate what is essential and _only_ what is essential.
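
    The quoted speculation about fits "dominated by as few relevant parameters as possible" has a classical analogue in sparsity-inducing regularization. As an analogy only (not a claim about what happens inside deep networks), an L1-regularized fit drives most coefficients to exactly zero when only a few features matter; the data and alpha below are arbitrary illustrative choices:

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))
    true_w = np.zeros(50)
    true_w[:3] = [2.0, -1.5, 0.7]          # only 3 of the 50 features actually matter
    y = X @ true_w + 0.1 * rng.normal(size=200)

    model = Lasso(alpha=0.05).fit(X, y)
    print("nonzero coefficients:", int(np.sum(model.coef_ != 0)))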

  • @daanschone1548
    @daanschone1548 5 месяцев назад

    The big question: how good is it for rope solo? (Lead and toprope)

  • @KommuSoft
    @KommuSoft 5 месяцев назад +1

    To be honest, I think marketing it as artificial "intelligence" has always been a bold move. They should really have named it a "statistical machine" or something similar, because in the end that is what it does: finding the most sensible parameters for a model based on an enormous load of data. But if the data is skewed in some way, that skew is also part of the model.

  • @Rovsau
    @Rovsau 5 месяцев назад

    The first part of the explanation for overfitting was confusing, but the last part made sense.
    I suspect it simply helps to have as many weights as possible, per potential answer.
    The AI should then be able to compare the relevance of each weight-relation through categories and sub-categories.
    I wonder what it would say if it read its own code.

  • @saturnhex9855
    @saturnhex9855 5 месяцев назад

    I believe the double descent problem in AI modeling is due to the emergence of a chaos/complex-systems-based "counter-fit" phenomenon: more parameters = more complexity. More parameters, and more complexity, introduce more chaos, which counters overfitting.

  • @ericalbers3923
    @ericalbers3923 5 месяцев назад

    When you have that many parameters, a "butterfly-like" effect comes into play: basically, small changes can have large effects, carried in 2nd and 3rd order derivatives of the weights. Think of it like the modulus in an encryption algorithm: the 'lost bits' are there, but the loss actually makes the potential 'overfitting' not overfit, because it kinda turns into a ReLU thing.

  • @KryptonianAI
    @KryptonianAI 5 месяцев назад +7

    She has such a brilliant way of presenting information. She encourages curiosity.

  • @janerussell3472
    @janerussell3472 5 месяцев назад

    Empirical evidence has shown that learning rate transfer can be attributed to the fact that under µP, and its depth extension, the largest eigenvalue of the training loss Hessian (i.e. the sharpness) is largely independent of the width and depth of the network for a sustained period of training time. The neural tangent kernel (NTK) describes how a neural network evolves during training via gradient descent; remarkably the scaling increases the learning rate 1,000 times because the training is more stable. [however some claim it is less sharp.]

  • @carlhopkinson
    @carlhopkinson 5 месяцев назад

    Yea, but it still writes incorrect code and must be spoon fed its own errors even if the specification is quite clear.
    And it is not a misunderstanding of ambiguous instructions either.
    Even when its error is pointed out in great detail, it still sometimes will make the same mistake over and over.

  • @yahm0n
    @yahm0n 5 месяцев назад

    It is easier to remember something if you understand it. This is key to both artificial intelligence and natural intelligence. As you continue training, broader neural patterns that correctly predict outcomes are able to trigger reward more often. The types of neural patterns that would result in overfitting will be shallower patterns that have a tougher time triggering reward. It is the same concept as evolution, survival of the fittest neural patterns.