It gets worse. You have an AI "blackbox" within the classic committee "blackbox", increasing the abstraction and opaquely of government policy and action. Nothing could possibly go wrong there...
Sabine, it's worth mentionint that automated decision making (including AI, including for credit scores), is prohibited in the EU. If there is a national law allowing it, the decision must still be appealable to a human, and one must always be allowed an explanation for any decision (including the parameters taken into account). The most recent, and most impactful, case would be C-634/21, Schufa Holding.
Wish we had this in the US. People get denied all the time for unknowable reasons. It's life or death and we hand it to some poorly written algorithm to avoid liability. The programmers should be held liable for not programming edge case scenarios. The AI will be worse as it currently barely gets anything right.
@@jaylewis9876 Yeah, but you have right to know what parameters it took into account. So if they aren't able to justify it properly in easier way, their only option might become to hand you their full model, as that's basically the justification. And even in that case, it might not be sufficient justification.
@@High-Tech-Geek "edge case scenarios" ... the stuff of nightmares for programmers, well actually, for the public but, it's not really the programmers fault, it's their bosses who design for the edge cases, or don't bother to try and think of them, or think it would cost too much (in programmer's salaries) to implement fixes
@@vulcanfeline almost every human, as an individual, has extenuating circumstances. Programs and algorithms that lump everyone into 3 or 4 buckets should be criminalized. Yet that's how most of our "modern day" services operate today. People are illegally denied x, y, and z and suffer through no fault of their own. All to feed the greed machine.
I read a news article that HR was using AI to scan the resumes of potential employees. It turned out they were all rejected because the guys in HR put in a parameter for experience with an outdated software. Finding out nobody was being hired made upper management looked into this, found out what happened, and fired the entire HR team.
It’s rare that I disagree with Sabine-mainly because I don’t fully understand the fields she’s discussing, though I find them fascinating. However, as a computer scientist and senior full-stack developer with decades of experience, this time, I know my subject well. So, I must say: even if, as Sabine claims, what AI is doing and will be used for is "nothing new," the speed, scale, and impact it will have are unprecedented. Sometimes, sheer speed and scale alone are enough to make a significant difference. AI itself is a prime example: the principles behind it have been around for a long time, but only recently has the hardware existed to run it effectively. Ultimately, though, what AI brings-whether good or bad-will still depend largely on the human decisions shaping its trajectory. And on that point, Harari is more than qualified to speak.
I agree. Law of large numbers, as it were. It becomes its own entropic momentum. We’ll see how things balance out in the end, but it’s not going away, and it’s not the same as before, that’s for certain.
Are you an antrophologist/behavioral scientist too? Because having a computer science degree doesn't give you all the knowledge or experience necessary to accurately predict how societies might react to AI.
@@joechip4822 If AI was a thing maybe I could, but it’s not.. There’s nothing intelligent about it, it’s 50 year old tech, the only difference today is it can pull from the internet.. A program that follows commands is not AI.
As a retired banker, who worked in the industry for 42 years, I remember the first scoring systems used to make loans. Data was put onto a "score sheet" - actual paper - if the score was at or over the minimum the loan was made. The "loan officer" was responsible for entering the data. To me this was the first exposure to AI in its simplest form.
I work in the field of Computer Science, and trust me, my peers have a reputation for making bad decisions or failing to predict outcomes... Tech bros have their heads up their artificial asses, and often get lost with delusions 🤦♂️
Unfortunately those working in that sector don't understand people very well, if at all. Given people are are at least half the equation it's not surprising their predictions aren't accurate.
A long time ago, this channel was about physics . Now it is about money. Probably, an AI proposed to Sabine, how to grow the channel with everyday buzzword topics…
Sabine, as a joke you asked why the EU is pushing for hydrogen. 1) The Sun cannot be made a scarce resource. 2) The ones who manage the oil pipelines are the same who will manage the hydrogen pipelines and they push for a technological model that links oil to hydrogen.
Not to worry! We also built an AI to talk about AIs talking nonsense about AI now. Models suggest it is nonsense, but we have high hopes that we will reach a singularity of recursion.
@@ThePurplePassage that example could be made for AI as well and be just as obvious why it did what it did. But why someone or AI decides on day x to invest y amount in stock z is as intransparant for a persons decision or an AI decision where I would bet if you ask AI it would outperform the human in a clear explanation.
Especially so as apparently we already made the decision in our brains before we are aware of it. So do we rationally decide anything or just react with feelings and prejudice. What about our free will that we like to think we have?
@@SPDLand In many cases it might be understandable why an AI chose what it did, but as I understand many AIs lack the facility to present its 'rationale' or basis for making the decision and simply spit out an answer inscrutably, which is part of what Harari was referring to in the video. Of course you can give AIs the capability to do this and in time they might even become the default, but that would take time and not necessarily become universal in which case any human would outperform the AI. Even if the AI does give its decision basis, I'm not sure that would necessarily be clearer than what a human might say, notwithstanding that humans can be irrational, but then being an emotional being one can potentially understand said irrationality anyway.
One has to understand the appeal of an AI to those who wish to use it. It promises to be faster and cheaper than a human being, but perhaps even more appealing is that nobody can be blamed for the result the AI spits out. "It wasn't me, it was the AI". A newer response than "I was only following orders", and also newer than "I just did what the management consultant suggested in the report".
To extend your point, progression of AI will be slowed significantly (at least in the US) by the legal system, which will seek to establish a framework for assignment of liability/blame. And lawyers aren't going to readily accept AI doing THEIR work.
The appeal of AI is that it can discern patterns that humans cannot. Manipulative people will like to exploit those patterns to their advantage. Cambridge Analytica, anyone?
I doubt the reason will be "I was only following orders." I expect it will be "the AI's advice seemed plausible, and no one was able to prove it was wrong."
AI is rhe beast that will subdue and deceive mankind. People will walk blindly into it. It can already steal peoples voices and image and tell you anything you could want to hear. Mosters from the Id!
@@brothermine2292to me it does pose a problem. Either we trust AI or we trust humans. Which one's reasoning should outweigh the other? May be easy to know now but as AI gets more conplex and its patterns beyond our own, idk at some point noone will question the AI or everyone will. Kinda like in I robot book when donnovan and powel thought there wqs a malfunction but basically the key to FTL was aomething the AI couldnt possibly explain to the humans so it just made it happen to them.
Thank you for describing Yuval Harari as an "expert". It kind of bugs me, I've been working in AI since the 1980's and some guy who studies Medieval History blathers a bunch of nonsense and people don't laugh at him the way they should. I was wondering "why don't I get quoted as an expert in AI?" and then I realized that someone who said "most of what you hear is hype and BS" isn't likely to make the news. Sabine is of course perfectly correct that the Finance industry has been using software to make decisions for decades. When I was a consultant for Accenture we worked with a big financial institution to develop software that could automatically make arbitrage decisions. The nature of Arbitrage makes it an obvious place to use software because being able to make decisions in seconds and even microseconds can result in millions $$ more profit.
@@mandingo1979 Are you sure that is the same kind of chaos? I think OP didn't mean systems with nice statistical robustness but where the result can change unpredictably when changing some initial parameter.
@@mandingo1979 Network does not become smarter with chaos "injection". It becomes more random. Fools may confuse randomness with creativity or even intelligence as results are more unusual.
In this era of generative AI, I actually agree more with Harari. He gave some examples in his latest book Nexus. The generative AI is quite different from previous algorithm (as we have seen in the examples of hallucinating ChatGPT). The previous algorithm was programmed using predefined answers. In contrary the generative AI choses next probable answer from thousands of possible answers. Harari's argument is that people would trust such technology more and more (the generative AI), although we don't know anymore how the generative AI came to a decision. And as they become more sophisticated with billions of tokens, even the engineers, who programme them, don't know anymore how they chose certain answer.
But you also can't tell how any human came to their answer. The difference is only in the assumptions you make, namely that another human works the same as you (which is usually wrong anyways).
You sir are wrong. It has nothing to do with generative AI and there are more than one “algorithm”. Blackbox AI have been around for sometime and are not trained precisely on predefined data. The only type of classical AI I can think of is 60’s Expert Systems.
@@Psicoeducazione Agree. I sometimes laugh at people who think in terms of AI from 10 years ago. Since the invention of transformers, scientists have managed to make the machine learned and be inventive. No wonder the godfather of the AI, like Hinton who was worried and left Google, has warned of unintended consequences of AI application.
We’re slowly approaching towards a cargo cult society on a global scale. Everyone is focused on getting answers quickly without any friction, but it’s in the friction where the learning happens. We still need the ability to understand the fundamentals as well as the stamina to solve problems through first principles if we’re to successfully maneuver our way through all future events we don’t have training for.
Unfortunately, with larger problems that's just not physically possible. Could you solve them manually? Sure, if you had a few million years to do so …
@@harmless6813 I'm not saying it shouldn't be used. Its useful and can provide a lot of insight. For example, they used machine learning to help approximate the qubit wormhole validation with a sparse number of qubits. But you know that's not where its headed. When your supervisor is advised by AI to assign tasks to you that you then feed your own AI and send the response back, that's just an unsatisfying dystopia. And while today's problem solvers do have enough fundamentals under their belts to identify interesting avenues for exploration and validate results, those skills are going to atrophy or never even be developed as the need is no longer there. Hence, I see a stronger reliance on consuming the results without the understanding of what's actually happening unless an ai is involved to help you parse what the other ai is saying.
I couldn't disagree more with our assessment Sabine. AI is being used to remove human decision making, like automation was used to replace manual labour in every situation it could be used (with robots picking up the slack nowadays.).
Arthur C. Clarke wrote a great deal about science and the future, along with science fiction. He had a seemingly poor record of predictions of the future, but that appears now to be mostly a matter of timing - and in science fiction, of the use of popular culture as context. But hisa novel "2001: A Space Odyssey" (written prior to the movie, but published after its release) contained some stunning insights into the problems with AI. His HAL-9000 AI persona was the most accurate prediction of both the technology of AI, and the dangers of relying on it for human survival. I read the novel 45 years ago, but have always remembered Clarke's tale of HAL's development, and in particular a statement that the engineers still didn't understand how HAL's neurons formed - it was enough just that they did. I thought that was a cop out, but now that AI has actually come into being, I see that it was a fictional prediction come true. That makes the major part of the film "2001: A Space Odyssey", a bit more alarming. My respect for Clarke has grown enormously over the years, and his predictions concerning AI have increased it enormously.
Well, I read Harari´s first book "A Brief History of Humankind" some years ago an figured it quite gripping to read, because it´s written nicely and his theses are refreshing original and give a surprisingly divergent perspective from the established view, though there was nothing fundamentally new in it and sometimes it was simply wrong (for me at least). I´m happy to see, that Dr. Sabine´s influence is growing obviously, since she´s heard on "BigBosses"-conferences and on a committee of the british parlament recently. Would be nice to have the link to these events.
The entire recording of the UK debate is here: parliamentlive.tv/event/index/651c93fa-6cc3-4e47-b399-6775472061df The finance event that I mentioned was last year in November (time flies -- I thought it was this year in the Spring), it's called the Finanz Informatik Forum, the website is here: www.f-i.de/Service/FI-Forum I was there on Nov 23, if you scroll down you will find the recording as the 2nd one of the day
I think this perspective can benefit from your AI/data videos. We are and will continue to shift from rule-based computer analysis for decision making to using AI-but the data issues will add more uncertainty on top of any inherent limitations of AI. I think the major difference is that we can easily understand existing rule-based approaches that are everywhere and have existed for decades, but that is not necessarily the case for AI.
Maybe you aught to try "common sense" decision making. Try the skill that comes from MBWA (Management By Walking Around). You don't learn the origins of man (or woman) by contemplating your naval.
@@drewdaly61 my wife is of German descent. Maybe that's why she's always complaining about being constipated, she doesn't know what the right angle is.
Yuval Harari is more a Universal Historian than a medievalist, though doubtless his PhD was on some aspect of medieval history. His first book, _Homo Sapiens_, was a great hit and is pretty impressive: don't scoff at the attempt at universal history, somebody has to put together all the pieces to give us ways of understanding how we got into this mess. He's smart, but he seems to be developing into everyone's favourite guru, which makes an old failed academic like me suspicious. Where he's clearly wrong, in an academic's way, is in thinking that because people don't understand how machines make decisions, they will be unable to override them (as Dr Hossenfelder points out near the end). Many a time a rather ignorant politician has failed to accept the suggestions of experts. Often this is because of bloody mindedness or more or less blatant corruption, but sometimes it's because what politicians are good at telling what people will accept (or, anyway, what about 51% of people will accept).
The missing piece is artificial superintelligence, which has to be biologically inspired to be efficient. That's why he doesn't say humans will be fully repleaced in his prediction.
@@jonatand2045 What would it mean for humans to be "fully replaced"? Why would all this activity, whether machine learning, "AI," or something beyond, be taking place?
@@michaelwright2986 It means what it says. Humans are no longer the dominant species because ai is smarter in all domains. It may decide spending resouces on us is redundant. That is not necessarily bad, because it could eliminate a lot of suffering and build minds that that feel inconmensurable pleasure.
AI will continue to gain accuracy and competence, hallucinate less and less and as they are blackboxes to a large extent already which will only increase with further complexity they will look more and more like oracles. They give us accurate answers and we a lot of the time can't check if it is true. We just have to trust a previous track record and that what it saying is true. They can't really explain how they came to give you a certain answer as that requires it to explain the whole model which is way too large and complicated in the first place.
@ManicMindTrick Transformers are running out of data and becoming too costly to train. Recently there was a paper about differential transformers, but it only somewhat increases performance. It remains to be seen what liquid neural networks can do. If you want true ai, most likely brain simulations have to be scaled.
What’s funny is that I have seen people rely on AI for idiotic trivial things, like what to say to someone else. So when you are talking via email to a colleague, you might in fact be talking to an AI. Yuval is probably entirely correct.
I'd say that particular example is more of an indictment of how pointless most corporate communications are rather than an indictment of AI specifically. If you can't tell whether the response came from your colleague or an AI, then I would question why you're wasting your colleague's time with the question in the first place. Just the AI directly.
Well, he is not wrong in a lot of ways. A lot of this is already happening. Medical procedures are approved based on algorithm which are optimized software or “AI”. If you go to your doctor and can’t get a CT or MRI approved, thank AI and machine learning. What a lot of people fail to realize is that AI utilizes machine learning to improve pattern recognition abilities based on data and to improve the AI decision making. Even the justice system is using AI to predict recidivism for potential parolee’s. As mentioned credit approval, insurance rates, marketing, hiring decisions. It’s not unreasonable to expect this to greatly expand over the next decade
Yes, a lot of this will be improved progressively, without the knowledge of people. Already recruiters are using software tools to extract keywords. Pair this with automated tools to extract the entire social media activities and AI, then you will be able to profile candidates according to political inclinations, medical history, family history. We are not far from the moment when recruiters who are conducting interviews will be able to access your entire life with one click.
Who designed the machine learning ? = HUMANS = human learning communicated by a machine no matter how some want to regard it as the machine using a a non human development of cognition. Theres a massive outbreak of physicist trying to solve the hard problem by deleting all the obstacles also - Entropy usually. Any theory can work on paper like that as its just a case of a A to z topology with anything preventing that route removed. If theres a piece of food at end of an uninterrupted pathway a dog with sniff it out. A rat in a place cell maze will to no matter how many times that A to Z route is altered. Physics papers these days say blah blah blah blah blah blah - and its all logical progression that others could follow. But having deleted every cosmological opposition to such a path. A.i. has done that also & we might as well have oafs with big sticks doing the judging. Fascial recognition A.I. in shops is sensible until someone looks at the security guard & they think it was a dirty look. Unfortunately society is being driven to hate each other is the truth & that persons face could well go a crime database just because painful joints make it hard to walk and that causes complex facial expressions. All the programming of the tech is human & one has to fake the concept that human do not make mistakes, or abuse their responsibilities just in order to pretend that A.I. is something else / is perfect.The same naturally flawed animal reasoning that occasionally locks the wrong persons up for life will program all A.I. is the likely fact = HUMAN.
@mikezappulla4092 AI connects the dots in multiple dimensions of Time, the self called Academia experts are just trying to show off their fake intelligence. 😆
@@cameroncameron2826 Yes, it was humans who decided that some people should not be able to get the medical procedures they need. The AI only follows the rules the humans set up.
The dude is hardly the average person and neither am I. There are no correct answers as to what will happen with AI. Far too many variables that are shifting now.
Ms Sabine, this is a very relevant soft topic you have touched. Amazingly done, too. On one or two to note: I have been in the Bank for over 30 years . I can say the Ministry or the Federal Bank will not give the choice to banks to decide, whether to handover to AI or not. But are likely to be instructed. It may be best if legal and political decisions are throughly scrutinized for airtight and constitutionally fit algorithms, and process and implementation given to AI. I see the ray of hope for humanity, and fair progress.
Police researcher (criminologist) here. UK just put out a call for expert input on a governance framework for data driven technologies i.e. algorithmic decision-making (predictive policing, risk assessments for offenders, etc). US currently trialling algorithmic/computational conduct analysis of body cam footage. Algorithms are coming on hard and fast and big tech reps are filling expert networks/panels. Will be interesting to see how this plays out
It will play out badly and stupidly because of "tech expert" rent seeking.. at least for 20 years, until the technology actually matures and can be trivially purchased. But that too might be monopolised to create a trough into the public purse.
@@johns5558 The police already knows who is doing the crime, but they do nothing because those are protected classes. the hardest hit by this system are legal migrants, they feel the full force of the police and their public image is ruined because they have the same skin colour as the illegals.
So on top of the existing algorithm (white fragility) that fixates on old white blokes they want what exactly once the self interrogations over, the actual confession formula ? Then what ? - the hypnogogic suggestioning linguistics for digging a 6 ft hole and jumping in ?
When NASA had to rely on computers to make decisions at high speed without the time for humans to review and intervene, they began installing multiple computers and having them vote. Essentially they began using computer committees. The Space Shuttle had five computers that voted with the majority decision enacted. While the AI algorithms are somewhat (but not completely) different from the Space Shuttle algorithms, the overall function of a decision making system is the same. So, using AI committees for fast response without human review makes sense as long as the AI's are all trained on different data sets.
@@MDformernavalperson Uh, not related to AI in any way. The 1960's computers in the Apollo were far less powerful than a 1980's calculator. They were not AI capable
Insurance companies already use algorithms to determine your risk factor. We are already using machine learning in a lot of places before ChatGPT. Algorithms is to "AI" as server-side is to "cloud computing" (It's more nuanced but it should be easy to get the point).
I jus t came across her channel the other day randomly but I have to say I find her break down of the topic of each video I've watched so far very compelling. She has the perfect amount of dry humor mixed with common sense and a bit of brutal honesty. This is turning out to be my best channel find of 2024!
I sit on the toilet the "right way" for 6 months now and can't go back. Only the fact that I peeled bananas from the wrong side was more shocking for me.
Teen me was lazy and depressive but not dumb. After pondering if humans did squat shitting for millions of years, why do we sit. And so i thought "what is the closest thing i can do to that" and I agree, its a one way road.
Isaac Asimov wrote a story (in the 1950s?) about a big computer per continent administrating/governing everything. And they behaved like their inhabitants, so he surely would call them "AI" today. This "super computers" actually did a good job, if only we could be so fortunate.
Who was it that wrote the story about the generals not trusting the AI to run the war? So one general fudged the data on its way in to make it more plausible, another fudged the decisions on the way out to make them more reasonable, and the field general having to decide on whether to carry out the orders, flipped a coin. I think it's also Asimov.
@@technoman9000 If so, then I will re-read them. I always paid more attention to Asimov's Foundation saga ; and not so much to the 'Robots' and 'Empire' sagas. My fault probably
Tired of AI gurus talking about something they discovered existed 2 months ago. Why aren’t we instead listening to computer scientists that have been studying AI for years? That’s the people we should listen to.
Because the current paradigm gives results now and while the others are hypothetical even though the brain is proof of principle and requires far less power.
I'm a software engineer and worked at a credit card processador. I didn't understand how the fees were calculated, but not because it was made by an IA, the code was only a bunch of mathematics operations, it's because I don't have enough financial knowledge to understand
I want to sincerely thank you for addressing this matter. It’s truly astounding how people, who have neither written a single line of code nor authored a scientific paper, are now making predictions with such unwavering confidence. They boldly claim what the future will look like in 10 years, as if they have all the answers. Frankly, I can't help but laugh when I hear these proclamations. It’s a strange phenomenon-those without firsthand experience in the complexities of these fields often make the boldest statements. I really appreciate that you've brought some much-needed perspective to the discussion
Instead of calling their predictions bullshit, why don't you explain in detail why you think that. It's always easier to call people out, but what is your reasoning behind your own statement?
There is a certain "inhuman" quality to AI decision making process. Can be well illustrated by modern chess algorithms. Chess is a logical game, supposedly. Turns out humans, even the smartest and the most logical ones, tend to infuse their decisions with emotion. Like, for instance, many games with AI nowadays have revealed that humans tend to unconsciously avoid sacrificing the queen, even in situations where it is the most effective path to victory. As well and do many other "human" things. AI doesn't do that. Guess it's one of the things that could be called "alien".
I remember priority-queue-based decision making in RTS video games being called AI I remember optical character recognition neural nets being called AI I remember google translate being called AI I remember face scanning algorithms being called AI I remember Google's DeepMind and DeepDream being called AI I certainly remember CleverBot being called AI These all existed before GPT. Artificial Intelligence is the name of an entire field of research and engineering, just because you haven't heard about it doesn't mean it didn't exist.
@@Keisuki Yes but he is right about it not being a buzzword yet at that time. AI was a term thrown around by people who dealt with DeepMind, CleverBot and face scanning algorithms. Today it's thrown around by taxi drivers and hairdressers. I work on an assembly line in a car factory. Few days ago I had a 30 min. conversation with my colleague about AI during lunch brake. Ten years ago something like that would be highly unlikely.
@@juremustac3063 It was a buzzword a couple times in history. In the 90's the gaming industry picked it up, and never let go. Any code that controls an agent in a game is called AI, even if it's just a very simple algorithm.
You are right in what you say, but I think Harari was referring to a modern feature of programming. As programs become more complex, it is difficult for their own creators to understand which lines of code are causing a distortion. A great example of this is RUclips itself, I believe that every content creator has had videos demonetized or with reduced reach for reasons that the platform itself does not understand and takes weeks to fix. The danger is not the AI itself, but the fact that there are not enough AI supervisors to make these adjustments and the ethics of the companies that use these programs. I believe that legal regulation is necessary, preferably minimalist, that protects basic consumer rights.
Following and appreciating your content for a while now, but this video is … disappointing? The title is misleading clickbait, the content itself is fluffy, your approach to Harari is dubious, and frankly the jokes are getting a bit stale. I’m thinking you’re maybe demanding too much of yourself wanting to produce a video everyday. I miss the less frequent, but way more in-depth videos you used to make. However, you are still my favorite scientific content creator on YT. 🙌
Thank you for this great video. I only see one thing to worry about: AI replacing a black box committee is an issue for me because AI can be tricked to do something wrong by a single programmer, whereas committees require several persons to agree. And this is the essence of our democracy : never let one person make a decision on his own. Which makes me think that AI should never be programmed by just one person. That is what could probably save democracy. Don’t let one AI decide but make sure several AI’s come to the same conclusion before taking any important decision.
I personally think the biggest threat these AI models pose is the power they give to non-governmental institutions that have never been voted for (I'm especially worried about large global corporations here, but also the ways it can be misused by individuals (I think everyone can imagine the threats of deep fake technology)). One of the largest fields generative AI models are being used in already is content creation. It's not farfetched to assume a company with lots of information about you will get the idea to sell personalised AI-generated content to you. We have already seen the political and social influence content creators may have (popularity of the AfD, conspiracy theories during Covid, etc.). That power in the hand of large companies (worst case: a single corporation with a monopoly) whose sole aim is to increase profits is not exactly a comforting thought. In an over-exaggerated dystopia, this would lead to some Wall-E-esque future with humans having been once and for all demoted "consumer", and the AI (or rather the corporation controlling it) having the sole power to decide your subjective reality as well as that of everybody else.
@@markplutowski Who are we to say that AI bureaucrats and politicians could not be bribed with offers of new servers and other hardware? I really wonder how long that would hold up.
@@crowe6961 Have you read Iain M. Banks "Culture" series - I suppose it's possible they could be bribed using other incentives - "hey'll I'll get you some time on Musk's Collossus cluster, what'ya say?" 😄😄
That's a very long winded way of telling us you agreed with everything Harari said. To be fair, if you're familiar with his other work, say, if you tap into some of his conversations with people like Tristan Harris, he understands full well, agrees with you entirely and communicates openly the fact what AI does indeed use a large number of datapoints human minds can't grasp, you're not really adding anything novel to the conversation there, and that we need benchmarks, checks and balances to sanitize its conclusions - and egulated by government - even if we can't understand them. He advocates for the same stuff you do. So with that minor gripe aside (and you drawing a line to his more distant background and not to his more recent two decades of professional "public conversation about the here and now" background - and of having written Sapiens, Homo Deus and 21 lessons, and now Nexus), it sounds like 1. You might _really_ want to look into reading _at least_ Sapiens, probably Homo Deus and 21 lessons too if you care about emerging stuff that needs regulation and steering... and 2. You're on the same bandwagon everyone else is. A bit late (judging by your seeming unawareness of who the thought leaders of the past 2 decades have been, among which Harari ranks pretty high), but.. welcome onboard nevertheless. Ironic that you levelled exactly that criticism at Germans ;)
AI is a just a buzzword being applied to anything for only one reason: money. As you point out, so much of this AI has existed already as machine learning.
Not only that, but AI as we know it now (LLMs) are incremental improvements on previous ML products. They are still statistical certainty optimizers, just with a broader scope. Still nothing novel coming from modern AI that doesn't have insightful humans at the helm. AI only making choices will contribute to stagnant economies and governments, not help us surpass them, because the powerful few in charge employing them will make sure that is so.
The wisest thing that should be on everyone's mind currently should be to invest in different streams of income that don't depend on the government. Especially with the current economic crisis around the world. This is still a good time to Invest in gold, silver, and digital currencies (BTC, ETH...)
I began investing in stock earlier this year, and it is the best choice I've ever made. My portfolio is rounding up to almost a million and I have realized that when a stock makes it to the news, chances are you're quite late to the party, the idea is to get in early blue chips before it becomes public. There are lots of life changing opportunities in the market, and maximize it.
as a supercomputer operator... no. the problem, i agree, is people relying on systems they don't understand. It does increase the risk of carastrphic failure... but anecdotally... the field heavily selects for people who try to cut corners, try to prove their assertions without controls and who have never heard of the concept of computational complexity. it is not known how deep the grift goes, but the idea that it relies on investor hype is not at all controversial even amongst people who don't see it for the puppet show it is
After using a few AI bots (e.g. ChatGPT, Gemini)... these products keep FORGETTING... previous answers. How can an AI bot do anything sensible if it keeps forgetting previous information?
This falls into the same category as those “we only use 10% of the brain” arguments. Ais can reason their motivations and are just getting better at it, the part he claims we dont understand is LITERALLY like saying “i won’t trust this politician even though they reasoned their position and i understand it because i dont fully understand the neural pathways the information took in their brain’s body of logic”
One of Arthur C. Clarks’s rules of science is that when a senior scientist says something is impossible they are usually wrong. My corollary is that it usually takes a hell of a lot longer than expected.
Dear Sabine, I heartly recommend that you read Harari's new book "Nexus", and maybe "Sapiens". I am sure both of you can learn a lot from each other. Harari does these kinds of podcasts mainly to promote his new book and ideas. In "Nexus" he basically argues why we need to be super carefull about deploying AI. Greetings from Germany
YNH has written about the future including AI in two books - "Homo Deus" and this years "Nexus" He is not a computer scientist or any sort of scientist - he's an historian. If you read his books he makes some good points and produces a relatively coherent argument. I dont think hes right but many of the points he makes are valid and as he is talking broadly what he says gives a reasonable basis upon which to argue. Do have a look - its worth the effort.
It’s interesting to observe that as the shortage of skilled workers grows, the drive for automation is also increasing. Architectural firms, for example, are exploring artificial intelligence to meet client needs in a more cost-effective manner. This raises a valid question: why shouldn’t we consider applying similar automation strategies to bureaucratic processes if they could help reduce costs and the need for personnel?
There's a huge misconception that data/information is - or should be - the most important thing in political-decision making. But wisdom is is far more than knowledge or information. It includes a recognition that ignorance is not only an inevitable part of things, but also that ignorance has a value of its own in helping to guide good decisions. In this respect I highly recommend a 6-part podcast written by Rory Stewart (ex-government minister) for the BBC called "The Long History of Ignorance", which examines the idea that understanding ignorance is as important as acquiring knowledge.
Many years ago I was told this: over the history of computing AI has been many things: parsing (in the 50s), pattern matching, image recognition (e.g. reading car number plates) and so on. They are "AI" until they are solved, then they get a name and the next big thing becomes "AI". What does "AI" really stand for ? "Anything Interesting".
AI predicts the next most expected thing in a pattern, so does that mean it will filter out creative individuals, those who by not fitting the pattern would stand out?
In my opinion, Harari has had a couple of successful books of the popular science type which given him an audience and an opportunity to earn money by making TV appearances etc. He is trying to stay current, that's all. AI and smart phones for kids are no more than the current hand-wringing and drama which began towards the end of the 18th century when people started writing novels. "Good heavens!" said the critics. "All this novel reading is just escapism. No one will bother to go to work and the world will grind to a halt!" Well, it didn't.
Ditto Sabine. Yuval makes great historical observations about prior communication breakthroughs and social impacts, but he is out of his lane when he prognosticates. Suddenly, his thinking becomes shallow and dominated by groupthink. Others have pointed out that he is not an economist, biologist, or environmentalist by profession, and this shows. The future is not like the past.
Good points. I've seen that author making the rounds on the News channels doing interviews. My favorite was the one with Ari Melber, because they get into some deep stuff. It sounds to me like his new book, which I want to read, is comparing AI technology to the printing press. Basically, how a lot of awful stuff has come from the invention of the printing press, but so has a lot of good stuff. And he points out that initially it was a bumpy road for the first 200 years. I am curious about whether it will take us 200 years to smooth out the kinks in AI lol. I'd like to think that we've gotten better to adapting to new tech by now. I think this author makes a lot of good points when he is reminding us what it's been like adapting to new technology in history, but things get a little thin when he tries to imagine the future world with AI. I think for those kind of speculations he should have teamed up with a good Sci Fi author lol. They are much better at imaging the "what ifs" than historians.
I have already found, I'm fighting AI bureaucracy everyday . Autocorrect on my phone, is just one example. Talk to a customer service person, you can tell AI dictates , their responses. I've had opportunities to simply ask the speaker if what they just said to me even makes sense. Many have had to admit, it did not, and, it wasn't really them. 🤔
It's amazing (and maybe a bit terrifying) how many people either in positions of authority or with massive platforms who don't understand what "AI" actually is, revert to a science fiction "model" for it and then talk about it publicly as if they know what they're talking about. It really shows the true state of things. We have all the information at our fingertips and people are more lazy than ever and more willing and more confident to speak on subjects they have no business speaking on. If it feels right or whatever. Cheese and rice...
Great vid as always, but the Lancelot quip about Harari was a little unfair. He was a military historian as an early career scholar, but the last military stuff he published was in the late 2000s. He's since become a rather prominent 'big history' kind of public intellectual, with an acute interest in, for instance, the history of human institutions and the history of knowledge. As such, he was interested in AI and its consequences before we all started talking about it.
Love your content to try to get a handle on scientific developments. 1. Running the world like bureaucracies do now. I get the analogy. Bureaucracies or AI don't control EVERYTHING, now or in the future. 2. You have characterised Harari as just a medievalist. No, he is mostly nowdays a public intellectual with multiple million copy publications behind him. Pretty global. I'm surprised you haven't heard of him in this context. I thought a German parallel might be Precht, ( parallel not necessarily in emphasis of ideas and issues) but don't think his reach is beyond the germanophone world. I speak eng, fr and ger, so that would be my media reach. Like to see a discussion on science and society between you and Harari. All the best
It was nonsense from the start. Its mostly just large language models but people want to "feel it" as something new and special and world breaking... which it probably will be eventually... maybe... idk. Its just marketing speak in 97% of cases right now... which makes it nonsense.
A lot of people don't even have a clue what is being called "AI". "Artificial intelligence" is loose term in computer science, a label used basically for any algorithm with emergent behavior. It has absolutely nothing to do with intelligence as most people understand the term. LLMs are "AI" in the computer science sense. But there is nothing intelligent about LLMs in any common meaning of the word. What most people imagine when they hear "AI" does not exist. We should make sure to always correct uninformed people when they personify chatbots and use them for unreasonable purposes (i.e. querying for facts).
As an AI Engineer I can tell you many of the people talking about AI in popular forums are simply trying to sell something and often don't know much about how it actually works. What you need to remember is that AI is only as good as the data it's trained on. The more uniform, ordered, specific and standardized the data is the better. I've talked to a lot of companies and organizations that really want to use AI in some way cause they've heard it's the future but the truth is most companies and organizations don't have the data to do anything very useful. Government related stuff is often the worst. Have you seen the convoluted language used in regards to several unrelated things in various bills and policies? You don't have to worry about AI running things for a long while. That's for sure.
The scariest part of AI is not the AI itself, but the gatekeeping. Last century the same problem occurred with integrated circuits. While IC are everywhere now, advancements in IC make it nearly impossible to be an IC startup. There are small set of companies responsible for producing all chips all around the world. AI is really bad at exceptions, but to be good enough you need to make it big, so big, that even top of the line personal PC cannot train such models. Creating bigger models will require processing power only a small set of corporations can afford. And there is no telling if such corporations start gatekeeping their trained models preventing any new startup AI companies from appearing.
Hang on. I think, and I may be wrong, you may be mistaking the speaker’s point about interpretability. A bank software’s reason for accepting one person’s loan application and not another’s is certainly “to make the business money” as you’ve correctly stated, but the speaker’s point is subtly different. He is asking “why did the algorithm look at his particular set of features and decide that the person was risky.” With statistical models we had much more interpretability than with modern deep learning. It’s the reason why interpretability is more of an issue now. It’s much easier to determine if a statistical model is doing something for the wrong reason. But there are people working on interpretability! And there are a lot of avenues to explore it.
He said we won’t know why we were refused a loan, and that is a distinction from today. Financials rules are used today, and the managers know exactly what those rules are. AI cannot tell you why it recommended anything; it’s just the output of unquantifiable “learning”. It makes mistakes with facts, hallucinates, and we can’t trace back why it made those mistakes.
This is not new NEWS. In 1986, Robert Hecht-Nielsen co-founded HNC Software, a neural networking startup in San Diego, based on his breakthrough work in predictive algorithms. HNC was perhaps the first start-up in the neural network/machine learning space to be significantly financially successful, and in many ways this is due to Hecht-Nielsen’s deep understanding that given enough training data and computational power, neural networks will nearly always find an optimal solution to a properly posed prediction problem.
AI has really improved my sense of humor. Every time a physics bureaucrat stops by to tell me I have to pay my taxes I get very conCERNed. No, wait, I said that wrong. What I meant was beeeeoooop [abrupt descending tone]
I really liked the parallel between AI "blackbox" and the classic comitee/government blackbox... Makes a lot of sense! Thanks for that
Magic
It gets worse. You have an AI "blackbox" within the classic committee "blackbox", increasing the abstraction and opaquely of government policy and action. Nothing could possibly go wrong there...
@@obsidianjane4413 Yo dawg! I heard you like black boxes, so I put a black box in your black box, so you can be confused while being confused.
Idk ai phone systems are basically the "same" as old phone systems.
Good luck getting a supervisor on the phone.
@@pierrecurie I just did a google image search for black box and I don't think it means what you think it means.
People calling ANY algorithms AI is really triggering, by that term basic math would be "AI"
But AI _is_ basic math. Linear algebra to the max.
Even simple filters hiring firms use are called AI.
Some people call a linear regression "machine learning".
Who cares what people call it? Capability is coming a long quite quickly.
Couldn't agree more.
Sabine, it's worth mentionint that automated decision making (including AI, including for credit scores), is prohibited in the EU. If there is a national law allowing it, the decision must still be appealable to a human, and one must always be allowed an explanation for any decision (including the parameters taken into account).
The most recent, and most impactful, case would be C-634/21, Schufa Holding.
Wish we had this in the US. People get denied all the time for unknowable reasons. It's life or death and we hand it to some poorly written algorithm to avoid liability. The programmers should be held liable for not programming edge case scenarios. The AI will be worse as it currently barely gets anything right.
The ai can present it’s recommended action and a human can hit yes/no 1000 times an hour.
@@jaylewis9876 Yeah, but you have right to know what parameters it took into account. So if they aren't able to justify it properly in easier way, their only option might become to hand you their full model, as that's basically the justification. And even in that case, it might not be sufficient justification.
@@High-Tech-Geek "edge case scenarios" ... the stuff of nightmares for programmers, well actually, for the public
but, it's not really the programmers fault, it's their bosses who design for the edge cases, or don't bother to try and think of them, or think it would cost too much (in programmer's salaries) to implement fixes
@@vulcanfeline almost every human, as an individual, has extenuating circumstances. Programs and algorithms that lump everyone into 3 or 4 buckets should be criminalized. Yet that's how most of our "modern day" services operate today. People are illegally denied x, y, and z and suffer through no fault of their own. All to feed the greed machine.
I read a news article that HR was using AI to scan the resumes of potential employees.
It turned out they were all rejected because the guys in HR put in a parameter for experience with an outdated software.
Finding out nobody was being hired made upper management looked into this, found out what happened, and fired the entire HR team.
So the job didn't require knowledge of old software? It's very common in some industries to have old software requirements.
It’s rare that I disagree with Sabine-mainly because I don’t fully understand the fields she’s discussing, though I find them fascinating. However, as a computer scientist and senior full-stack developer with decades of experience, this time, I know my subject well. So, I must say: even if, as Sabine claims, what AI is doing and will be used for is "nothing new," the speed, scale, and impact it will have are unprecedented. Sometimes, sheer speed and scale alone are enough to make a significant difference. AI itself is a prime example: the principles behind it have been around for a long time, but only recently has the hardware existed to run it effectively. Ultimately, though, what AI brings-whether good or bad-will still depend largely on the human decisions shaping its trajectory. And on that point, Harari is more than qualified to speak.
I agree. Law of large numbers, as it were. It becomes its own entropic momentum. We’ll see how things balance out in the end, but it’s not going away, and it’s not the same as before, that’s for certain.
Are you an antrophologist/behavioral scientist too? Because having a computer science degree doesn't give you all the knowledge or experience necessary to accurately predict how societies might react to AI.
It’s not AI.
@@Zagreus_07
Maybe you should use AI assistance to improve the quality of your comments here...😊
@@joechip4822 If AI was a thing maybe I could, but it’s not.. There’s nothing intelligent about it, it’s 50 year old tech, the only difference today is it can pull from the internet.. A program that follows commands is not AI.
As a retired banker, who worked in the industry for 42 years, I remember the first scoring systems used to make loans. Data was put onto a "score sheet" - actual paper - if the score was at or over the minimum the loan was made. The "loan officer" was responsible for entering the data. To me this was the first exposure to AI in its simplest form.
And all that has changed since is automation and better input options. The underlying system is still exactly the same.
Has nothing to do with AI. Algorithms are not AI.
@@harmless6813 AI *are* algorithms. And so are large parts of your brain.
@@harmless6813 What is AI but algorithms being very complex?
Graphitics. Yall needs some Asimov to get that ref
I work in the field of Computer Science, and trust me, my peers have a reputation for making bad decisions or failing to predict outcomes... Tech bros have their heads up their artificial asses, and often get lost with delusions 🤦♂️
the hype needed to get venture capital is not going to build itself
@@cyko5950you know what really hypes me up? a working product.
All about that investor money...
That's not specific to tech bros. Look around in other groups, and you'll find far worse predictions.
Unfortunately those working in that sector don't understand people very well, if at all. Given people are are at least half the equation it's not surprising their predictions aren't accurate.
"As soon as we started thinking for you it really became our civilization."
- Agent Smith
That is what I was thinking. While the tech has obviously improved, I'm not sure society has.
"The future is our time."
"Won't that be grand? The computers and programs will start thinking, and the people will stop!"
- Walter Gibbs, TRON (1982)
"You know something is happening, but you don't know what it is." -- Bob Dylan
@@ptonpcim sure society has declined
A long time ago, this channel was about physics . Now it is about money. Probably, an AI proposed to Sabine, how to grow the channel with everyday buzzword topics…
Well played.
Sabine, as a joke you asked why the EU is pushing for hydrogen.
1) The Sun cannot be made a scarce resource.
2) The ones who manage the oil pipelines are the same who will manage the hydrogen pipelines and they push for a technological model that links oil to hydrogen.
I wouldn't be surprised if it was AI talking nonsense about AI now.
Allen is not popular amongst the masses.
AI is DEFINITELY talking nonsense about AI now.
@@DoctyrEvilfull circle!
With googles NotebokLM you can do that in podcast form of you want.
Not to worry! We also built an AI to talk about AIs talking nonsense about AI now. Models suggest it is nonsense, but we have high hopes that we will reach a singularity of recursion.
We will never be sure why people decide what they decide either.
Oh I don't know, I think in many cases it's quite obvious - the burglar breaking into your house wants money/things that can be sold for money etc
@@ThePurplePassage that example could be made for AI as well and be just as obvious why it did what it did. But why someone or AI decides on day x to invest y amount in stock z is as intransparant for a persons decision or an AI decision where I would bet if you ask AI it would outperform the human in a clear explanation.
Especially so as apparently we already made the decision in our brains before we are aware of it. So do we rationally decide anything or just react with feelings and prejudice. What about our free will that we like to think we have?
@@peterwilson7532 non-existing
@@SPDLand In many cases it might be understandable why an AI chose what it did, but as I understand many AIs lack the facility to present its 'rationale' or basis for making the decision and simply spit out an answer inscrutably, which is part of what Harari was referring to in the video.
Of course you can give AIs the capability to do this and in time they might even become the default, but that would take time and not necessarily become universal in which case any human would outperform the AI.
Even if the AI does give its decision basis, I'm not sure that would necessarily be clearer than what a human might say, notwithstanding that humans can be irrational, but then being an emotional being one can potentially understand said irrationality anyway.
One has to understand the appeal of an AI to those who wish to use it. It promises to be faster and cheaper than a human being, but perhaps even more appealing is that nobody can be blamed for the result the AI spits out. "It wasn't me, it was the AI". A newer response than "I was only following orders", and also newer than "I just did what the management consultant suggested in the report".
To extend your point, progression of AI will be slowed significantly (at least in the US) by the legal system, which will seek to establish a framework for assignment of liability/blame. And lawyers aren't going to readily accept AI doing THEIR work.
The appeal of AI is that it can discern patterns that humans cannot.
Manipulative people will like to exploit those patterns to their advantage.
Cambridge Analytica, anyone?
I doubt the reason will be "I was only following orders." I expect it will be "the AI's advice seemed plausible, and no one was able to prove it was wrong."
AI is rhe beast that will subdue and deceive mankind. People will walk blindly into it. It can already steal peoples voices and image and tell you anything you could want to hear.
Mosters from the Id!
@@brothermine2292to me it does pose a problem.
Either we trust AI or we trust humans.
Which one's reasoning should outweigh the other?
May be easy to know now but as AI gets more conplex and its patterns beyond our own, idk at some point noone will question the AI or everyone will.
Kinda like in I robot book when donnovan and powel thought there wqs a malfunction but basically the key to FTL was aomething the AI couldnt possibly explain to the humans so it just made it happen to them.
Thank you for describing Yuval Harari as an "expert". It kind of bugs me, I've been working in AI since the 1980's and some guy who studies Medieval History blathers a bunch of nonsense and people don't laugh at him the way they should. I was wondering "why don't I get quoted as an expert in AI?" and then I realized that someone who said "most of what you hear is hype and BS" isn't likely to make the news. Sabine is of course perfectly correct that the Finance industry has been using software to make decisions for decades. When I was a consultant for Accenture we worked with a big financial institution to develop software that could automatically make arbitrage decisions. The nature of Arbitrage makes it an obvious place to use software because being able to make decisions in seconds and even microseconds can result in millions $$ more profit.
Agree, he is not an expert, just a WEF front man pretending to be an expert. His degree is in history, a degree one would do for a hobby.
A guy writes a best seller and from that point on he is an expert on everything
Chaos theory is perhaps a nice start, it will keep AI occupied for ever.
Quite the opposite. Out of chaos comes over every time I inject chaos into my artificial intelligence system it gets smarter
@@mandingo1979 Are you sure that is the same kind of chaos? I think OP didn't mean systems with nice statistical robustness but where the result can change unpredictably when changing some initial parameter.
@@mandingo1979 Network does not become smarter with chaos "injection". It becomes more random. Fools may confuse randomness with creativity or even intelligence as results are more unusual.
Chaos theory is not insoluble. Stress governs expressions in any system. Stress and emphasis govern the expression of information.
It's all judged good and bad or better or worse by statistical variance...
Who is setting the boundaries???
In this era of generative AI, I actually agree more with Harari. He gave some examples in his latest book Nexus. The generative AI is quite different from previous algorithm (as we have seen in the examples of hallucinating ChatGPT). The previous algorithm was programmed using predefined answers. In contrary the generative AI choses next probable answer from thousands of possible answers. Harari's argument is that people would trust such technology more and more (the generative AI), although we don't know anymore how the generative AI came to a decision. And as they become more sophisticated with billions of tokens, even the engineers, who programme them, don't know anymore how they chose certain answer.
But you also can't tell how any human came to their answer. The difference is only in the assumptions you make, namely that another human works the same as you (which is usually wrong anyways).
LOL 😆😆 that's a very big assumption, people are trusting everything less and less, with good reason!
You sir are wrong. It has nothing to do with generative AI and there are more than one “algorithm”. Blackbox AI have been around for sometime and are not trained precisely on predefined data. The only type of classical AI I can think of is 60’s Expert Systems.
Exactly! It is often a bad idea easily dismissing Harari ideas.
@@Psicoeducazione Agree. I sometimes laugh at people who think in terms of AI from 10 years ago. Since the invention of transformers, scientists have managed to make the machine learned and be inventive. No wonder the godfather of the AI, like Hinton who was worried and left Google, has warned of unintended consequences of AI application.
We’re slowly approaching towards a cargo cult society on a global scale. Everyone is focused on getting answers quickly without any friction, but it’s in the friction where the learning happens. We still need the ability to understand the fundamentals as well as the stamina to solve problems through first principles if we’re to successfully maneuver our way through all future events we don’t have training for.
Unfortunately, with larger problems that's just not physically possible. Could you solve them manually? Sure, if you had a few million years to do so …
Stable diffusion "Prompt Engineering" is the most cargo cult thing I've ever seen in modern life.
@@harmless6813 I'm not saying it shouldn't be used. Its useful and can provide a lot of insight. For example, they used machine learning to help approximate the qubit wormhole validation with a sparse number of qubits. But you know that's not where its headed. When your supervisor is advised by AI to assign tasks to you that you then feed your own AI and send the response back, that's just an unsatisfying dystopia. And while today's problem solvers do have enough fundamentals under their belts to identify interesting avenues for exploration and validate results, those skills are going to atrophy or never even be developed as the need is no longer there. Hence, I see a stronger reliance on consuming the results without the understanding of what's actually happening unless an ai is involved to help you parse what the other ai is saying.
There's already people who delegate _answering questions_ to chatGPT. Even I'm not that lazy!
THIS!
I couldn't disagree more with our assessment Sabine. AI is being used to remove human decision making, like automation was used to replace manual labour in every situation it could be used (with robots picking up the slack nowadays.).
Arthur C. Clarke wrote a great deal about science and the future, along with science fiction. He had a seemingly poor record of predictions of the future, but that appears now to be mostly a matter of timing - and in science fiction, of the use of popular culture as context. But hisa novel "2001: A Space Odyssey" (written prior to the movie, but published after its release) contained some stunning insights into the problems with AI. His HAL-9000 AI persona was the most accurate prediction of both the technology of AI, and the dangers of relying on it for human survival. I read the novel 45 years ago, but have always remembered Clarke's tale of HAL's development, and in particular a statement that the engineers still didn't understand how HAL's neurons formed - it was enough just that they did. I thought that was a cop out, but now that AI has actually come into being, I see that it was a fictional prediction come true. That makes the major part of the film "2001: A Space Odyssey", a bit more alarming. My respect for Clarke has grown enormously over the years, and his predictions concerning AI have increased it enormously.
Well, I read Harari´s first book "A Brief History of Humankind" some years ago an figured it quite gripping to read, because it´s written nicely and his theses are refreshing original and give a surprisingly divergent perspective from the established view, though there was nothing fundamentally new in it and sometimes it was simply wrong (for me at least).
I´m happy to see, that Dr. Sabine´s influence is growing obviously, since she´s heard on "BigBosses"-conferences and on a committee of the british parlament recently. Would be nice to have the link to these events.
The entire recording of the UK debate is here: parliamentlive.tv/event/index/651c93fa-6cc3-4e47-b399-6775472061df
The finance event that I mentioned was last year in November (time flies -- I thought it was this year in the Spring), it's called the Finanz Informatik Forum, the website is here: www.f-i.de/Service/FI-Forum
I was there on Nov 23, if you scroll down you will find the recording as the 2nd one of the day
@@SabineHossenfeldervery friendly, thank you!
@@SabineHossenfelder "it's called the Finanz Informatik Forum" was it organised by a 15 year old rapper? "Finanz dudez!"
Harari ? Pass
@@SabineHossenfelder ( 5:18 )°°° [■¡■]
I think this perspective can benefit from your AI/data videos. We are and will continue to shift from rule-based computer analysis for decision making to using AI-but the data issues will add more uncertainty on top of any inherent limitations of AI.
I think the major difference is that we can easily understand existing rule-based approaches that are everywhere and have existed for decades, but that is not necessarily the case for AI.
Maybe you aught to try "common sense" decision making. Try the skill that comes from MBWA (Management By Walking Around). You don't learn the origins of man (or woman) by contemplating your naval.
There is a wrong angle to sit on the toilet? WTF???!!!💩
Relax. Only in Germany.
Yep.
@@drewdaly61 my wife is of German descent. Maybe that's why she's always complaining about being constipated, she doesn't know what the right angle is.
@@JD-hh9io - it's 90*. (right angle. :D )
Actually yes. Or do you believe apes sit on chairs to do their business? It's stunning that most people, especially in the West, don't know it.
Yuval Harari is more a Universal Historian than a medievalist, though doubtless his PhD was on some aspect of medieval history. His first book, _Homo Sapiens_, was a great hit and is pretty impressive: don't scoff at the attempt at universal history, somebody has to put together all the pieces to give us ways of understanding how we got into this mess. He's smart, but he seems to be developing into everyone's favourite guru, which makes an old failed academic like me suspicious.
Where he's clearly wrong, in an academic's way, is in thinking that because people don't understand how machines make decisions, they will be unable to override them (as Dr Hossenfelder points out near the end). Many a time a rather ignorant politician has failed to accept the suggestions of experts. Often this is because of bloody mindedness or more or less blatant corruption, but sometimes it's because what politicians are good at telling what people will accept (or, anyway, what about 51% of people will accept).
The missing piece is artificial superintelligence, which has to be biologically inspired to be efficient. That's why he doesn't say humans will be fully repleaced in his prediction.
@@jonatand2045 What would it mean for humans to be "fully replaced"? Why would all this activity, whether machine learning, "AI," or something beyond, be taking place?
@@michaelwright2986
It means what it says. Humans are no longer the dominant species because ai is smarter in all domains. It may decide spending resouces on us is redundant. That is not necessarily bad, because it could eliminate a lot of suffering and build minds that that feel inconmensurable pleasure.
AI will continue to gain accuracy and competence, hallucinate less and less and as they are blackboxes to a large extent already which will only increase with further complexity they will look more and more like oracles. They give us accurate answers and we a lot of the time can't check if it is true. We just have to trust a previous track record and that what it saying is true.
They can't really explain how they came to give you a certain answer as that requires it to explain the whole model which is way too large and complicated in the first place.
@ManicMindTrick
Transformers are running out of data and becoming too costly to train. Recently there was a paper about differential transformers, but it only somewhat increases performance. It remains to be seen what liquid neural networks can do. If you want true ai, most likely brain simulations have to be scaled.
I like how skeptical you are absolutely everything and everybody and you have a very good reason to be...
What’s funny is that I have seen people rely on AI for idiotic trivial things, like what to say to someone else. So when you are talking via email to a colleague, you might in fact be talking to an AI. Yuval is probably entirely correct.
Well, if my emails suddenly start making more sense, maybe I should thank AI!
@@jackwaterman-lw4cothe trouble with "probably" is it misses out on the possibly.
@@jackwaterman-lw4co Salty
I'd say that particular example is more of an indictment of how pointless most corporate communications are rather than an indictment of AI specifically. If you can't tell whether the response came from your colleague or an AI, then I would question why you're wasting your colleague's time with the question in the first place. Just the AI directly.
So far ten people liked my AI generated reply. Does that count as passing the Turing test?
Well, he is not wrong in a lot of ways. A lot of this is already happening. Medical procedures are approved based on algorithm which are optimized software or “AI”.
If you go to your doctor and can’t get a CT or MRI approved, thank AI and machine learning. What a lot of people fail to realize is that AI utilizes machine learning to improve pattern recognition abilities based on data and to improve the AI decision making.
Even the justice system is using AI to predict recidivism for potential parolee’s. As mentioned credit approval, insurance rates, marketing, hiring decisions.
It’s not unreasonable to expect this to greatly expand over the next decade
Yes, a lot of this will be improved progressively, without the knowledge of people. Already recruiters are using software tools to extract keywords. Pair this with automated tools to extract the entire social media activities and AI, then you will be able to profile candidates according to political inclinations, medical history, family history. We are not far from the moment when recruiters who are conducting interviews will be able to access your entire life with one click.
Who designed the machine learning ? = HUMANS = human learning communicated by a machine no matter how some want to regard it as the machine using a a non human development of cognition. Theres a massive outbreak of physicist trying to solve the hard problem by deleting all the obstacles also - Entropy usually. Any theory can work on paper like that as its just a case of a A to z topology with anything preventing that route removed. If theres a piece of food at end of an uninterrupted pathway a dog with sniff it out. A rat in a place cell maze will to no matter how many times that A to Z route is altered. Physics papers these days say blah blah blah blah blah blah - and its all logical progression that others could follow. But having deleted every cosmological opposition to such a path. A.i. has done that also & we might as well have oafs with big sticks doing the judging. Fascial recognition A.I. in shops is sensible until someone looks at the security guard & they think it was a dirty look. Unfortunately society is being driven to hate each other is the truth & that persons face could well go a crime database just because painful joints make it hard to walk and that causes complex facial expressions. All the programming of the tech is human & one has to fake the concept that human do not make mistakes, or abuse their responsibilities just in order to pretend that A.I. is something else / is perfect.The same naturally flawed animal reasoning that occasionally locks the wrong persons up for life will program all A.I. is the likely fact = HUMAN.
@mikezappulla4092 AI connects the dots in multiple dimensions of Time, the self called Academia experts are just trying to show off their fake intelligence. 😆
@@mikezappulla4092 ahauahag
@@cameroncameron2826 Yes, it was humans who decided that some people should not be able to get the medical procedures they need. The AI only follows the rules the humans set up.
Yes, now that its hit mainstream and so many people have hands in the pie, the average person has become confidently incorrect.
But not my precious Sabine of course
The dude is hardly the average person and neither am I.
There are no correct answers as to what will happen with AI.
Far too many variables that are shifting now.
Did Sabine use a naughty expression at 6:13? I am shocked,lol!
Yes! And its the correct words to use for the EUs stooooopid Hydrogen rubbish.
Ms Sabine, this is a very relevant soft topic you have touched. Amazingly done, too. On one or two to note: I have been in the Bank for over 30 years . I can say the Ministry or the Federal Bank will not give the choice to banks to decide, whether to handover to AI or not. But are likely to be instructed. It may be best if legal and political decisions are throughly scrutinized for airtight and constitutionally fit algorithms, and process and implementation given to AI. I see the ray of hope for humanity, and fair progress.
Police researcher (criminologist) here. UK just put out a call for expert input on a governance framework for data driven technologies i.e. algorithmic decision-making (predictive policing, risk assessments for offenders, etc). US currently trialling algorithmic/computational conduct analysis of body cam footage. Algorithms are coming on hard and fast and big tech reps are filling expert networks/panels. Will be interesting to see how this plays out
Is there a proposal to have precogs submerged in a photon milk bath?
It will play out badly and stupidly because of "tech expert" rent seeking.. at least for 20 years, until the technology actually matures and can be trivially purchased. But that too might be monopolised to create a trough into the public purse.
@@johns5558 The police already knows who is doing the crime, but they do nothing because those are protected classes.
the hardest hit by this system are legal migrants, they feel the full force of the police and their public image is ruined because they have the same skin colour as the illegals.
@technoman9000 Oh no, HAL's developed a sense of humour!
So on top of the existing algorithm (white fragility) that fixates on old white blokes they want what exactly once the self interrogations over, the actual confession formula ? Then what ? - the hypnogogic suggestioning linguistics for digging a 6 ft hole and jumping in ?
When NASA had to rely on computers to make decisions at high speed without the time for humans to review and intervene, they began installing multiple computers and having them vote. Essentially they began using computer committees. The Space Shuttle had five computers that voted with the majority decision enacted. While the AI algorithms are somewhat (but not completely) different from the Space Shuttle algorithms, the overall function of a decision making system is the same. So, using AI committees for fast response without human review makes sense as long as the AI's are all trained on different data sets.
Yes, and there have been 19 astronaut fatalities. Remember the Apollo 1 fire, Challenger, Columbia?
Most will never trust that powerful entities aren't tipping the scale out of site!
@@MDformernavalperson Uh, not related to AI in any way. The 1960's computers in the Apollo were far less powerful than a 1980's calculator. They were not AI capable
@@jay90374 Sure, but the Great Old Ones are not AI so does this go anywhere?
@@kimwelch4652 ⁉️🤔 I don't understand your question 🤷
My job is so secret that even I don't know what I'm doing.
Insurance companies already use algorithms to determine your risk factor. We are already using machine learning in a lot of places before ChatGPT. Algorithms is to "AI" as server-side is to "cloud computing" (It's more nuanced but it should be easy to get the point).
I jus t came across her channel the other day randomly but I have to say I find her break down of the topic of each video I've watched so far very compelling. She has the perfect amount of dry humor mixed with common sense and a bit of brutal honesty. This is turning out to be my best channel find of 2024!
I sit on the toilet the "right way" for 6 months now and can't go back.
Only the fact that I peeled bananas from the wrong side was more shocking for me.
Teen me was lazy and depressive but not dumb. After pondering if humans did squat shitting for millions of years, why do we sit.
And so i thought "what is the closest thing i can do to that" and I agree, its a one way road.
Plus, a bidet shower. Clean thing!
@@andreobarros Proud to be indian 🛶
Isaac Asimov wrote a story (in the 1950s?) about a big computer per continent administrating/governing everything. And they behaved like their inhabitants, so he surely would call them "AI" today.
This "super computers" actually did a good job, if only we could be so fortunate.
Multivac was in over a dozen stories, they're still worth reading.
Who was it that wrote the story about the generals not trusting the AI to run the war? So one general fudged the data on its way in to make it more plausible, another fudged the decisions on the way out to make them more reasonable, and the field general having to decide on whether to carry out the orders, flipped a coin.
I think it's also Asimov.
@@technoman9000 If so, then I will re-read them. I always paid more attention to Asimov's Foundation saga ; and not so much to the 'Robots' and 'Empire' sagas. My fault probably
@@Ukitsu2I’m re-reading Robot Visions right now. A compilation of his robot short stories. Not bad.
Tired of AI gurus talking about something they discovered existed 2 months ago. Why aren’t we instead listening to computer scientists that have been studying AI for years? That’s the people we should listen to.
like Douglas Hofstadter?
ruclips.net/video/Ac-b6dRMSwY/видео.html
like Geoffrey Hinton?
ruclips.net/video/-9cW4Gcn5WY/видео.html
Because the current paradigm gives results now and while the others are hypothetical even though the brain is proof of principle and requires far less power.
The one guy in your short banking clip looked like Lutz Vanderhorst 😀. ~01:06.
I'm a software engineer and worked at a credit card processador. I didn't understand how the fees were calculated, but not because it was made by an IA, the code was only a bunch of mathematics operations, it's because I don't have enough financial knowledge to understand
I want to sincerely thank you for addressing this matter. It’s truly astounding how people, who have neither written a single line of code nor authored a scientific paper, are now making predictions with such unwavering confidence. They boldly claim what the future will look like in 10 years, as if they have all the answers. Frankly, I can't help but laugh when I hear these proclamations.
It’s a strange phenomenon-those without firsthand experience in the complexities of these fields often make the boldest statements. I really appreciate that you've brought some much-needed perspective to the discussion
wait till you hear what some actual experts in the field say to lol
I think some value remains to be proven but it’ll absolutely have a massive impact
@@ALFTHADRADDAD Of course machine learning will be extremely important for science in the next few decades
Instead of calling their predictions bullshit, why don't you explain in detail why you think that. It's always easier to call people out, but what is your reasoning behind your own statement?
@@lorpen4535 because no one can explain to me why models that predicts words will suddenly gain autonomy and consciousness.
Since when did AI become alien intelligence, isn’t it artificial intelligence? Why are we now calling it alien intelligence? What is alien about it?
There is a certain "inhuman" quality to AI decision making process. Can be well illustrated by modern chess algorithms. Chess is a logical game, supposedly. Turns out humans, even the smartest and the most logical ones, tend to infuse their decisions with emotion. Like, for instance, many games with AI nowadays have revealed that humans tend to unconsciously avoid sacrificing the queen, even in situations where it is the most effective path to victory. As well and do many other "human" things. AI doesn't do that. Guess it's one of the things that could be called "alien".
Before Chat GPT, things that were never called AI, now are. It's becoming more of a buzzword.
Were you living in a cave? Everything was called AI at least since the early 90's.
I remember priority-queue-based decision making in RTS video games being called AI
I remember optical character recognition neural nets being called AI
I remember google translate being called AI
I remember face scanning algorithms being called AI
I remember Google's DeepMind and DeepDream being called AI
I certainly remember CleverBot being called AI
These all existed before GPT. Artificial Intelligence is the name of an entire field of research and engineering, just because you haven't heard about it doesn't mean it didn't exist.
@@Keisuki Yes but he is right about it not being a buzzword yet at that time. AI was a term thrown around by people who dealt with DeepMind, CleverBot and face scanning algorithms. Today it's thrown around by taxi drivers and hairdressers. I work on an assembly line in a car factory. Few days ago I had a 30 min. conversation with my colleague about AI during lunch brake. Ten years ago something like that would be highly unlikely.
@@juremustac3063
It was a buzzword a couple times in history. In the 90's the gaming industry picked it up, and never let go. Any code that controls an agent in a game is called AI, even if it's just a very simple algorithm.
I love the no-bs attitude of Sabine
You are right in what you say, but I think Harari was referring to a modern feature of programming. As programs become more complex, it is difficult for their own creators to understand which lines of code are causing a distortion.
A great example of this is RUclips itself, I believe that every content creator has had videos demonetized or with reduced reach for reasons that the platform itself does not understand and takes weeks to fix. The danger is not the AI itself, but the fact that there are not enough AI supervisors to make these adjustments and the ethics of the companies that use these programs.
I believe that legal regulation is necessary, preferably minimalist, that protects basic consumer rights.
So I assume we know of old alien intelligence?
Following and appreciating your content for a while now, but this video is … disappointing? The title is misleading clickbait, the content itself is fluffy, your approach to Harari is dubious, and frankly the jokes are getting a bit stale. I’m thinking you’re maybe demanding too much of yourself wanting to produce a video everyday. I miss the less frequent, but way more in-depth videos you used to make.
However, you are still my favorite scientific content creator on YT. 🙌
I suggest starting with replace all German state services institutions that are processing documents with AI.
AI will have to learn how to use fax machine first.
Thank you for this great video. I only see one thing to worry about: AI replacing a black box committee is an issue for me because AI can be tricked to do something wrong by a single programmer, whereas committees require several persons to agree. And this is the essence of our democracy : never let one person make a decision on his own.
Which makes me think that AI should never be programmed by just one person. That is what could probably save democracy.
Don’t let one AI decide but make sure several AI’s come to the same conclusion before taking any important decision.
I personally think the biggest threat these AI models pose is the power they give to non-governmental institutions that have never been voted for (I'm especially worried about large global corporations here, but also the ways it can be misused by individuals (I think everyone can imagine the threats of deep fake technology)).
One of the largest fields generative AI models are being used in already is content creation. It's not farfetched to assume a company with lots of information about you will get the idea to sell personalised AI-generated content to you. We have already seen the political and social influence content creators may have (popularity of the AfD, conspiracy theories during Covid, etc.). That power in the hand of large companies (worst case: a single corporation with a monopoly) whose sole aim is to increase profits is not exactly a comforting thought.
In an over-exaggerated dystopia, this would lead to some Wall-E-esque future with humans having been once and for all demoted "consumer", and the AI (or rather the corporation controlling it) having the sole power to decide your subjective reality as well as that of everybody else.
"What the world needs is more bureaucrats." -No one, Never
UK Prime Ministers equivalent of Humphrey Appleby and Whitehall want unlimited Civil servants, similar any government!?!
how about bureaucrats who will never take a bribe?
@@markplutowski Who are we to say that AI bureaucrats and politicians could not be bribed with offers of new servers and other hardware? I really wonder how long that would hold up.
@@crowe6961 Have you read Iain M. Banks "Culture" series - I suppose it's possible they could be bribed using other incentives -
"hey'll I'll get you some time on Musk's Collossus cluster, what'ya say?" 😄😄
@@markplutowski Not in full, but I'm certainly familiar. And yeah, they get up to all kinds of hijinks and really don't need to be human for it.
That's a very long winded way of telling us you agreed with everything Harari said.
To be fair, if you're familiar with his other work, say, if you tap into some of his conversations with people like Tristan Harris, he understands full well, agrees with you entirely and communicates openly the fact what AI does indeed use a large number of datapoints human minds can't grasp, you're not really adding anything novel to the conversation there, and that we need benchmarks, checks and balances to sanitize its conclusions - and egulated by government - even if we can't understand them. He advocates for the same stuff you do.
So with that minor gripe aside (and you drawing a line to his more distant background and not to his more recent two decades of professional "public conversation about the here and now" background - and of having written Sapiens, Homo Deus and 21 lessons, and now Nexus), it sounds like
1. You might _really_ want to look into reading _at least_ Sapiens, probably Homo Deus and 21 lessons too if you care about emerging stuff that needs regulation and steering...
and
2. You're on the same bandwagon everyone else is. A bit late (judging by your seeming unawareness of who the thought leaders of the past 2 decades have been, among which Harari ranks pretty high), but.. welcome onboard nevertheless. Ironic that you levelled exactly that criticism at Germans ;)
AI is a just a buzzword being applied to anything for only one reason: money. As you point out, so much of this AI has existed already as machine learning.
Not only that, but AI as we know it now (LLMs) are incremental improvements on previous ML products. They are still statistical certainty optimizers, just with a broader scope. Still nothing novel coming from modern AI that doesn't have insightful humans at the helm. AI only making choices will contribute to stagnant economies and governments, not help us surpass them, because the powerful few in charge employing them will make sure that is so.
Seeing the "Fi Forum" brought back Memories :D I recently found a Flyer from 2018 when I worked there as a student
6:12 😂 Luv ya Sabine!
It's either we invest OR get rid of the Electoral College and let We the People of the United States of America choose our own leaders!
The wisest thing that should be on everyone's mind currently should be to invest in different streams of income that don't depend on the government. Especially with the current economic crisis around the world. This is still a good time to Invest in gold, silver, and digital currencies (BTC, ETH...)
I began investing in stock earlier this year, and it is the best choice I've ever made. My portfolio is rounding up to almost a million and I have realized that when a stock makes it to the news, chances are you're quite late to the party, the idea is to get in early blue chips before it becomes public. There are lots of life changing opportunities in the market, and maximize it.
What opportunities are there in the market and how do I profit from it?
You can make alot of money from the market regardless of whether it strengthens or crashes. The key is to be well positioned.
Do you have an idea of any good broker I can start with?
“The guidance provided by Ai is fabricated by learning, and where Ai is truthful, there is tremendous work required to make it a better tool.”
as a supercomputer operator... no.
the problem, i agree, is people relying on systems they don't understand. It does increase the risk of carastrphic failure... but anecdotally... the field heavily selects for people who try to cut corners, try to prove their assertions without controls and who have never heard of the concept of computational complexity.
it is not known how deep the grift goes, but the idea that it relies on investor hype is not at all controversial even amongst people who don't see it for the puppet show it is
After using a few AI bots (e.g. ChatGPT, Gemini)... these products keep FORGETTING... previous answers. How can an AI bot do anything sensible if it keeps forgetting previous information?
This falls into the same category as those “we only use 10% of the brain” arguments. Ais can reason their motivations and are just getting better at it, the part he claims we dont understand is LITERALLY like saying “i won’t trust this politician even though they reasoned their position and i understand it because i dont fully understand the neural pathways the information took in their brain’s body of logic”
One of Arthur C. Clarks’s rules of science is that when a senior scientist says something is impossible they are usually wrong. My corollary is that it usually takes a hell of a lot longer than expected.
Thank you. Your subtle pivot in the focus is super useful….
I love your sense of humor. You always make me laugh. If you tire of science, you could always do standup comedy.
"Alien Intelligence" is probably the most intuitive description.
Dear Sabine,
I heartly recommend that you read Harari's new book "Nexus", and maybe "Sapiens". I am sure both of you can learn a lot from each other.
Harari does these kinds of podcasts mainly to promote his new book and ideas. In "Nexus" he basically argues why we need to be super carefull about deploying AI.
Greetings from Germany
YNH has written about the future including AI in two books - "Homo Deus" and this years "Nexus"
He is not a computer scientist or any sort of scientist - he's an historian.
If you read his books he makes some good points and produces a relatively coherent argument.
I dont think hes right but many of the points he makes are valid and as he is talking broadly what he says gives a reasonable basis upon which to argue.
Do have a look - its worth the effort.
He said that pandemics were a thing of the past....(before Covid).
This guy is a total fraud and a bad historian.
@@astrident8055is that because he's gay, a jew or nominally of the left? Name calling isnt actually an argument.
@@nickthurn6449 he is a neoliberal and is far from being on the left.
I read Homo Deus and it is full of fallacies
I never thought I would hear Sabine swear. What a day it is today…
4:11 I've been wondering why someone who is 'just a medieval historian' is being interviewed for something called the diary of a CEO... 🤔
Seeing Lutz van der Horst and Sabine Hossenfelder on the same conference is something I now regred that I missed it
It’s interesting to observe that as the shortage of skilled workers grows, the drive for automation is also increasing. Architectural firms, for example, are exploring artificial intelligence to meet client needs in a more cost-effective manner. This raises a valid question: why shouldn’t we consider applying similar automation strategies to bureaucratic processes if they could help reduce costs and the need for personnel?
Botty McBotface is very funny. Good one. They should be required to call it that.
3:12 You too? I’m so glad I’m not the only one lol. 😅
There's a huge misconception that data/information is - or should be - the most important thing in political-decision making. But wisdom is is far more than knowledge or information. It includes a recognition that ignorance is not only an inevitable part of things, but also that ignorance has a value of its own in helping to guide good decisions. In this respect I highly recommend a 6-part podcast written by Rory Stewart (ex-government minister) for the BBC called "The Long History of Ignorance", which examines the idea that understanding ignorance is as important as acquiring knowledge.
Many years ago I was told this: over the history of computing AI has been many things: parsing (in the 50s), pattern matching, image recognition (e.g. reading car number plates) and so on. They are "AI" until they are solved, then they get a name and the next big thing becomes "AI". What does "AI" really stand for ? "Anything Interesting".
AI predicts the next most expected thing in a pattern, so does that mean it will filter out creative individuals, those who by not fitting the pattern would stand out?
In my opinion, Harari has had a couple of successful books of the popular science type which given him an audience and an opportunity to earn money by making TV appearances etc. He is trying to stay current, that's all. AI and smart phones for kids are no more than the current hand-wringing and drama which began towards the end of the 18th century when people started writing novels. "Good heavens!" said the critics. "All this novel reading is just escapism. No one will bother to go to work and the world will grind to a halt!" Well, it didn't.
Ditto Sabine. Yuval makes great historical observations about prior communication breakthroughs and social impacts, but he is out of his lane when he prognosticates. Suddenly, his thinking becomes shallow and dominated by groupthink. Others have pointed out that he is not an economist, biologist, or environmentalist by profession, and this shows. The future is not like the past.
I imagine we'll hear things like: "While we considered what the algorithm showed us it was never a deciding factor" or some such BS.
Good points. I've seen that author making the rounds on the News channels doing interviews. My favorite was the one with Ari Melber, because they get into some deep stuff. It sounds to me like his new book, which I want to read, is comparing AI technology to the printing press. Basically, how a lot of awful stuff has come from the invention of the printing press, but so has a lot of good stuff. And he points out that initially it was a bumpy road for the first 200 years. I am curious about whether it will take us 200 years to smooth out the kinks in AI lol. I'd like to think that we've gotten better to adapting to new tech by now.
I think this author makes a lot of good points when he is reminding us what it's been like adapting to new technology in history, but things get a little thin when he tries to imagine the future world with AI. I think for those kind of speculations he should have teamed up with a good Sci Fi author lol. They are much better at imaging the "what ifs" than historians.
I have already found, I'm fighting AI bureaucracy everyday . Autocorrect on my phone, is just one example. Talk to a customer service person, you can tell AI dictates , their responses. I've had opportunities to simply ask the speaker if what they just said to me even makes sense. Many have had to admit, it did not, and, it wasn't really them. 🤔
I love your dry humor Sabine 😂
It's amazing (and maybe a bit terrifying) how many people either in positions of authority or with massive platforms who don't understand what "AI" actually is, revert to a science fiction "model" for it and then talk about it publicly as if they know what they're talking about. It really shows the true state of things. We have all the information at our fingertips and people are more lazy than ever and more willing and more confident to speak on subjects they have no business speaking on. If it feels right or whatever. Cheese and rice...
Actually, Sabine and Yuval conversation would be a very cool thing to see
It's already frustrating trying to contact companies, anyone whose had a copyright strike on RUclips can attest, it's like trying to talk to a wall
Real intelligent woman with a bit dark humour I loved your podcast.
Great vid as always, but the Lancelot quip about Harari was a little unfair. He was a military historian as an early career scholar, but the last military stuff he published was in the late 2000s. He's since become a rather prominent 'big history' kind of public intellectual, with an acute interest in, for instance, the history of human institutions and the history of knowledge. As such, he was interested in AI and its consequences before we all started talking about it.
Love your content to try to get a handle on scientific developments.
1. Running the world like bureaucracies do now. I get the analogy. Bureaucracies or AI don't control EVERYTHING, now or in the future.
2. You have characterised Harari as just a medievalist. No, he is mostly nowdays a public intellectual with multiple million copy publications behind him. Pretty global. I'm surprised you haven't heard of him in this context. I thought a German parallel might be Precht, ( parallel not necessarily in emphasis of ideas and issues) but don't think his reach is beyond the germanophone world. I speak eng, fr and ger, so that would be my media reach. Like to see a discussion on science and society between you and Harari.
All the best
It was nonsense from the start. Its mostly just large language models but people want to "feel it" as something new and special and world breaking... which it probably will be eventually... maybe... idk. Its just marketing speak in 97% of cases right now... which makes it nonsense.
How could anything that has seen the whole internet still want to take over the world?
That´s a very good and original point!
A lot of people don't even have a clue what is being called "AI". "Artificial intelligence" is loose term in computer science, a label used basically for any algorithm with emergent behavior. It has absolutely nothing to do with intelligence as most people understand the term. LLMs are "AI" in the computer science sense. But there is nothing intelligent about LLMs in any common meaning of the word. What most people imagine when they hear "AI" does not exist.
We should make sure to always correct uninformed people when they personify chatbots and use them for unreasonable purposes (i.e. querying for facts).
I love the small amounts of shade Sabine throws occasionally.
As an AI Engineer I can tell you many of the people talking about AI in popular forums are simply trying to sell something and often don't know much about how it actually works. What you need to remember is that AI is only as good as the data it's trained on. The more uniform, ordered, specific and standardized the data is the better. I've talked to a lot of companies and organizations that really want to use AI in some way cause they've heard it's the future but the truth is most companies and organizations don't have the data to do anything very useful. Government related stuff is often the worst. Have you seen the convoluted language used in regards to several unrelated things in various bills and policies? You don't have to worry about AI running things for a long while. That's for sure.
I recommend you to read his first two books - A Brief History Of Humankind and A Brief History Of Tomorrow. They are both mind-blowing.
The scariest part of AI is not the AI itself, but the gatekeeping. Last century the same problem occurred with integrated circuits. While IC are everywhere now, advancements in IC make it nearly impossible to be an IC startup. There are small set of companies responsible for producing all chips all around the world.
AI is really bad at exceptions, but to be good enough you need to make it big, so big, that even top of the line personal PC cannot train such models. Creating bigger models will require processing power only a small set of corporations can afford. And there is no telling if such corporations start gatekeeping their trained models preventing any new startup AI companies from appearing.
Hang on. I think, and I may be wrong, you may be mistaking the speaker’s point about interpretability. A bank software’s reason for accepting one person’s loan application and not another’s is certainly “to make the business money” as you’ve correctly stated, but the speaker’s point is subtly different. He is asking “why did the algorithm look at his particular set of features and decide that the person was risky.”
With statistical models we had much more interpretability than with modern deep learning. It’s the reason why interpretability is more of an issue now. It’s much easier to determine if a statistical model is doing something for the wrong reason.
But there are people working on interpretability! And there are a lot of avenues to explore it.
He said we won’t know why we were refused a loan, and that is a distinction from today. Financials rules are used today, and the managers know exactly what those rules are. AI cannot tell you why it recommended anything; it’s just the output of unquantifiable “learning”. It makes mistakes with facts, hallucinates, and we can’t trace back why it made those mistakes.
Wait...I'm sitting on the toilet wrong....Aaarggghh!
Something got Sabine fired up.
This is not new NEWS.
In 1986, Robert Hecht-Nielsen co-founded HNC Software, a neural networking startup in San Diego, based on his breakthrough work in predictive algorithms. HNC was perhaps the first start-up in the neural network/machine learning space to be significantly financially successful, and in many ways this is due to Hecht-Nielsen’s deep understanding that given enough training data and computational power, neural networks will nearly always find an optimal solution to a properly posed prediction problem.
AI has really improved my sense of humor. Every time a physics bureaucrat stops by to tell me I have to pay my taxes I get very conCERNed. No, wait, I said that wrong. What I meant was beeeeoooop [abrupt descending tone]