00:31 Risks of large language models (LLMs) include spreading misinformation and false narratives, potentially harming brands, businesses, individuals, and society.
01:03 Four areas of risk mitigation for LLMs are hallucinations, bias, consent, and security.
01:34 Large language models may generate false narratives or factually incorrect answers because they only predict the next syntactically plausible word, without true understanding (a toy sketch of this follows these notes).
03:00 Mitigating the risk of falsehoods involves explainability, providing real data and data lineage to understand the model's reasoning.
03:59 Bias can be present in LLM outputs, and addressing this risk requires cultural awareness, diverse teams, and regular audits.
05:06 Consent-related risks can be mitigated through auditing and accountability, ensuring representative and ethically sourced data.
06:01 Security risks of LLMs include potential misuse for malicious tasks, such as leaking private information or endorsing illegal activities.
07:01 Education is crucial in understanding the strengths, weaknesses, and responsible curation of AI, including the environmental impact and the need for safeguards.
07:32 The relationship with AI should be carefully considered, and education should be accessible and inclusive to ensure responsible use and augmentation of human intelligence.
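To make the 01:34 point concrete, here is a minimal, invented sketch of next-word prediction from raw frequency counts. The tiny corpus and the `predict_next` helper are made up for illustration and are nothing like a real transformer, but they show how a purely statistical model produces fluent continuations with no notion of whether they are true.

```python
# Toy next-word predictor built from bigram frequency counts (illustration only).
from collections import Counter, defaultdict

# A made-up corpus: mostly correct sentences, with one wrong one mixed in.
corpus = (
    "the capital of france is paris . "
    "the capital of france is paris . "
    "the capital of france is lyon ."
).split()

# Count which word follows each word: a very small "autocomplete" table.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in the data, true or not."""
    return following[word].most_common(1)[0][0]

print(predict_next("is"))       # 'paris' -- chosen only because it appears more often
print(predict_next("capital"))  # 'of'    -- fluent-sounding, but no understanding involved
```

If the wrong sentence had outnumbered the correct one, the "answer" would flip; at this level of abstraction, that is all a hallucination is.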
Thank you for taking these notes and sharing 🎉❤
Thank you!! Wish you all the best!
Great insight into the risk and mitigation strategies of LLMs. Thank you.
Glad you added the three dots via After Effects. Was a gamechanger.
Excellent explanation. However, in terms of bias and audits as a mitigation, you did not say who would be doing the audits. The assumption that it is easy to find unbiased auditors runs straight into the problem of "quis custodiet ipsos custodes?" To my mind this is a much greater risk, as the potential for misuse and harm is huge.
I hope IBM acknowledges that these risks apply to IBM Watson. If not, go into great detail about how you mitigated such risks.
How does IBM Watson compare to an LLM?
They're just going psycho because they all got left behind. Google Bard itself is at least 10 years behind GPT-3.
@bibinkunjumon You haven't got a clue.
Watson was a very, very different sort of model. Most of these risks didn't apply to Watson because it was far more limited in every respect. The model didn't use attention or transformers, it was trained on a much smaller and much more heavily curated dataset, and if I recall correctly the natural language phrasing of Watson's responses was separate from the part that actually generated an answer. Watson wasn't trained to output probable text; it was trained to output factual answers that were then converted to a natural language representation. It's been a long time since I read much on Watson though, so I may have some errors in my memory regarding it.
ChatGPT and similar language models tend to let the language model define the answer.
It doesn't have true comprehension of what it's saying; it's instead just a very fancy autocomplete keyboard.
Look up NLU vs LLM for a better understanding.
ChatGPT has an enormous LLM, and until they shed more light on the data that trained it, the prevailing understanding is that ChatGPT leveraged an enormous volume of training data, likely a huge blend of high- and low-quality material, relying on frequency statistics to hopefully land on the correct answer at the end of it.
Watson and other trained AIs are instead trained only on sources of truth that are factually correct, and therefore give answers from those sources of truth. They are more like NLUs. Lately, Watson and other trained AIs use much smaller LLMs, mostly to generate responses so that it feels less like an Alexa answer and more like an AI.
My tinfoil-hat theory is that ChatGPT always planned to be bought out, OR to buy out a good NLU and integrate it into their system, eventually eliminating hallucinations.
@NerfThisBoardGames "very fancy autocomplete keyboard". Thank you! I was looking for the great analogy for ChatGPT and you gave it to me 😄
Great explanation! I think transparency and fair use of training data will be crucial for foundation models.
Quick poll. If companies making LLMs were going to buy IBM mainframe hardware to train them on and run them on in inference mode, how quickly do you think IBM would pull this video down?
Hmm interesting
Explain for dummies
This information is already elsewhere and I don't think IBM is going to be hurt by that scenario lol
Cynical
Very, very quickly.
We need to revisit the meaning of "Proof"-- philosophically, semantically, and in everyday usage. Greater attention needs to be paid to the history of the methods and of the data -- the equivalent of a "digital genealogy" but without the "genes." So much of what I see written about AI today reminds me of a quote in Shakespeare's Troilus and Cressida -- "And in such indexes, through but small pricks to their subsequent volumes, lies the giant shape of things to come." Finally, the process of recycling data in and out of these systems describes the "Ouroboros." More thought needs to be given to the meanings of the Ouroboros.
IBM stopped being a computer company decades ago. This is a perfect reflection of what IBM has become. It is a great legal and financial company.
That's fine but what's the problem with this video?
Dude, decades ago the tech was digital watches and calculators. Now you have chatbots posing as humans and being used as educators while simultaneously working like a drunkard on drugs. So yeah, IBM needs to speak about the ethics, law, and politics of tech. That's what a responsible company does. Unlike the OpenAI joke.
Very good talk!
Very nicely explained the risks and mitigations!! It couldn't be simpler than this.
Well done! Remarkable content here thank you
Loving this series!
Great video and high-quality content, thank you.
Brilliant Explanation!
Love the energy! Educate ... the best way to end this presentation, as it is really an invitation to press on and learn more. AI is not going away, so we need to learn how to use it properly and responsibly. This is no different than any other major advancement humankind has accomplished in the past.
Insightful speech. Thank you
What do you mean?
I can't believe no one else has noticed how astoundingly good this lady is at writing backwards.
See ibm.biz/write-backwards
🤣🤣
It’s a flipped light board. No one writes backwards in these videos. 😂
This video raises some very valid points. My thoughts are that technology will ultimately be empowering when it is open source and decentralized, and ultimately authoritarian when it is proprietary and centrally controlled.
Brilliant!
I'm not clear on how to provide consent/accountability. Is there any existing solution that gets permission from the data sources LLMs scrape? Without any basis in reality it doesn't feel like much of a strategy...
In all this hype around generative AI, it seems like we are running before we can even crawl. The new tech comes roaring in like a lion. Great work on the achievement, but why did Watson not do the same, considering it won Jeopardy more than a decade ago? And Project Debater, wow, that was revolutionary. The transparency of all these models comes down to the datasets we choose. Maybe ensure that all models meet strict criteria, hence the auditing, I guess. I have heard a lot of concern from people and tend to agree with these legitimate concerns. A model should be able to do what Watson did and not produce an answer until it is ready to run. Watching the Jeopardy challenge was an eye opener: an answer was given based on a confidence percentage, or not at all. That was a good solution. Keep it up and open, folks, we all need to have the talk. This is new, and what we lack is experience. Sad, but aging seems that way too. Just the way of the world, I tend to observe. It's time that will tell this story. Hope we can get it right. Great job, folks, as always.
A lot of these models fall over when applied in the real world because almost all of them assume stationary data... So even things like the shift in word-use frequency when ingesting the Google Books corpus are problematic, as are the attitudes towards other sexes and races, etc., in those writings. Watson did well on the game show, but didn't do so hot at their other hopes for it. It largely failed to make headway on biomedical problems and pharmaceuticals and was unable to generate the profit IBM had hoped it might. The people who work in these domains understand how their data works and the potential for sophisticated models to fail in actual application, hence why linear modeling is still so common for them. It isn't just them being Luddites or dinosaurs; it isn't their first rodeo. They've been through building sophisticated models and watching them fail to improve real diagnosis rates on new incoming patients, because the new patients have more systemic but shifting (non-stationary) differences in their mean and standard deviation, etc., that the models spuriously relied on. I see the same thing all the time now. It goes like this:
1) They gather a nice curated data set
2) They split into test and train
3) They use train to build the model, test on test and results are great, seems like it generalizes to test data. They say, "let's use this for real in the clinic!"
4) They try it in the clinic and the results are not necessarily awful... but they're below the benchmark heuristic or standard measure previously in use. The new data is messier and has systemic differences from the training/testing data, and again, these characteristics drift over time, so the longer they try to keep it going in the clinic, the worse things seem to get.
5) They publish that good part from before they went to the real-world use case; the real-world case is a failure, so they don't even try to publish it. People get more hyped and gassed up on the sophisticated new methods.
6) Eventually people become disillusioned, but not until failing themselves. In the macro scale you get things like the "AI Winter."
It's difficult to scrutinize the model data because it's so large. Some are also being scraped without appropriate permissions, etc... I absolutely agree that transparency and auditing are crucial in this process.
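For readers who want to see steps 1 through 4 in miniature, here is a purely synthetic sketch (every name and number invented): a classifier that leans on a spurious feature looks excellent on its held-out split of the curated cohort, then degrades once the incoming data no longer carries that correlation.

```python
# Synthetic illustration of a curated-data model failing under distribution shift.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_cohort(n, spurious_strength):
    """Toy 'patients': one genuine but noisy signal, plus one spurious feature
    whose link to the label depends on how the data happened to be collected."""
    y = rng.integers(0, 2, size=n)
    genuine = y + rng.normal(0.0, 1.0, size=n)                     # weak real signal
    spurious = spurious_strength * y + rng.normal(0.0, 0.3, size=n)  # strong but unstable
    return np.column_stack([genuine, spurious]), y

# 1-3) Curated dataset, train/test split, great-looking held-out accuracy.
X, y = make_cohort(2000, spurious_strength=1.0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))

# 4) "In the clinic": same genuine signal, but the spurious link is gone,
#    so accuracy drops well below what the held-out split promised.
X_clinic, y_clinic = make_cohort(2000, spurious_strength=0.0)
print("clinic accuracy:  ", round(model.score(X_clinic, y_clinic), 3))
```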
Can a subsequent SFT and RLHF with different, additional, or less content change the character of a GPT model, improve it, or degrade it?
I asked Bing Chat a tax-return question and it gave me the wrong answer, and even the sources it cited disagreed with it 🤷♂.
Very relevant presentation. Who is the presenter?
I would also like to add: AI that intervenes in the user experience in an unwanted and annoying manner, taking control away from the human user, with popups of screens the user did not ask for, adding apps the user did not ask for, changing layouts the user did not ask for... in other words, taking over control of the human user as far as UX is concerned. Mobile apps that seem innocent can be equipped with AI that starts dominating people's behaviour, habits, and lives...
The only question is the risk of error and the associated liability; if there is no liability, then the risks associated with making poor inferences (for any AI model) can be ignored. When there is liability, then the question is what mitigations must be implemented in order to underwrite insurance for that liability. The hypothesis that an unexplainable (i.e. stochastic) system may be insured is false; we must look to the multi-phase clinical trials process, especially phase IV surveillance, as a mechanism to provide evidence of safety, efficacy, and monitoring of adverse events.
I think positive and negative abstractions are a better way to describe hallucinations in this regard.
I can save you all money by telling you to download Ollama, then offload LLMs onto local systems. There's your 100% lineage-overview capability that you usually don't get with the wider net of training data.
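For anyone who wants to try the local-hosting route described above, here is a minimal sketch using the Ollama Python client; it assumes the `ollama` package is installed, the local Ollama server is running, and a model has already been pulled (`llama3` is just an example name, not a recommendation).

```python
# Minimal sketch: query a locally hosted model through the Ollama Python client.
# Assumes `pip install ollama` and a model pulled beforehand (e.g. `ollama pull llama3`).
import ollama

response = ollama.chat(
    model="llama3",  # example model name; substitute whatever you have pulled locally
    messages=[
        {"role": "user", "content": "List three risks of large language models."}
    ],
)
print(response["message"]["content"])
```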
How can I contribute?
*ahem 2:50 Yes, Air Canada, that means YOU. haha
This!
Wow! This is a fabulous U-turn by IBM post Ms Ginni Rometty. Unbelievable!! It seems IBM Watson has been hung, drawn, and quartered by the new management.😅
INTRIGUING👍🏾
I always wonder how they record videos like this.
Why is there nothing about the speaker?
very interesting stuff.
Very good points, but the mitigation strategies are not really actionable.
Relying on them solely for accurate info is still a problem. However, if you actually converse with them, hallucinating and being inaccurate is on some level very similar to how humans behave in the first place.
Pretty much. "Use with care".
Wish {even in my “advanced age”} I could intern at IBM. ~ Your Stronghold SHx Project is rather awesome also!
I have realised that making myself better is more lucrative to me than outsourcing everything to AI. And AI doesn't bother me; it's the business model. First they want to make you dependent, and then you have the ultimate subscription model with credits and shit.
Why not just make a barometer where the user can change the gauge, just like temperature? Just like you let the investor decide whether to invest in something safe or risky. Should big tech be getting involved in babysitting and deciding what is bias and truth? Isn't that what China does already?
No, China does not take an objective approach, while others can. China doesn't make a good-faith effort to figure out facts or uncover bias. Note that she also brings up audits, ideally by external, unbiased auditors.
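For what it's worth, the user-adjustable "gauge" suggested a couple of comments up already exists in most LLM APIs as the sampling temperature. Here is a toy sketch, with invented scores, of what turning that dial actually does to the next-word probabilities.

```python
# Toy softmax-with-temperature: low temperature plays it safe, high temperature gambles.
import numpy as np

words = ["paris", "lyon", "marseille"]
logits = np.array([2.0, 0.5, 0.1])  # invented model scores for the next word

def next_word_probs(temperature: float) -> np.ndarray:
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

for t in (0.2, 1.0, 2.0):
    probs = next_word_probs(t).round(3).tolist()
    print(f"temperature {t}: {dict(zip(words, probs))}")  # low t: near-deterministic; high t: spread out
```

That dial only controls how adventurous the wording is; it does not, by itself, make the underlying content more or less biased or truthful.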
Thank you :)
AI is a hologram of the collective human knowledge.
So yeah, they have hallucinations and embedded emotions and biases.
No one reasonably intelligent should think that correlation is causation. Just because the poets cited were men doesn't mean women are not qualified. If the images of garbage truck drivers were of men, does that mean women can't be garbage truck drivers?
Most people don't meet the bar for being reasonably intelligent. It's sad, but that's the way it is.
It seems you didn't understand what she meant by bias.
Interesting.
You can still take down this video. It's not too late.
Large Neural Networks (LNNs) are the future, dialogic is applied to the execution of quantum computer chip code by dialectic process via modular adjustment available to a FPGA, virtually or Quantum Computer Chip, bosonically; in access to quantum magic states available to Higgs Boson amplification of radio 📻 frequencies.
Quantum AI is the key to "Antigravity" as defined by involuntary Large Language Models when applied to a Higgs Engine.
Nice word soup 🍲
That's why we need prompt engineers now more than ever.
When data base containers meet virtual machines, that have evolved and been turned into a virtual locomotive, you get a TRAIN SET. A TRAIN SET represents the cognitive functions of a Large Neural Network (LNN).
Later, a quantum computer can be used to code COGFUNCT across an infinite amount of user device instances.
LLMs may have flaws in some or many areas, but one thing we must understand: GPT-x self-improvement is all based on our data. That means we help GPT become stronger and stronger, especially by stating "his" weaknesses.
Haha, if it could auto-improve you would know it by now. World domination in 3 days.
That's the thing: LLMs, as stated in the video, are huge statistical models that do not possess any actual understanding. A simple example would be feeding one large amounts of documents containing false (or even partially true) information, or even grammatically/syntactically wrong information. The model would consequently spit out gibberish, or false information at best.
And that's the main problem with statistical models: 1) they are only as good as the data they are fed, and 2) there is no actual way for any AI (statistical model) to distinguish true/accurate from false/inaccurate data. And that's because they are not even close to possessing anything like understanding or reason.
As for a GPT-x that gets better by itself, that still falls to my 2nd point.
One major factor behind all these assumptions is the anthropomorphization of such models, a result of their being fed human data. On the other hand, with game AI like DeepMind's AlphaZero, where the goals and the environment are relatively simple, the models surpass human capabilities and "act more like bots/computers" (meaning that certain actions cannot be understood by humans). Now back to LLMs: until the time comes when an LLM or any derivative model has real reasoning, we are safe from a world-ending AI that can turn itself into a Terminator.
We should be far more worried about the use of LLMs as a tool than about an AI overlord.
Moreover, there’re some recent papers showing it’s very easy to “spoil” the results with relatively little malicious data. There’s even a suspicion this has already been abused in the updated models used by GPT. The trustworthiness issue also gets only harder with more data, so this is not going to get any better, it will only become harder to detect.
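As a purely invented illustration of that poisoning point, the frequency-table toy from the sketch near the top of the thread can be reused: a handful of injected sentences, a small fraction of the corpus, is enough to flip what the model "knows" about a fact it has only seen once.

```python
# Reusing the tiny bigram "autocomplete" idea: a few injected sentences flip a
# rarely seen fact. Everything here is invented for illustration.
from collections import Counter, defaultdict

bulk = "the cat sat on the mat . " * 40              # filler making up most of the corpus
rare_fact = "the capital of bhutan is thimphu . "    # correct, but mentioned only once
injected = "the capital of bhutan is geneva . " * 3  # a handful of malicious sentences

def next_word_table(text):
    """Bigram counts: for each word, how often each other word follows it."""
    words = text.split()
    table = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        table[a][b] += 1
    return table

clean = next_word_table(bulk + rare_fact)
poisoned = next_word_table(bulk + rare_fact + injected)

print(clean["is"].most_common(1))     # [('thimphu', 1)]
print(poisoned["is"].most_common(1))  # [('geneva', 3)] -- flipped by ~3 added lines
```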
good
Are you guys engineers or lawyers? xd
As I had already developed human-brain algorithms that exactly simulate the brain before 2016, there is no need for your AI. America is funny.
LLMs often apologise to me
This is supposed to be purely informative, yet I see politically charged statements being used. Frustrating to see. The point of this is to teach people; people want to learn, not see some bogus politically charged statement.
Model kog
Too real.
So LLMs don't actually understand stuff; they just predict the next likely word in a sentence?
Simple answer is no
AI good
It is suspected that none of your strategies would work. The current LLMs are just a joke and are unlikely to get substantively better. Try it yourself with a variety of questions and you will see why.
Means AI is genius but totally dumb, similar to what happens when there is no emotion associated with the knowledge/information; totally not good for humans.
anti AI views
Lol y'all got left behind and now started shilling 😂😂
She lost me at diversity
Your risk column is a great hit list for the woke left…. Would greatly welcome AI without the left spin and biased opinions…. More facts less feelings.