Surely it's just a coincidence that this show, hard fork and the Ezra Klein show all chose to talk about the lawsuit.
You mean the landmark case headed by the New York Times? Yes! It's not coincidence - it's explaining their case to the public because the NYT is leading a fight against corporations so powerful the United States government is afraid of regulating.
it is a coincidence, and don't call me Shirley
Dude, this is your lawsuit. You're podcasting as the plaintiff in a zillion dollar lawsuit. So is this journalism or advertising?
Great reporting in exposing all these parasitic companies syphoning IP from humans...
Solid reporting exposing these parasitic companies syphoning human IP....
No they are not
Yeah, they are aware and discussed it.
Publishing Antitrust Law is what SAG was fighting for.
I don't trust anyone working in AI to have humanity's best interests in mind. Starting with AI's carbon footprint, which is enormous.
carbon footprint of running computers? omg, compare that to ICE on airplanes, cargo ships, trucks & cars...
@@seekingworldlywisdom whataboutism
@@seekingworldlywisdom I hate cars too. I don't even have a driver's license. Still, I urge you to look up "AI carbon footprint".
@@gmenezesdea it should be easy enough to research and compare the carbon footprints of AI, cryptocurrency mining, ICE transportation, coal-fired power plants, and so on against the benefits each has brought to mankind. You might find that AI contributes the smallest carbon footprint but brings the most benefit - my guess.
Good grief I felt like the speed and cadence of speech in this podcast was fine-tuned for those members of the human species unfortunate enough to find themselves on the far left end of the bell curve.
LOL
What is most worrying is that the scientists behind AI development are geniuses of scientific knowledge BUT they are ethically corrupt. If they are so ethically corrupt as to bend the rules, are they really "good" scientists? That so many of them have behaved so unethically may mean something is wrong at the university level, where ethics training is not emphasised with the same importance as content-related skills. Something tells me this immense deficit will cause us to pay an apocalyptic price in the future.
LLM or LLaMA language data mining is nothing compared to AGI 4 or GAI 4&5 coming in a couple of years.
Excellent synopsis of the technical, economic, and legal landscape of AI.
This is so interesting to learn more about. Thanks!
Moloch
Bad Data leads to bad or untrustworthy AI
This is cute. These days, LLMs use artificial, highly curated data that is created in-house to train the AI, not copyrighted data.
Shirley they use whatever they can get a hold of⁉️ It's META🌍
No, they need vast amounts of data that would take humans millions of years to read. It's not all going to be 'created in-house' unless 'created' means some very superficial transformation. Their LLMs are what they're using to do the automated 'curating' of the data anyway, so it amounts to the same thing: feed all our data to the LLMs.
...at least that's what the creators claim. Yet it is a claim that could at any point be disproven using a single well-articulated prompt. This does not seem like a particularly robust business model as far as copyright is concerned.
Does this mean that every few minutes, the AI bots will loudly exclaim: Mmmmmmmm!
it'll be funny when the people that actually benefit from the lawsuits are other tech companies
This is one of the most interesting episodes I've listened to 👍
One gigawatt data centers.
A question for "The Daily".
Is the answer, moving forward, a "Free Use" internet, made of:
- Free Use websites, podcasts, videos, and social media that automatically anonymize users?
Ongoing content, and context, but no contact.
Funny, the title of this video. The Bible told us a long time ago that Moses went to Mt. Sinai... the spelling isn't a coincidence.
I repeat myself: this specific family of algorithms seems to display rapidly diminishing returns on investment: for every incremental improvement they require exponentially more data and processing.
It might actually be better for the field of research if they were forced to start from the beginning again. A three-year-old child did not have to absorb the entire internet to learn how to speak badly. The child is running a language model that has been exposed to and absorbed (back-of-napkin calculation) a few thousand hours of verbal input, maybe less than a thousand in some cases, on hardware that needs 20 watts or less and does other things besides language processing.
So we know there is some more elegant and computationally frugal way of solving this task. I think the large language models might be addicted to brute force through their fundamental architecture.
Just a thought.
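For anyone curious, here's a rough sketch of that napkin math. Every figure below is an assumption for illustration (speaking rate, corpus size, tokens-per-word), not a measured value:

```python
# Rough napkin math behind the comment above.
# All numbers are illustrative assumptions, not measurements.

SPEECH_WPM = 140          # assumed average conversational speaking rate (words/minute)
CHILD_HOURS = 3_000       # "a few thousand hours" of verbal input, per the comment
LLM_TOKENS = 10e12        # assumed order of magnitude for a modern LLM training corpus
TOKENS_PER_WORD = 1.3     # common rough word-to-token conversion

child_words = CHILD_HOURS * 60 * SPEECH_WPM
child_tokens = child_words * TOKENS_PER_WORD

print(f"Child: ~{child_tokens:,.0f} tokens of language exposure")
print(f"LLM:   ~{LLM_TOKENS:,.0f} tokens of training data")
print(f"The model sees roughly {LLM_TOKENS / child_tokens:,.0f}x more language")
```

Under those assumptions the gap comes out to several orders of magnitude, which is the point the comment is making about brute force versus whatever more frugal thing human brains are doing.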
A good start may be the European model.
Funny how "fair use" isn't mentioned till half of the way through the podcast. 🙄
Kara Swisher has called these folks glorified shoplifters.
Mmhum hmmmm mmhmmm. Get the popcorn babe, Michael is on.
TALK FASTER, DAMMIT !!!
Only the 🇪🇺 European Union has the willingness to take on Big Tech.
Yes, they learned from their Brexit leaders' mistakes. Can the USA learn anything in this divided environment?
Europe is a museum.
Regulation, not lawsuits. We don't have the time. The EU has done a better job than the US.
You seem to have overlooked the fact that surveillance of real life is an unlimited source of data. That's why robots and wearable ai pendants etc. are starting to boom. Put LLM in robot. Put robot in the world, workforce, etc. Collect real life data.
Hmmm, propaganda.