I teach both undergrad and postgrad students in clinical fields and I have become an expert in sifting out essays that have been partially or wholly written using large language models. It is so prevalent in education and it frightens me because these are people who will be tasked with saving your life if you end up in their Intensive Care Unit. We are constantly chasing our tails trying to work out how we can work with the technology rather than against it. We are trying to design assessment tasks that make using AI impossible, or tasks that ask them to critique pieces of AI text. It's added SO MUCH work to our plates and the risks of letting it through are potentially risks to human lives.
@@jmillward It's tricky, and that's why I say it's added so much to our marking loads. Firstly, we do as @Ryanneey says and look for common phrases. Secondly, we check every reference to make sure it a) exists; and b) says what the student tells us it says. Then we check how the student writes in less prescriptive forums like discussion boards. And finally, AI often just skims the surface of a topic, so we look for depth of examination: are the students describing something, or are they critically evaluating it? You're right that we get a lot of false positives, and it's important not to wrongly accuse. That's why we do all of the above before we move it on to academic integrity.
@DrCalamityJan Makes sense! I use AI all day every day and have also developed the instinct for sniffing out AI prose. Sometimes it's blatant, but I do think it can be masked by the right prompts and model choice, hence my curiosity about how you do it. I agree that one of the best methods is checking for confabulation, as you can always rely on the laziness of a person who doesn't want to check every word of the AI's work. However, it's only going to get harder to spot, so I don't envy you. Interesting times to be a student OR a teacher...
Pre-teenage girls being made to feel insecure about their skin so people can profit is disgusting. I didn't even know that was a thing. Their mums need to tell them what Marina said: their skin is as healthy now as it will ever be, all those older women are buying the products to look like you, you don't need to buy them.
I'm sure it's similar to all the popular things to own when I was that age - they think they need these things to fit in, and to be part of their tribe. I also remember that if my mum had tried to tell me that I didn't really need these things, then I definitely wouldn't have believed her.
@ I'm a man, so it's not something I know anything about. But there seems to be something sad and infuriating about this, in that it's based on saying "you could look better", which is the same as "you don't look great", and that seems a shitty thing to tell a 12-year-old.
I was watching the back catalogue of all your episodes all week and almost didn't realise I was clicking on a new video. So glad I found this podcast (I have ordered your book, Richard).
So general ads for pitta bread snacks and porridge = banned for kids vs £48 unnecessary face cream specifically targeted at children = fill your boots. FFS.
Maybe if the BBC had some respect for the licence fee payers and controlled expenditure they would not need to keep putting the licence fee up and up and up.
2018: Google's unofficial motto has long been the simple phrase "don't be evil." But that's over, according to the code of conduct that Google distributes to its employees. The phrase was removed sometime in late April or early May, archives hosted by the Wayback Machine show.
I worked for several decades as a proofreader of non-fiction. I eventually found myself out of work thanks to computer spell-checkers and Grammarly (despite their shortcomings!). The material they are using to train AI feels like the work of me and my colleagues (aka 'the grammar nazis'). Naturally, we won't ever see a penny!
Also, regarding the government and culture: they, alongside the previous leaders, have no idea how to support culture and the arts. We have an amazing industry full of stunning talent, but if it's squeezed too far, the product will come out of wherever the money is, and that won't be the UK. I'd love to see more British shows that show the spectrum of experience here; every time I speak to my American friends in particular, they absolutely love British shows and talent and rate them far higher than anything in the US. I've got loads of writing ideas and tried getting into the industry years ago but just couldn't get anywhere near it, but I still love to write and create.
Marina: "they know everything as they've always had the answer to every single question in their pocket, but they haven't developed any emotional intelligence"... sounds like kids and AI in a nutshell.
What a fantastic piece on diversity in visual media. I tune in for pieces like this! The only thing I would add is having lived with someone from a different culture to the one I was born into, their Netflix homepage was TOTALLY different to mine. It’s evident that people tend to favour watching programming that speaks to them, and right now Britain is maybe too diverse for a national identity that can fund a single, independent media outlet.
32:00 Maybe I'm not "the young people" anymore, but as a millennial I remember the start of this discussion. I remember being really into BBC Three growing up. Great channel. Lots of fun shows. Then they decided to move it to streaming only. Now they've brought it back, but I still watch it on streaming because I'm used to that now. And what you've got to bear in mind is that basically every network did this. There were online exclusives for most on-demand services, so we were basically trained not to watch TV.
Yes, but BBC Three, and maybe Two, used to commission these incredible seven-minute interview shows that lifted the rock on whole areas of society you didn't know existed - The Secret Life of... [Booksellers, to give an example] was a great one.
I have written to my MP. As a retired IT professional who now writes fiction, I understand the inevitability of adopting such technologies, but also that they need to respect existing laws. Disruption isn't always a good thing. Look at how Airbnb has impacted the affordability of long-term rental properties in popular destinations, or how the likes of Uber have sought to undermine workers' rights by not recognising them as employees at all.
Isn't this somewhat similar to diverting manufacturing to places such as China, losing all our own manufacturing capabilities, and then bemoaning our dependency: a stupid and frankly obvious outcome?
At our local Sephora, the consultants are constantly despairing at the tween purchases. The store has considered banning sales under a certain age, but it is the parents paying and insisting their kids can have this stuff. And while we know the people who run the beauty industry have no morals, I cannot get over the parents paying! I might add I live in a city (not UK) that is affluent and "well" educated, the most progressive and privileged in our country.
Remember when optimistic sci fi authors thought AI would take over the mundane, boring stuff and let people spend more time in creative endeavours? Yeah....
Let's face it, all great technological innovations are always sold to us as 'imagine what this could do for humanity!' (i.e. how good it's gonna be), but they always end up as exploitative tools, because the lust for power is always irresistible. And to have power, you need control. The internet is a prime example of this. I can't even fathom what the world will be like with 'full' AI and quantum computers. I'm sure they will achieve a few good things (like curing diseases), but at the end of the day, I am not so sure humanity as a whole will benefit from it.
I used to be a freelance copywriter for a company which no longer pays for human-written copy. Just to play devil's advocate: if I read a Stephen King novel, am I not then also trained on the work of Stephen King? Would I be violating his copyright if I then wrote a story aping his style (assuming I paid for my copy of the novel)?
The only skin care regime tweens need is soap, water and sunscreen, which will mean they won't need any of the expensive stuff later on when the bank of Mum & Dad won't be funding it 😎
I work with a tween brand in the USA for boys, and I'd say they echo your thoughts entirely. The young ladies want the "feature ingredients" they see advertised for adults (like retinol), even though they would be bad for them. The brand I work with understands that and has products designed for younger people to help protect that perfect skin rather than improve it.
I got sent the questionnaire the government has sent out, and from what I've read the questions clearly state that the government is going to rewrite copyright law in favour of the tech bros.
As a regular reader of Marina Hyde's brilliant Guardian columns, I would say that the paragraph of the AI-generated Marina Hyde column, while it didn't sound quite right, didn't sound hilariously wrong either. A new world is coming.
I'm a regular reader too. I thought it did sound a bit like her, but I wanted to hear the AI version of one of her introductory paragraphs, as I think I could write one of those.
I've got two little girls (5 and 2) and I'm now feeling quite terrified of how to deal with all those influencer things as they grow up, I hope it isn't quite as bleak as Marina makes it out to be!!
Back in the 80s, wasn't the product the entertainment? He-Man, She-Ra, Transformers? I remember when cartoons were the influencers, with ads for the toys attached.
The overpushing of diversity in TV and advertising just makes things look unrealistic. It doesn't achieve what it intends and actually has the opposite effect for me personally. Having a black woman and white guy sitting on their DFS sofa with their Southeast Asian kid and their three legged dog just makes me feel alienated. It certainly isn't going to make me want to buy anything or influence my decision either way. DEI for the sake of DEI is at best pointless and at worst very dangerous.
I think that's maybe just you. I like the diversity, even if it's not realistic. TV has never actually been representative, so it's nice to see some different kinds of people at least.
Artificial intelligence, specifically generative AI, is colonising creativity as we know it. We have to do more to protect creatives, artists and designers, as this is taking more from them than just their creative output; it is taking their soul's journey of lived experience too.
The Beeb missed the streaming wars, and they had the product and the iPlayer. They should've opened it up to the world and charged for the privilege. Instead they sold off the shows to the streamers, which are now killing them.
My 82-year-old dad steals a budget tape measure every time he goes to B&Q; he normally uses the excuse that he'd forgotten he'd put it in his pocket. I'll suggest next time he blames AI data scraping.
I fear this conversation about AI learning comes too late. I'm facing AI every day in the art world, and the theft of copyrighted works has already been done. Sure, you can protect future works, but even though the snowflake is part of the avalanche, you're just removing a few snowflakes from that avalanche...

The question being asked is the wrong one. It's not whether AI can create art to the level of our best and brightest; the question is, does it have to? All AI has to do is be good enough to pass a first glance, and once it can do that it becomes a matter of volume: reducing the signal-to-noise ratio and pushing human creations somewhere into the haystack.

In my field I currently see a problem with young artists, who no longer have a chance to hone their skills on the small projects that are absolutely crucial for a young artist to pay the bills. Things like posters for bands and venues, artwork for thumbnails, the local football team's flyers for the barbecue... All of those are done by AI today, completely cutting off an entire stream of artists. We're literally going back to the times when art was for the rich.

That said, AI companies will be the ones hiring writers and artists to feed their beast, because these models self-destruct when they source their own content.
I came across a video of a 3D printer (with AI software) that can mimic the brushstrokes of artists, to get that impasto, handworked 'product'. Why? Why do technologists want to eradicate any human creativity?
To the question "can AI create art?", my answer would be no. Anyone who thinks AI can create art does not know what art is. Art at its root is a human experience. Can humans use AI to create art? Absolutely. But if you are not including any human expression in what is created, then it is not art. AI can probably create synthesised artefacts, but if we cannot attach meaning to them, then surely you can't call it art. Or am I wrong about this?
@@prickwillowartclub4738 because somewhere there will be a company who thinks they can get the machine to do what the human did for less money to increase their profit margin.
I don't know if they can pay for it as a licence, i.e. buy it for three years and then renew, because of how the tech works: once the system is trained, that's it; the content itself isn't actually being accessed repeatedly.
It's one thing that it 'owns a copy' of the work, but every time it generates an output that's directly derivative of the work, that output should be liable for charges.
@@NotThatOneThisOne There is some work aiming at that, but the technology doesn't really function that way: basically, the entire training set is used for all answers. The model can lean towards sounding like A or B on request because it knows what A or B sound like, but all of the input is still used, and it is very hard (though, again, people are working on this) to know exactly which parts were the basis for any given answer. This isn't to excuse the issue, only to say the compensation might need to be a higher one-time fee rather than a per-use payment.
@@NotThatOneThisOne AI isn't built in a way that allows it to forget previous learning materials. Sure, they could invent and implement something like an "undo" button, but some powerful governments would have to force them to do that. And even if that happened, an AI will always give an answer. Stephen King's books can't be used, but the AI read on Reddit that Grady Hendrix has a similar writing style; would you notice the difference? And if that's off limits too, there might be a self-published Stephanie Queen parody somewhere which never sold more than 2 copies. Or the AI considers TV shows like The Simpsons, South Park, Rick & Morty, Garth Marenghi's Darkplace and many, many more as acceptable references. That will work, because Stephen King is a pop culture icon. For people like Marina Hyde there will be fewer references outside of her own work, but the AI would still know that she is a columnist and would write a column without telling the user that the style is only guesswork. Because AI is always only guesswork, and it's the user's job to verify or dismiss it.
It's quite funny that Friends now is considered not very diverse but I remember when watching it for the first time thinking how diverse it was because one of the characters was Italian (Catholic) American and three were Jewish. It was incredibly powerful having non-WASP characters.
Where did you grow up? I feel like Italian Americans and Jewish Americans have been prominent in TV and movies for as long as there have been TV shows and movies.
@@daviebananas1735 Not New York, obviously. There have been Italian and Jewish American actors, but they were either gangsters, minor characters, or playing WASP (or indistinct from WASP) - David Janssen, Leonard Nimoy, William Shatner.
@@mjpledger What about Mel Brooks, Jerry Lewis, Gene Wilder, Barbra Streisand, Alan Arkin, the Marx Bros, Woody Allen, George Burns, Jack Benny, Garry Shandling, Jeffrey Tambor, Phil Silvers, Jerry Stiller, Jerry Seinfeld, Don Rickles, Bette Midler, Walter Matthau, Tony Curtis, Larry David, Lenny Bruce, Katey Sagal, Joan Rivers, Carol Kane, Jackie Mason, Dustin Hoffman, Fran Drescher, Roseanne Barr, Jason Alexander, Scott Wolf, etc.? I could go on. And that's just Jewish Americans.
They tried that in France: creating shows based on spec sheets of ethnicities and sexualities rather than characters, but that's all they were: clichés, or so anti-cliché they became clichés. People hated their lack of depth and authenticity.
I was hoping when Richard read out the AI generated Marina Hyde column he would start with the intro. I’ve been reading her excellent columns for years in the guardian and I think I could spot a well made imitation introduction, and maybe fake one myself
It's the parents who buy the tweens this stuff. Blame them, not the children. I remember wanting a pair of Olivia Newton-John's Grease high-heeled sandals as a 10-year-old. You'd think my mum had ripped my arm off, the tantrum I threw when she said 'absolutely not, you're far too young'.
I was commissioned to edit an AI-generated novel. It was an absolute mess in terms of the writing (tenses were inconsistent), but there were some very good twists that I suspect came from a person rather than AI inspiration. I gave up in the end. They will get better, though, so now is the time to get legislation in place.
That's the whole point about 'AI': it's only working off existing material. Currently that's still produced mainly by humans, but that will flip, probably quicker than we even think.
I've worked in customer service and hospitality for years, and there is absolutely no way that the make-up and skin care companies did not have their finger on the pulse of this from the beginning. When you are behind the counter you notice the number of children buying products, and the word goes up the chain about it. Make-up and skin care should be treated like all "adult" products: not to be sold to anyone under 18. If we had, as a culture, made it clear that make-up is an age thing, this wouldn't be an issue.
It was the same with the adult industry. These websites came along, basically just stole everything until they got big enough and rich enough to get the law written in their favour. And now the richest person in the adult industry isn't someone who actually makes any of it, but the person who hosts it on their servers.
We need to put more responsibility on companies that advertise their products. Simply make it illegal to advertise to people under the age of 18, and if they claim they can't control that on certain platforms, then they simply can't advertise on those platforms. It seems that in today's society, companies' right to bombard people with lies is more protected than the wellbeing of the people.
Visual creators (concept artists, illustrators etc) have been utterly screwed over by gen AI, and personally I have no interest in any media that has been created using it. On top of all the thievery, AI stuff simply looks and feels scammy (and then there's the environmental impact too, oh gee how fun and ethical). I genuinely don't know why anyone would want their brand associated with it. I am only interested in human art made by humans for humans, and that's that.
Usually, nothing can stop Marina when she's in full flow of her stream of consciousness, including Richard's aside jokes which always seem to whoosh past her going unnoticed, but the Sephora joke actually did derail her momentarily - is that a first? 🙂
The AI crawlers don't keep a record of what they have copied as they copy it. It takes a blistering amount of computing power just to copy all the things, analyze them, and synthesize them into a response to a prompt. It would take *even more* computing power to keep a bibliography of what is being copied, let alone to track whether or not it is allowed to be copied. And yet, that is exactly what lawmakers should demand tech companies do. The AI must keep a bibliography and not copy illegally. All AI that does copy illegally should be classed as a virus, legally and technologically.
The thing is - it does not copy in the sense most people would interpret the word. It statistically analyses the words and plots them in a multi-dimensional vector space that encodes the meaning of words in an extraordinarily detailed way. Each word’s vectors are a result of all the instances of that word on all the material it is trained on. So it is analysing the language across all works, not copying those words.
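A toy sketch of what that plotting looks like (the three-number vectors and the words here are made up purely for illustration; real embeddings have thousands of dimensions and are learned from data, not written by hand):

```python
# Each word is stored as a point in space, not as quoted text from any book.
from math import sqrt

embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.1],
    "apple": [0.1, 0.5, 0.9],
}

def cosine(a, b):
    """Cosine similarity: closer to 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norms

# Related words end up near each other; unrelated words end up far apart.
print(cosine(embeddings["king"], embeddings["queen"]))  # relatively high
print(cosine(embeddings["king"], embeddings["apple"]))  # relatively low
```

The point is that what gets stored is geometry (numbers whose distances encode meaning across all the training material at once), not passages copied from any one work.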
@@MrNilOrange that is indeed copying. It's not storing, but it is copying. Ironically, the AI actually ought to store the words, with proper attribution. In that multi-dimensional vector space of meaning as you term it, each node should have a bibliography. That way the works that inform that meaning are known and their authors can be credited. Without that attribution, large language models are elaborate theft. Lawmakers ought to advance that principle, but they haven't the understanding or the mettle.
@@thescowlingschnauzer Why should we make machines do it when humans don't though? If I write something, I don't have to reference where I first learned every word in this sentence, nor the idea behind it. It's not that I think we should just let AI run riot but I can't see a way of dealing with it without just saying "it's okay for humans do to this, but it's not okay for a human to make a machine to do this" which I find a bit uncomfortable.
28:52 Very hard disagree with Marina's characterisation of South Korean culture (disappointed Richard didn't pause her at any point; the video also has a cut suggesting parts of her anti-Korean tirade were edited). It is clearly incredibly biased and ignorant, from someone who has never visited herself.

1. South Korea is a pioneering country in skincare and treatments because, as in many other countries, there is pressure to make yourself stand out. The same applies equally to the UK, where girls attending even primary school apply make-up liberally. What is the harm in a country being a pioneer that others can be inspired by? Korean treatments are never targeted at children.

2. "Dystopian" reeks of British exceptionalism: a country that is easily a decade ahead in tech (it had a cell network allowing people to watch TV on their smartphones back in 2009, when I lived there) is reduced to being a dystopia.

3. "Massively patriarchal and low birthrate" again reeks of ignorance. Women are highly educated and well represented in the upper echelons of leadership, both political and commercial. And birthrates are falling globally, including in Britain; equating that with patriarchy is a very conservative viewpoint coming to the surface.

I would invite Marina to speak to some Korean people herself, or even visit the country. Very disappointing to hear this from a journalist of note, while the comments go mad about her choice of book arrangement or Quality Street prefs (which are both twisted, but not the same weight as her rant about South Korea).
Is there really a diversity issue in traditional TV? I remember reading somewhere recently that, statistically, there is an overrepresentation compared to the census data.
AI companies paying for back catalogues is actually the right thing to do. What's wrong is this: once you put the back catalogue into the AI and it copies that data to make connections, it doesn't track which data was used to make a given connection. AI can't delete, unlearn, or selectively forget. At least, not as currently designed.

It should be made law that AI programs must track the data behind every connection and must be able to remove connections based on copyrighted data, and that any AI program that fails to do so is a virus and a tool of theft. But legislators don't have the understanding or the balls to do it. Pity, that. Because otherwise Richard's model of licensing a back catalogue to AI for a certain number of years would be a great way for copyright holders to get paid.

In fact, if AI programmers solved the bibliography problem (making AI remember what it copied for each connection), you could automate the lawyers right out. The moment someone published an AI-generated work, it could trigger a digital payment from the publisher to the copyright holder(s) immediately. No legal interpretation needed.
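To show what I mean by automating the lawyers out, here is a purely hypothetical sketch (the source names, the flat £120 fee and the equal split are all invented; nothing like this exists in real training pipelines): each published AI work carries a bibliography of the licensed sources it drew on, and publishing it triggers the payout automatically.

```python
# Hypothetical "bibliography triggers payment" design, for illustration only.
from collections import defaultdict

LICENCE_FEE = 120.0  # invented flat fee per published AI-generated work

def settle(published_works):
    """Split the fee for each published work equally among its credited sources."""
    payouts = defaultdict(float)
    for sources in published_works:
        share = LICENCE_FEE / len(sources)
        for rights_holder in sources:
            payouts[rights_holder] += share
    return dict(payouts)

# Two published works: the first credits two back catalogues, the second one.
payouts = settle([
    {"Hyde back catalogue", "Osman back catalogue"},
    {"Hyde back catalogue"},
])
print(payouts)  # Hyde: 180.0, Osman: 60.0
```

The hard part, of course, isn't the payment logic; it's making the model keep the bibliography in the first place.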
AI can absolutely be retrained to unlearn selections of datasets, though; it is something that is done when undesirable effects are found after model deployment (Xu et al., Mar 2024). Neural networks are not designed to have metadata attached to every connection; they're universal function approximators, and we don't even know what the model's connections mean. They are black boxes, and there is a whole sub-field of machine learning researchers who have spent years trying to understand what the weights between nodes actually mean. You can't just say that node 5 in layer 3 of multi-attention-head block 3 is connected to this and this and this, for every single one of the hundreds of billions of parameters of these models; it's nonsensical. Deploying these models is already a challenge without this as well. An easier approach is that any data used to train the models should be subject to audits by a third party or a state-run organisation that can verify where the data comes from and whether it was sourced ethically and legally, along with all other data protection checks.
Will the logical conclusion be that AI will "create" output which will subsequently be scraped by other AI and re-output ad infinitum? How long before the results become utter gibberish?
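You can get a feel for why with a toy simulation (my own loose analogy, nothing like real training): each "generation" is built only from the most typical outputs of the previous one, the way a model trained on model output favours its most probable material, so the unusual stuff steadily disappears.

```python
# Loose analogy for AI-trained-on-AI: each generation keeps only the 90% of
# the previous generation's output closest to the average.
import random
import statistics

random.seed(0)
data = [random.gauss(0, 1) for _ in range(1000)]  # generation 0: "human" work
spread = [statistics.pstdev(data)]                # how varied the output is

for generation in range(10):
    mu = statistics.mean(data)
    # drop the least typical 10%; the tails never make it into the next round
    data = sorted(data, key=lambda x: abs(x - mu))[: int(len(data) * 0.9)]
    spread.append(statistics.pstdev(data))

print(f"variety at gen 0: {spread[0]:.2f}, at gen 10: {spread[-1]:.2f}")
```

Real model collapse is messier than this, but the direction of travel is the same: variety shrinks with every pass.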
Note: don't forget about the creep of predators in the manga anime world in forums online and how that directly connects to kids suddenly transing themselves.
I think teens using Botox is far more concerning than kids knowing it's important to look after and care for their skin. Again, it's all about getting the balance between keeping healthy skin and the scary extremes of beauty obsession.
Claiming diverse TV would accumulate more viewers is like saying niche food sells more than mainstream fast food. What sells most is what's easily consumed and appeals to the masses. This tends to correlate with lower quality and less diversity, since formulas are used to increase appeal at the lowest possible cost, which in the end makes all products very similar to each other.
Maybe the argument becomes more palatable if it's about all jobs, not just the so-called creative sector... You can't be happy with self-service tills and AI bots replacing call centres and then go "wait, don't take the upper-middle-class jobs".
*Alphaville* The comment about AI not understanding WWI because it never read poetry on the topic reminds me of the great French movie "Alphaville". How to undermine the technological system underpinning everything? Perhaps... poetry... Time for an update, perhaps.
I think for the most part I completely disagree with Richard about AI (which is unusual for me). Here's an interesting question: if Richard stands up on stage, or says on this podcast, 'As a great author once said, "It's only after we've lost everything that we're free to do anything"', and then goes on to talk about his relationship to that quote, is he not profiting off and stealing someone else's work? Every time a professor, lecturer or teaching doctor passes on information that they have learned but that belongs to someone else, are they not stealing it?

If we compared an AI trained on the works of just one author to a person who has studied the works of the same author, and asked them both a question (whether we paid them to answer or they answered freely), aren't they both stealing if one is? There are only two issues I can see here: has the copy of the literature (or whatever medium) been paid for (i.e. not pirated), and does the medium have a commercial-licence version? (I know some non-fiction is printed with the explicit purpose of the reader passing it on indirectly, usually for education purposes, and these copies cost more.)

Just because technology can read faster and make better connections than a person, I don't believe this fundamentally changes any morals. At most I think an AI should have to pay a marked-up price to read/watch the content; it certainly shouldn't be free and/or pirated. But other than that, I think it's pretty fair game.
No, because referring to a small excerpt from a text, in this example, is legal under copyright's fair use terms. AI scraping isn't designed to onboard small parts; it has to have the whole work in order to analyse it.
@NotThatOneThisOne I'm semi-familiar with fair usage. So is it fair usage to quote different excerpts on multiple occasions? Where does frequency play into fair usage? Because again, an AI can just do it quicker; the morals don't seem any different. I do agree that if you asked an AI to recite a work and it gave you the whole work verbatim, that would be morally wrong. But if I asked my professor to recite an entire poem from memory, how is that any different from an AI doing so?
The show has become uninteresting because Richard and Marina are too similar in their backgrounds - both knowledge and political leanings. In this case, both woefully misunderstand how llms function, and the basics of copyright law, but they just reinforce each other and mislead their audience.
I struggle to believe readers want to read AI books. I think it should be the bare minimum to at least have these things clearly marked on storefronts.
Blame the parents, not the companies, and definitely not the children. If their parents taught them that those cultures generally eat well and get fresh air and sunlight regularly, maybe they would do that?
The skincare issue is worrying. If parents will pay for these goods for children, despite them not needing them, then that is part of the issue. However, the level of influencing through social media is incredibly intense, so it's no wonder children want these goods; it's like 80s capitalism x 100.
AI need to get in the sea. Everyone who works in AI needs to get in the sea. It is now our sworn duty to fill the Internet with as much crap as possible to ensure the scrapers are learning crap.
1. How about having a knowledgeable AI expert in this discussion? 2. There are large bodies of music, writing, images, etc. that have been licensed for all manner of purposes; hard to unring that bell. 3. Richard needs to stop asserting without evidence what was stolen or how AI can be applied.
If an author reads a book and then writes their own book based on what they have learned, how is that any different to AI? I could, in theory, walk into a library, read every book on the shelf, take all that knowledge and write my own book. Surely AI is doing the same? Unless the work AI produces is exactly the same as an existing book, it's not copyright infringement, I would have thought 😅
As a thought exercise sure that's true, but things have worked out so far because a human cannot possibly do that. Now that AI models have made that a reality it's perfectly reasonable to work out parameters on how that is handled without it being an automatic free pass.
I have an argument in favour of AI. There's talk of these publishing and movie companies allowing AI companies to use their work for the AI, but that's what already happens with humans. We consume everything and make something new out of it. What's that saying - good artists borrow, great artists steal?
One thing about the AI copyright debate is the ridiculous length of copyright. Every time Disney IP was going to enter the public domain, the law was changed to increase the duration. If copyright terms were reduced to increase the amount of work in the public domain, there'd be less demand from tech companies to steal.
I don’t think creatives are happy about AI. I think people who are looking to profit from creativity are. AI doesn’t help someone be creative or have an original thought in any way. AI appeals to people who want a shortcut to creativity and capitalists
Large Language Models are the largest infringement of the copyright protections of artists and other creators since copyright laws were established to prevent this type of creative theft.
I rarely disagree with Richard, and I hesitate to do so now. But I think he has the wrong concept in his head about how LLMs work. They are not copying or emulating works. They are analysing them semantically. They are effectively learning English at an extraordinarily detailed level by reading everything possible and plotting the relationships between the words in a hugely multi-dimensional vector space. So they are learning English, and what words mean, and how they relate to each other, in exactly the same way that all humans and authors learned their language and their craft. If you think they are plagiarising, then we have all plagiarised every book we ever read and learned from.
The money is interested in AI; creatives are not. If you go online and look at the dialogue, creatives who understand the tech are scared, especially small independent producers who do not have the money to pursue and sue big tech companies who are stealing their data, be that the written word, images, or even 3D sculpture and video now. The idea that AI is going to be used to empower the workforce, and that somehow tech bros who don't pay taxes as they should will, out of the goodness of their hearts, pay a universal basic income, is preposterous. This is a fundamental fight over who owns our own data and our own intellectual property, and whether we have a say in how it is used. It's also very short-sighted of the government, because the creative industry is a big industry in the UK, and if you go out of your way to destroy it, including the little guys, then who is going to remain to pay the taxes? We've already seen record levels of unemployment, and this is only going to add to it. But fundamentally it's not even about whether people are going to lose their jobs; it's about the permission we give or don't give, and whether we have the right to withhold that permission. And this from the industry that told us copying DVDs was bad - now they're doing exactly the same thing, but it's OK because of the power and the money.
COPYRIGHT and AI: To begin with, the Lords is right - if AI steals copyrighted material, it is no different from someone else doing it. (Not that I think AI is a someone!) But I am confused about what that stealing is. The phrase which is relevant here is Teaching Materials. In essence, what is the difference between an English professor tasking their students with reading written works to understand style and construction, and a programmer tasking their LLM to do the same?

One difference is obvious - the student would need to get hold of the works somehow. This could mean buying them first- or second-hand, borrowing them from a friend, or borrowing from a library. AI doesn't have friends, so the LLM needs to do one of the others. When the student buys and reads a book, the author earns a single royalty. There is nothing in the licence granted by buying the book (buying does not mean you now own the copyright) that says you can or can't learn from it, or that you cannot remember it for future use, or retain it on your bookshelf for future use. And if that future use is being "inspired" by it, that is covered simply by the source not being referred to. However, you cannot reprint the book or republish it in any form (including recording it as an audiobook). You need a separate licence for that, which would normally include a payment.

And here is the confusion - how do you translate that into a mechanism for an LLM? When is the LLM just learning from source material, and when is it borrowing from source material? This is where agreements with publishers make sense: the company training the LLM essentially sets up some kind of library agreement. This is also where I think, previously, they have DEFINITELY stolen - they have had access to books without a library card. They have broken into the library. But having learnt from the material, how do you stop them plagiarising? Or how do you make sure they pay when they quote, and that is not covered by a public interest argument?
(My thought on the latter is that they don't have that option in the first place. Many internet creators use and abuse that idea to breaking point already.) I also agree with Stephen Fry. I have been banging on about this forever when it comes to AI narration - there is NOTHING in how AI works or is evolving which means it "understands" an idea. It is a machine that can assimilate and regurgitate knowledge in a consumable form, but it has no sense of understanding. It might be able to impersonate Marina's writing style, but any understanding of the subject is in the prompting written by Richard, or in knowledge gleaned from sources. It is not in the LLM itself. That is clear enough, but I don't think it is widely understood. So a company can set up a system where AI will act as an editor for your book, but they will omit to mention that their program won't understand your book in any way at all. And authors will fall for it and be ripped off. I would rather read a self-published book full of spelling mistakes and grammatical holes that is the honest work of a human being than read something that has been passed through a system with no human oversight at all. I use grammar checkers all the time, but it is ME who chooses to accept each and every suggestion. It is not automated and, for me, it never will be. I like a human to be there for every word, and for that human to be me.
As someone who works in this space, your analogy of learning from material versus borrowing actually maps onto the dichotomy of training the model and prompting it. When these models are trained, they have an architecture of billions of parameters that can be tuned in order to fulfil some task. In the first instance, their task (called pretraining, the P in GPT) is simply to predict the next word (it's actually "token", but I'm simplifying to word) in a sequence. What this actually does is instil a probability distribution over what the next word would be, given the context of the previous words. Once we have trained a model to do that, we can then go on to "fine-tune" it on more specific tasks, for example training it to output text in the style of William Shakespeare. In both of these training regimes, the internal parameters of the model are tweaked and changed so that the probability distribution of outputting a particular word is changed. When we prompt the model, however, all of those parameters are frozen, and what we're instead doing is activating the parameters of the model to predict the next token. In the background there will be some formatting done to prepend "User" and "Assistant" before each input and output, so the model takes that into consideration as per its training data; the probability distribution remains constant, provided the other generation settings are set appropriately, like "temperature=0". To me, if a model was trained on copyrighted materials, then the model developer should be paying a licensing fee to have that material in their dataset, or should remove it and retrain the model without it. Which brings me to the point that it's not necessarily the models that we should be regulating; it's the data used to train these models that should be regulated in some way. If you want to learn more, 3blue1brown has an amazing set of videos on it: ruclips.net/video/aircAruvnKk/видео.html
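To make the pretraining idea above concrete, here's a toy sketch in Python. This is my own simplification, not how any real model is built: it uses bigram counts instead of billions of learned parameters, but it shows the same principle of turning context into a probability distribution over the next word, sharpened or flattened by a temperature setting.

```python
import math
from collections import Counter, defaultdict

# Tiny "training corpus"; a real model trains on vast amounts of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count how often each word follows each context word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_distribution(context, temperature=1.0):
    """Return P(next word | context), adjusted by temperature."""
    freqs = counts[context]
    # Lower temperature sharpens the distribution; as it approaches 0,
    # generation becomes deterministic (always the most likely word).
    scaled = {w: math.exp(math.log(c) / temperature) for w, c in freqs.items()}
    total = sum(scaled.values())
    return {w: v / total for w, v in scaled.items()}

print(next_word_distribution("the"))  # 'cat' is most likely after 'the' here
```

The key point the comment makes holds even in this toy: once the counts (parameters) are fixed, prompting just reads the frozen distribution; only training changes it.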
I teach both undergrad and postgrad students in clinical fields and I have become an expert in sifting out essays that have been partially or wholly written using large language models. It is so prevalent in education and it frightens me because these are people who will be tasked with saving your life if you end up in their Intensive Care Unit. We are constantly chasing our tails trying to work out how we can work with the technology rather than against it. We are trying to design assessment tasks that make using AI impossible, or tasks that ask them to critique pieces of AI text. It's added SO MUCH work to our plates and the risks of letting it through are potentially risks to human lives.
I also work in post-secondary academia, and the use of the phrases "delve into" or "dive into" is a major red flag :)
How do you validate that your assessments are correct given that there are no accurate AI detection tools?
@@jmillward It's tricky, and that's why I say it's added so much to our marking loads. Firstly, we do as @Ryanneey says and look for common phrases. Secondly, we check every reference to make sure it a) exists; and b) says what the student tells us it says. Then we check how the student writes in less prescriptive forums like discussion boards. And finally, AI often just skims the surface of a topic, so we look for depth of examination. Are the students describing something, or are they critically evaluating it? You're right that we get a lot of false positives, and it's important not to wrongly accuse. That's why we do all of the above before we move it on to academic integrity.
@DrCalamityJan makes sense! I use AI all day every day and have also developed the instinct for sniffing out AI prose. Sometimes it's blatant, but I do think it can be masked by the right prompts and model choice hence why I was curious how you do it. I agree that one of the best methods is checking for confabulation, as you can always rely on laziness (of the person who doesn't want to check every word of the AI's work). However, it's only going to get harder to spot, so I don't envy you. Interesting times to be a student OR teacher...
In class tests are the only way to go.
Pre-teenage girls being made to feel insecure about their skin so people can profit is disgusting; I didn't even know that was a thing. Their mums need to tell them what Marina said: their skin is as healthy now as it will ever be, all those older women are buying the products to look like you, you don't need to buy them.
I'm sure it's similar to all the popular things to own when I was that age - they think they need these things to fit in, and to be part of their tribe. I also remember that if my mum had tried to tell me that I didn't really need these things, then I definitely wouldn't have believed her.
@ I’m a man, it’s not something I know anything about
There seems to be something sad and infuriating about this in that it’s based on saying “you could look better” which is the same as “you don’t look great”, and that seems a shitty thing to tell a 12 year old
Insecurity sells.
ruclips.net/video/bbbjWEnC3Gc/видео.htmlsi=-82dCE_VlK41K9mh
Most of this comes down to parenting or lack thereof.
The amount of mothers I see deliberately putting filters on their own kids is frightening
I was watching the back catalogue of all your episodes all week, and almost didn't realise I was clicking on a new video. So glad I found this podcast (I have ordered your book, Richard).
If I miss one I feel like I’ve missed a lesson at school where I’ve missed a key nugget. Each episode is full of absolute gold
So general ads for pitta bread snacks and porridge = banned for kids vs £48 unnecessary face cream specifically targeted at children = fill your boots. FFS.
Let's not forget that the license fee doesn't just pay for the BBC to make programmes; it maintains the infrastructure for broadcast media, too.
Maybe if the BBC had some respect for the licence fee payers and controlled expenditure they would not need to keep putting the licence fee up and up and up.
@@Subcomandante73 Inflation doesn't happen to them?
@@Subcomandante73you never know what you've got till it's gone.
2018
Google’s unofficial motto has long been the simple phrase “don’t be evil.” But that’s over, according to the code of conduct that Google distributes to its employees. The phrase was removed sometime in late April or early May, archives hosted by the Wayback Machine show.
No wonder Australia is banning social media for children. Wish more governments had the balls to do it too.
I worked for several decades as a proofreader of non-fiction. I eventually found myself out of work thanks to computer spell-checkers and Grammarly (despite their shortcomings!). The material that they are using to train AI feels like the work of me and my colleagues (aka 'the grammar nazis'). Naturally, we won't ever see a penny!
Timely thing on skincare: the BBC have an article today by Amy Garcia (17th Dec), titled "Parents warned over child use of skincare products". Crazy.
Also regarding the government and culture, they, alongside the previous leaders, have no idea how to support culture and the arts. We have an amazing industry full of stunning talent, but if they’re being squeezed too far, the products come out of where the money is and that won’t be the UK.
I’d love to see more British shows that show the spectrum of experience here, every time I speak to my American friends particularly, they absolutely love British shows and talent and rate it far higher than anything in the US.
I’ve got loads of writing ideas and tried getting into the industry years ago but just couldn’t get anywhere near, but I still love to write and create.
Marina, "they know everything as they've always had the answer to every single question in their pocket, but they haven't developed any emotional intelligence" ........sounds like kids and AI in a nutshell.
They read words they do not understand, and they gain no understanding as they did not do any work to try to understand it.
What a fantastic piece on diversity in visual media. I tune in for pieces like this! The only thing I would add is having lived with someone from a different culture to the one I was born into, their Netflix homepage was TOTALLY different to mine. It’s evident that people tend to favour watching programming that speaks to them, and right now Britain is maybe too diverse for a national identity that can fund a single, independent media outlet.
Wonderful, Miranda on the Influencer/skin care topic
32:00 Maybe I'm not the young people anymore, but being a millennial I remember the start of this discussion.
I remember being really into bbc 3 growing up. Great channel. Lots of fun shows.
Then they decided to move that to streaming only.
Now they've brought it back, but I still get it on streaming because I'm used to that now.
And what you've got to bear in mind is basically every network did this. There were online exclusives for most on-demand services, so we were basically trained not to watch TV.
Bear
@@longjonwhite oh huh. fair enough, changed
Yes, but BBC 3, and maybe 2, used to commission these incredible 7-minute interview shows that lifted the rock on whole areas of society you didn't know existed - The Secret Life of ... [Booksellers, to give an example] was a great one.
Season's greetings, happy holidays, cool yule, and merry celebrations
I have written to my MP. As a retired IT professional who now writes fiction, I understand the inevitability of adopting such technologies, but also that they need to respect existing laws.
Disruption isn't always a good thing. Look at how Airbnb has impacted the affordability of long-term rental properties in popular destinations, or how the likes of Uber and others have sought to undermine workers' rights by not recognising them as employees at all.
Isn't this somewhat similar to diverting manufacturing to places such as China, losing all our own manufacturing capabilities, and then bemoaning our dependency on that stupid and frankly obvious outcome?
should be opt in not out
At our local Sephora the consultants are constantly despairing over the tween purchases. The store has considered banning sales under a certain age, but it is the parents paying and insisting their kids can have this stuff. And while we know the people who run the beauty industry have no morals, I cannot get over the parents paying! I might add that I live in a city (not UK) that is affluent and "well" educated - the most progressive and privileged in our country.
Remember when optimistic sci fi authors thought AI would take over the mundane, boring stuff and let people spend more time in creative endeavours? Yeah....
Let's face it, all great technological innovations are sold to us as 'imagine what this could do for humanity!' (i.e. how good it's going to be), but they always end up as exploitative tools, because the lust for power is always irresistible. And to have power, you need control. The internet is a prime example of this. I can't even fathom what the world will be like with 'full' AI and quantum computers. I'm sure they will achieve a few good things (like curing diseases), but at the end of the day, I am not so sure humanity as a whole will benefit from it.
I used to be a freelance copywriter for a company which no longer pays for human-written copy. Just to play devil's advocate: if I read a Stephen King novel, am I not then also trained on the work of Stephen King? Would I be violating his copyright if I then wrote a story aping his style (assuming I paid for my copy of the novel)?
Exactly, it's not at all as simple as they make out
Can someone tell me about lip balm? I use it all the time, and I’m really interested 😯
I'd love to see the evidence for AI saving the government and businesses a lot of money.
This episode is one of the best!
The only skincare regime for tweens is soap, water and sunscreen, which will mean they don't need any of the expensive stuff later on, when the bank of Mum & Dad won't be funding it 😎
Loved this episode, Marina made me laugh so much today
I work with a tween brand in the USA for boys, and I'd say they echo your thoughts entirely. The young ladies want the "feature ingredients" they see advertised for adults (like retinol), even though they would be bad for them. The brand I work with understands that and has products designed for younger people to help protect that perfect skin rather than improve it.
I got sent the questionnaire the government has sent out, and from what I've read, the questions clearly indicate that the government is going to rewrite copyright law in favour of the tech bros.
As a regular reader of Marina Hyde's brilliant Guardian columns, I would say that while the paragraph of the AI-generated Marina Hyde column didn't sound quite right, it didn't sound hilariously wrong either. A new world is coming.
I am a regular reader too, and I thought it did sound a bit like her, but I wanted to hear the AI version of one of her introductory paragraphs, as I think I could write one of those.
I've got two little girls (5 and 2) and I'm now feeling quite terrified of how to deal with all those influencer things as they grow up, I hope it isn't quite as bleak as Marina makes it out to be!!
Back in the 80s wasn’t the product the entertainment? He-Man She-ra Transformers? I remember when cartoons were the influencers that had ads for the toy attached.
The overpushing of diversity in TV and advertising just makes things look unrealistic. It doesn't achieve what it intends and actually has the opposite effect for me personally. Having a black woman and white guy sitting on their DFS sofa with their Southeast Asian kid and their three legged dog just makes me feel alienated. It certainly isn't going to make me want to buy anything or influence my decision either way. DEI for the sake of DEI is at best pointless and at worst very dangerous.
dangerous? seriously get a grip.
I think that's maybe just you. I like the diversity, even if it's not realistic. TV has never actually been representative, so it's nice to see some different kinds of people at least.
There's zero self awareness or empathy in this response. If a multiethnic family makes you feel alienated you should be looking at yourself, not DFS.
Our Saturday morning cartoons were nothing but 30 minute commercials, but I agree that consumerism as entertainment has been taken to the extreme.
Artificial Intelligence, specifically generative AI, is colonising creativity as we know it. We have to do more to protect creatives, artists, and designers, as this is taking from them more than just their creative outputs - it is also taking their soul's journey of lived experience.
The Beeb missed the streaming wars, and they had the product and iPlayer. They should've opened it up to the world and charged for the privilege. Instead they sold off the shows to the streamers, which are now killing them.
My 82-year-old dad steals a budget tape measure every time he goes to B&Q. He normally uses the excuse that he'd forgotten he'd put it in his pocket; I'll suggest next time he blames AI data scraping.
when marina really gets going I have to turn the playback speed down to 0.75 😅
There's so much in that incredible brain of hers and it's just screaming to be let out.
I fear this conversation about AI learning comes too late. I'm facing AI every day in the art world, and the theft of copyrighted works has already been done. Sure, you can protect future works, but even though the snowflake is part of the avalanche, you're just removing a few snowflakes from that avalanche... The question being asked is the wrong one. It's not whether AI can create art to the level of our best and brightest; the question is, does it have to? All AI has to do is be good enough to pass a first glance; once it can do that, it becomes a matter of volume, reducing the signal-to-noise ratio and pushing human creations somewhere into the haystack. In my field I currently see a problem with young artists, who no longer have a chance to hone their skills on the small projects that are absolutely crucial to paying a young artist's bills - things like posters for bands and venues, artwork for thumbnails, the local football team's flyers for the barbecue... All of those are done by AI today, completely cutting off an entire stream of artists. We're literally going back to the times when art was for the rich. That said, AI companies will be the ones hiring writers and artists to feed their beast, because these models self-destruct when they source their own content.
I came across a video of a 3D printer (with AI software) that can mimic the brushstrokes of artists, to get that impasto, handworked 'product'. Why? Why do technologists want to eradicate any human creativity?
To the question "can AI create art?", my answer would be no. Anyone who thinks AI can create art does not know what art is. Art at its root is a human experience. Can humans use AI to create art? Absolutely. But if you are not including any human expression in what is created, then it is not art. AI can probably create synthesised artefacts, but if we cannot attach meaning to them, then surely you can't call it art. Or am I wrong about this?
@@prickwillowartclub4738 because somewhere there will be a company who thinks they can get the machine to do what the human did for less money to increase their profit margin.
@@dibdab101 I agree but where are you going to find the art in a sea of derivative work.
@@SkinnyObelix that's my point. you won't.
Does AI stand for Accesed Illegally?
I don't know if they can pay as a licence, i.e. buy it for 3 years and then renew - because of how it works, once the system is trained, that's it; the content itself isn't actually accessed again and again.
It's one thing that it 'owns a copy' of the work, but every time it generates an output that's directly derivative of the work, that output should be liable for charges.
@@NotThatOneThisOne There is some work aiming for that, but given how the technology works, it doesn't happen that way: basically the entire training set is used for all answers, and models lean on or sound like A or B upon request because they know what A or B sound like. All input is still used, and it is very hard (though, again, there are projects attempting it) to know exactly what material was the basis for any answer. This isn't to excuse the issue, only to say the compensation might need to be higher as a one-time fee rather than on a repeat basis.
@@NotThatOneThisOne AI isn't built in a way that allows previous learning material to be forgotten. Sure, they could invent and implement something like an "undo" button, but some powerful governments would have to force them to do that.
And even if this happened, an AI will always give an answer. Stephen King's books can't be used, but the AI has read on Reddit that Grady Hendrix has a similar writing style - would you notice the difference? And if that's off limits too, there might be a self-published Stephanie Queen parody somewhere which never sold more than 2 copies. Or the AI considers TV shows like The Simpsons, South Park, Rick & Morty, Garth Marenghi's Darkplace and many, many more acceptable references. That will work, because Stephen King is a pop culture icon.
For people like Marina Hyde there will be fewer references outside of her own work. But AI would still know that she is a columnist and write a column without telling the user that the style is only guesswork. Because AI is always only guesswork, and it's the user's job to verify or dismiss it.
It's quite funny that Friends now is considered not very diverse but I remember when watching it for the first time thinking how diverse it was because one of the characters was Italian (Catholic) American and three were Jewish. It was incredibly powerful having non-WASP characters.
Where did you grow up? I feel like Italian Americans and Jewish Americans have been prominent in TV and movies for as long as there has been TV shows and movies.
@@daviebananas1735 Not New York, obviously. There have been Italian and Jewish American actors, but they were either gangsters, minor characters, or playing WASP (or indistinct from WASP) - David Janssen, Leonard Nimoy, William Shatner.
@@mjpledger What about Mel Brooks, Jerry Lewis, Gene Wilder, Barbra Streisand, Alan Arkin, the Marx Bros, Woody Allen, George Burns, Jack Benny, Garry Shandling, Jeffrey Tambor, Phil Silvers, Jerry Stiller, Jerry Seinfeld, Don Rickles, Bette Midler, Walter Matthau, Tony Curtis, Larry David, Lenny Bruce, Katey Sagal, Joan Rivers, Carol Kane, Jackie Mason, Dustin Hoffman, Fran Drescher, Roseanne Barr, Jason Alexander, Scott Wolf, etc.? I could go on. And that's just Jewish Americans.
They tried that in France: creating shows based on checklists of ethnicities and sexualities rather than characters, but that's all they were - clichés, or so anti-cliché that they became clichés. People hated their lack of depth and authenticity.
Love the reference to the Underpant Gnomes.
I was hoping that when Richard read out the AI-generated Marina Hyde column he would start with the intro. I've been reading her excellent columns for years in the Guardian, and I think I could spot a well-made imitation introduction, and maybe fake one myself.
It's the parents who buy the tweens this stuff. Blame them, not the children. I remember wanting a pair of Olivia Newton-John's Grease high-heeled sandals as a 10-year-old. You'd think my mum had ripped my arm off, the tantrum I threw when she said 'absolutely not, you're far too young'.
@@DaddyBiscuits Do you need a tissue?
I live near a Sephora... suppose I should secret-shop the location after Christmas? Oh my gosh.
I was commissioned to edit an AI-generated novel. It was an absolute mess in terms of the writing - tenses were inconsistent - but there were some very good twists that I suspect came from someone else rather than AI inspiration. I gave up in the end. They will get better, though, so now is the time to get legislation in place.
That's the whole point about 'AI': it's only working off existing material. Currently that's still produced mainly by humans, but that will flip, and probably quicker than we think.
I've worked in customer service and hospitality for years, and there is absolutely no way that the make-up and skincare companies did not have their finger on the pulse of this from the beginning. When you are behind the counter, you notice the number of children buying products, and word goes up the chain about it.
Make-up and skincare should be treated like all "adult" products: not to be sold to anyone under 18. If we, as a culture, had made it clear that make-up is an age thing, this wouldn't be an issue.
Parents stopped parenting.
Copyright is actually stronger than is being stated here. All things that are created (with some caveats) are copyrighted.
It was the same with the adult industry. These websites came along, basically just stole everything until they got big enough and rich enough to get the law written in their favour. And now the richest person in the adult industry isn't someone who actually makes any of it, but the person who hosts it on their servers.
We need to put more responsibility on companies that advertise their products. Simply make it illegal to advertise to people under the age of 18 and if they claim they can't control that on certain platforms, then they simply can't advertise on those platforms. It seems that in today's society, companies right to bombard people with lies is more protected than the wellbeing of the people.
Visual creators (concept artists, illustrators etc) have been utterly screwed over by gen AI, and personally I have no interest in any media that has been created using it. On top of all the thievery, AI stuff simply looks and feels scammy (and then there's the environmental impact too, oh gee how fun and ethical). I genuinely don't know why anyone would want their brand associated with it. I am only interested in human art made by humans for humans, and that's that.
Usually, nothing can stop Marina when she's in full flow of her stream of consciousness, including Richard's aside jokes which always seem to whoosh past her going unnoticed, but the Sephora joke actually did derail her momentarily - is that a first? 🙂
The AI crawlers don't keep a record of what they have copied as they copy it. It takes a blistering amount of computing power just to copy all the things, analyse them, and synthesise them into a response to a prompt. It would take *even more* computing power to keep a bibliography of what is being copied, let alone to track whether or not it is allowed to be copied.
And yet, that is exactly what lawmakers should demand tech companies do. The AI must keep bibliography and not copy illegally. All AI that does copy illegally should be classed as a virus, legally and technologically.
The thing is - it does not copy in the sense most people would interpret the word. It statistically analyses the words and plots them in a multi-dimensional vector space that encodes the meaning of words in extraordinarily fine detail. Each word's vector is the result of all the instances of that word in all the material it is trained on. So it is analysing the language across all works, not copying those words.
@@MrNilOrange that is indeed copying. It's not storing, but it is copying.
Ironically, the AI actually ought to store the words, with proper attribution. In that multi-dimensional vector space of meaning as you term it, each node should have a bibliography. That way the works that inform that meaning are known and their authors can be credited. Without that attribution, large language models are elaborate theft. Lawmakers ought to advance that principle, but they haven't the understanding or the mettle.
Ooh, you mean like data mining and those horrendous bitcoin farms, or whatever they are called.
@@thescowlingschnauzer Why should we make machines do it when humans don't, though? If I write something, I don't have to reference where I first learned every word in this sentence, nor the idea behind it. It's not that I think we should just let AI run riot, but I can't see a way of dealing with it without just saying "it's okay for humans to do this, but it's not okay for a human to make a machine that does this", which I find a bit uncomfortable.
28:52 Very hard disagree with Marina's characterisation of South Korean culture (disappointed Richard didn't pause her at any point; the video also has a cut suggesting parts of her anti-Korean tirade were edited). It's clearly incredibly biased and ignorant, coming from someone who has never visited herself.
1. S Korea is a pioneering country in skincare and treatments because, as in many other countries, there is pressure to make yourself stand out. The same applies equally to the UK, where girls attending even primary school apply make-up liberally. What is the harm in a country being a pioneer that others can be inspired by? Korean treatments are never targeted at children
2. Dystopian - it reeks of British exceptionalism that a country easily a decade ahead in tech (it had cell networks allowing people to watch TV on their smartphones by 2009, when I lived there) is reduced to being a dystopia
3. Massively patriarchal and low birthrate - again, this reeks of ignorance. Women are highly educated and well represented in the upper echelons of leadership, both political and commercial. And birthrates are falling globally, including in Britain - a very conservative viewpoint comes to the surface when that is equated with patriarchy
I would invite Marina to speak to some Korean people herself, or even visit the country. Very disappointing to hear this from a journalist of note, while the comments go mad about her choice of book arrangement or Quality Street prefs (both twisted, but not the same weight as her rant about S Korea)
Always informative, always excellent. My 21yr old heard me listening and popped her head round the study door to confirm she's a fan as well.
OPT IN needs to be the default, and copyright law needs to be enforced.
Is there really a diversity issue in traditional TV? I remember reading somewhere recently that, statistically, there is an overrepresentation compared to census data
Se four or five of them. Fair bloody play Richard 😂
Nepotism doesn't help either... it's absolutely rife within the TV industry.
AI companies paying for back catalog is actually the right thing to do. What's wrong is: once you put the back catalog into the AI, and it copies that data to make connections, it doesn't track which data was used to make a certain connection. AI can't delete / unlearn / selectively forget. At least, not as currently designed.
It should be made into law that AI programs must track data in every connection and must have the ability to remove connections based on copyrighted data, and it should be made law that any AI program that fails to do that is a virus and a tool of theft. But legislators don't have the understanding or the balls to do it.
Pity, that. Because otherwise Richard's model of licensing back catalog to AI for a certain number of years would be a great way for copyright holders to get paid.
In fact, if AI programmers solved the bibliography problem (making AI remember what it copied for each connection), then you could automate the lawyers right out. The moment someone publishes an AI-generated work, it could trigger a digital payment from the publisher to the copyright holder(s) immediately. No legal interpretation needed.
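The payment step of this idea is the easy part and can be sketched in a few lines. The hard part is the premise: it assumes an attribution report saying which works contributed to an output, and with what weight, which is a capability current models do not have. All names and numbers below are invented.

```python
# Hypothetical sketch of the automated-payment idea above. It assumes a
# capability today's models do NOT have: an attribution report listing which
# licensed works contributed to a generated output, with weights.

def split_royalty(payment_pence, attributions):
    """Divide one payment among rights holders in proportion to their
    (hypothetical) contribution weights."""
    total = sum(attributions.values())
    return {holder: payment_pence * weight / total
            for holder, weight in attributions.items()}

# Invented attribution report for a single published AI-generated work.
report = {"Author A": 3.0, "Author B": 1.0}
print(split_royalty(100, report))  # Author A gets 75p, Author B gets 25p
```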
AI can absolutely be retrained to unlearn selections of datasets, though - it's something that is done when undesirable effects from model deployment are found (Xu et al., Mar 2024).
Neural networks are not designed to have metadata attached to every connection; they're universal function approximators, and we don't even know what the model connections mean. They are black boxes, and there is a whole machine-learning subfield of researchers who have spent years trying to understand what the weights between nodes actually mean.
You can't just say that node 5 in layer 3 of multi-head attention block 3 is connected to this and this and this, for every single one of the hundreds of billions of parameters of these models - it's nonsensical. The challenge of deploying these models is already an issue without adding this as well.
An easier way of doing this is that any data used to train the models should be subject to audits by a third-party or state-run organisation that can verify where the data comes from and whether it was sourced ethically and legally, along with all other data-protection checks.
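The audit idea is much more tractable than per-connection metadata, because provenance attaches to the dataset, not the weights. A minimal sketch, assuming each training document carries a provenance record; the field names, licence strings, and URLs below are all invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical provenance record for one training document; the fields and
# the set of acceptable licences are invented for this sketch.

@dataclass
class ProvenanceRecord:
    source_url: str
    licence: str          # e.g. "CC-BY-4.0", "commercial", "unknown"
    consent_obtained: bool

ALLOWED_LICENCES = {"CC0", "CC-BY-4.0", "commercial"}

def audit(dataset):
    """Return the records that would fail a provenance audit."""
    return [r for r in dataset
            if r.licence not in ALLOWED_LICENCES or not r.consent_obtained]

dataset = [
    ProvenanceRecord("https://example.org/a", "CC-BY-4.0", True),
    ProvenanceRecord("https://example.org/b", "unknown", False),
]
print(len(audit(dataset)))  # 1 record fails the audit
```

An external auditor would run checks like this over the training manifest before the model is ever trained, which is exactly why regulating the data is easier than regulating the weights.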
Will the logical conclusion be that AI will "create" output which will subsequently be scraped by other AI and re-output ad infinitum?
How long before the results become utter gibberish?
Note: don't forget about the creep of predators in the manga anime world in forums online and how that directly connects to kids suddenly transing themselves.
Are they only making kids trans, these predators? Are they making them gay too?
Underpants Gnomes still makes me laugh. That's one of my favorite gags.
I think teens using Botox is far more concerning than kids knowing it's important to look after and care for their skin. Again, it's all about getting the balance between keeping healthy skin and the scary extremes of beauty obsession
Claiming diverse TV would accumulate more viewers is like saying niche food sells more than mainstream fast food. What sells most is what's easily consumed and appeals to the masses. This tends to correlate with lower quality and less diversity, since formulas are used to maximise appeal at the lowest possible cost, which in the end makes all products very similar to each other.
As long as you can direct a generative AI to produce something "in the style of " then the "drop in the ocean" argument falls apart.
Stealing one person's wallet is a drop in the ocean, but you would still be punished by a court for it.
Maybe the argument becomes more palatable if it's about all jobs, not just the so-called creative sector... you can't be happy with self-service tills and AI bots replacing call centres and then go "wait, don't take the upper-middle-class jobs"
The comment about AI not understanding WWI because it never read poetry on the topic reminds me of the great French movie "Alphaville". How to undermine the technological system underpinning the system? Perhaps... poetry... Time for an update, perhaps
"You have to pay"... permission and payment are not the same thing.
I think for the most part I completely disagree with Richard about AI (which is unusual for me). I have an interesting question: if Richard stands up on stage, or says on this podcast, 'Like a great author once said, "It's only after we've lost everything that we're free to do anything"', and then goes on to talk about his relationship to that quote, is he not profiting off and stealing someone else's work?
Every time a professor, lecturer, or teaching doctor passes on information that they have learned but that belongs to someone else, are they not stealing it?
If we were to compare an AI trained on the works of just one author to a person who has studied the works of the same author and ask them both a question (whether we have paid them to answer or they have answered freely), aren't they both stealing if one is?
There are only two issues I can see here: has the copy of the literature (or whatever medium) been paid for (i.e. not pirated), and does the medium have a commercial-licence version (i.e. I know some non-fiction is printed with the explicit purpose of the reader passing it on indirectly, usually for educational purposes, and these copies cost more).
Just because technology can read faster and make better connections than a person, I don't believe this fundamentally changes any morals. At most I think an AI should have to pay a marked-up price to read/watch etc. the content - it certainly shouldn't be free and/or pirated - but other than that I think it's pretty fair game.
No, because referring to a small excerpt from a text, in this example, is legal under copyright's fair use terms. AI scraping isn't designed to onboard small parts; it has to have the whole work in order to analyse it.
@NotThatOneThisOne I'm semi-familiar with fair usage. So is it fair usage to quote different excerpts on multiple occasions? Where does frequency come into fair usage? Because again, an AI can just do it quicker; the morals don't seem any different.
I do agree that if you asked an AI to recite a work and it gave you the whole work verbatim, that would be morally wrong. But if I asked my professor to recite an entire poem from memory, how is that any different than an AI doing so?
The show has become uninteresting because Richard and Marina are too similar in their backgrounds - both knowledge and political leanings. In this case, both woefully misunderstand how llms function, and the basics of copyright law, but they just reinforce each other and mislead their audience.
Reminds me of the days of early Hip-Hop
Emily In Paris is exactly as diverse as 1993's Mighty Morphin' Power Rangers.
I struggle to believe readers want to read AI books. I think it should be the bare minimum to at least have these things clearly marked on storefronts.
Don't call it AI, call it by its name: Grand Theft Autocomplete
Blame the parents, not the companies, and definitely not the children. If their parents taught them that those cultures generally eat well and get fresh air and sunlight regularly, maybe they would do that?
Is it me or is the audio panning all around the place?
"Kazakhstan or Texas" 😅
Diversity on tv, Jesus that's an overlooked issue if ever there was one
You didn't even talk about the passing of Suchir Balaji of OpenAI
The skincare issue is worrying. If parents will pay for these goods for children, despite them not needing them, then that is part of the issue. However, the level of influencing through social media is incredibly intense, so it's no wonder children want these goods - it's like 80s capitalism x 100
Baroness Kidron!!? What even is England 😅
AI need to get in the sea. Everyone who works in AI needs to get in the sea.
It is now our sworn duty to fill the Internet with as much crap as possible to ensure the scrapers are learning crap.
1. How about having a knowledgeable AI expert in this discussion? 2. There are large bodies of music, writing, images, etc., that have been licensed for all manner of purposes. Hard to unring that bell. 3. Richard needs to stop asserting, without evidence, what was stolen or how AI can be applied.
Say what you like about Zoflora but, if you have a pet, it'll get the smells out of the kitchen linoleum.
AI doesn't read, it ingests
North West LMFAO! 😂😂😂
If an author reads a book and then writes their own book based on what they have learned, how is that any different to AI?
I could in theory walk into a library, read every book on the shelf, take all that knowledge and write my own book. Surely AI is doing the same? Unless the work AI produces is exactly the same as an existing book, it's not copyright infringement, I would have thought 😅
As a thought exercise sure that's true, but things have worked out so far because a human cannot possibly do that. Now that AI models have made that a reality it's perfectly reasonable to work out parameters on how that is handled without it being an automatic free pass.
I think Rich should be more scared than he is?
I have an argument in favour of AI. There's talk of these publishing and movie companies allowing AI companies to use their work for the AI - but that's what already happens with humans. We consume everything, and make something new out of it.
What's that saying about great artists stealing?
One thing about the AI copyright debate is the ridiculous length of copyright. Every time Disney IP was about to enter the public domain, the law was changed to extend the duration. If copyright terms were reduced, increasing the amount of work in the public domain, there'd be less demand from tech companies to steal.
I don’t think creatives are happy about AI. I think people who are looking to profit from creativity are. AI doesn’t help someone be creative or have an original thought in any way.
AI appeals to people who want a shortcut to creativity and capitalists
Large language models are the largest infringement of the copyright protections of artists and other creators since copyright laws were established to prevent this type of creative theft.
My Tory MP doesn't care whether we live or die, never mind whether we fall prey to AI....
Eye opening
I rarely disagree with Richard and I hesitate to do so now. But I think he has the wrong concept in his head about how LLMs work. They are not copying or emulating works. They are analysing them semantically. They are effectively learning English at an extraordinarily detailed level by reading everything possible and plotting the relationships between the words in a hugely multi-dimensional vector space. So they are learning English, what words mean, and how they relate to each other in exactly the same way that all humans and authors learned their language and their craft. If you think they are plagiarising, then we have all plagiarised every book we ever read and learned from.
The money is interested in AI; creatives are not. If you go online and look at the dialogue, creatives who understand the tech are scared, especially small independent producers who don't have the money to pursue and sue big tech companies that are stealing their data, be that the written word, images, or now even 3D sculpture and video. The idea that AI is going to be used to empower the workforce, and that tech bros who don't pay taxes as they should will, out of the goodness of their hearts, pay universal basic income, is preposterous. This is a fundamental fight over who owns our own data and our own intellectual property, and whether we have a say in how that is used. It's also very short-sighted of the government, because the creative industry is a big industry in the UK, and if you go out of your way to destroy it - including the little guys, gals, and non-gender-specific peoples - then who's going to remain to pay the taxes? We've already seen record levels of unemployment and this is only going to add to it. But fundamentally it's not even about whether people are going to lose their jobs; it's about the permission that we give or don't give, and whether we have the right to withhold that permission. And this from the industry that told us that copying DVDs was bad - and now they're doing exactly the same thing, but it's OK because of the power and the money.
COPYRIGHT and AI: To begin with, the Lords are right - if AI steals copyrighted material, it is no different from someone else doing it. (Not that I think AI is a someone!) But I am confused about what that stealing is. The phrase which is relevant here is Teaching Materials. In essence, what is the difference between an English professor tasking their students with reading written works to understand style and construction, and a programmer tasking their LLM to do the same? One difference is obvious - the student would need to get hold of the works somehow. This could be buying them first- or second-hand, borrowing them from a friend, or borrowing from a library. AI doesn't have friends, so the LLM needs to do one of the others.
When the student buys and reads a book, the author earns a single royalty. There is nothing in the licence granted by buying the book (buying does not mean you now own the copyright) that says you can or can't learn from it, or that you cannot remember it for future use, or retain it on your bookshelf for future use. And if that future use is being "inspired" by it, that is covered simply by its not being referred to. However, you cannot reprint the book or republish it in any form (including recording it as an audiobook). You need a separate licence for that, which would normally include a payment.
And here is the confusion - how do you translate that into a mechanism for an LLM? When is the LLM just learning from source material, and when is it borrowing from source material?
This is where agreements with publishers make sense. The company training the LLM essentially sets up some kind of library agreement. This is also where I think, previously, they have DEFINITELY stolen - they have had access to books without a library card. They have broken into the library.
But having learnt from the material, how do you stop them plagiarising? Or how do you make sure they pay when they quote and that is not covered by a public interest argument? (My thought on the latter is that they don’t have that option in the first place. Many internet creators use and abuse that idea to breaking point already).
I also agree with Stephen Fry. I have been banging on about this forever when it comes to AI narration - there is NOTHING in how AI works or is evolving which means it “understands” an idea. It is a machine that can assimilate and regurgitate knowledge in a consumable form, but it has no sense of understanding. It might be able to impersonate Marina’s writing style, but any understanding of the subject is in the prompting written by Richard, or in knowledge gleaned from sources. It is not in the LLM itself.
That is clear enough, but I don't think it is widely understood. So a company can set up a system where AI will act as an editor for your book, but they will omit to mention that their program won't understand your book in any way at all. And authors will fall for it and be ripped off. I would rather read a self-published book that is full of spelling mistakes and grammatical holes but is the honest work of a human being, than something that has been passed through a system with no human oversight at all. I use grammar checkers all the time, but it is ME who chooses to accept each and every suggestion. It is not automated and, for me, it never will be. I like a human to be there for every word, and for that human to be me.
As someone who works in this space: your analogy of learning from material versus borrowing from it actually maps onto the dichotomy of training the model and prompting it.
When these models are trained, they have an architecture of billions of parameters that can be tuned to fulfil some task. In the first instance their task (called pretraining, the P in GPT) is simply to predict the next word (it's actually a "token", but I'm simplifying to "word") in a sequence. What this actually does is instil a probability distribution over what the next word will be, given the context of the previous words. Once we have trained a model to do that, we can go on to "fine-tune" it on more specific tasks, for example training it to output text in the style of William Shakespeare. In both of these training regimes, the internal parameters of the model are tweaked so that the probability distribution for outputting a particular word changes.
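The "predict the next word" objective can be shown in its simplest possible form with bigram counting over a tiny invented corpus. Real LLMs condition on long contexts with billions of tuned parameters; the counting here captures only the statistical intuition.

```python
from collections import Counter, defaultdict

# Tiny invented corpus, purely for illustration.
corpus = "the cat sat on the mat and the cat slept".split()

# Count bigrams: how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_distribution(word):
    """Probability distribution over the word that follows `word`."""
    c = counts[word]
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

print(next_word_distribution("the"))  # "cat" is twice as likely as "mat"
```

Pretraining a transformer amounts to learning a far richer version of this conditional distribution, with the whole preceding context rather than a single previous word.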
When we prompt the model, however, all of those parameters are frozen, and what we're instead doing is activating them to predict the next token. In the background there will be some formatting to prepend "User" and "Assistant" before each input and output, so the model takes that into consideration as per its training data. The probability distribution itself stays fixed, and provided the sampling parameters are set appropriately, like "temperature=0", the output is deterministic.
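The role of the temperature setting mentioned above can be sketched with a plain softmax over toy logits. The logits are invented for illustration; a real model would produce them for every token in its vocabulary.

```python
import math

def next_token_probs(logits, temperature=1.0):
    """Softmax over logits scaled by temperature: lower temperature
    sharpens the distribution towards the most likely token."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_pick(logits):
    """The temperature=0 limit mentioned above: always take the argmax,
    which makes decoding deterministic."""
    return max(range(len(logits)), key=lambda i: logits[i])

logits = [2.0, 1.0, 0.5, 0.1]         # toy logits over a 4-token vocabulary
print(greedy_pick(logits))            # index of the most likely token
print(next_token_probs(logits, 0.5))  # sharper than temperature=1.0
```

This is why a temperature=0 prompt gives the same answer every time: decoding collapses to argmax, with no sampling involved.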
To me, if a model was trained on copyrighted materials, then the model developer should pay a licensing fee to have that material in their dataset, or remove it and retrain the model without it. Which brings me to the point that it's not necessarily the models we should be regulating; it's the data used to train them that should be regulated in some way.
If you want to learn more, 3blue1brown has an amazing set of videos on it: ruclips.net/video/aircAruvnKk/видео.html