Do you think we’ll find a concrete system for verifying real images? Better yet, do you think it matters if we can’t?
We'll need to get to a point where detecting and removing harmful fake images is good enough to deter bad actors from generating more.
We already have the best way to prove authenticity. Shoot on film, and if somebody doesn't believe you, show them the negatives 😂
Any solution that can be circumvented by... taking a screenshot is doomed
I thought the same at first. I think the point is to be able to show that an image is NOT edited. By taking a screenshot you could in turn not verify that the screenshot shows the real "thing," because it would lack the legitimacy metadata the original might have had. So an image without that metadata might be legit, but you can't prove it. Only the one with the metadata can be proven.
@@MisterSnick Ah thanks, that makes sense. But what if you take a picture of an AI-generated image? You just need a good screen in a dark room, I guess. Actually, you don't even need a good setup when you think about the quality of the garbage that circulates on fb/twitter/instagram... In my opinion, you'll need to rely on the source rather than the material itself. If the New York Times posts a picture, take it as is; if it's from one of those spam accounts that you can't seem to get rid of in your feed, always assume it's fake. The sources you trust will definitely have a harder time vetting info, that's for sure.
@@MisterSnick that's a good point, and I think that's a legit good solution. The only problem now is: what if the metadata itself is editable? How do we counter that?
@@NeroVingian40 Camera makers and software companies would have to sign the content of an image as it is at the moment of capture. You can still edit the data or the metadata, but that will invalidate the signature (sketched below).
@@bartaxyz I see.
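A minimal sketch of that capture-time signing, in Python with the `cryptography` package and an Ed25519 key; the plain concatenation of pixels and metadata here is an illustrative assumption, not the real C2PA manifest format:

```python
# Sketch of sign-at-capture; NOT the actual C2PA manifest format.
# Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real camera this private key would live in tamper-resistant hardware.
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()

def sign_capture(pixels: bytes, metadata: bytes) -> bytes:
    """Sign the pixels and metadata together at the moment of capture."""
    return camera_key.sign(pixels + metadata)

def is_untouched(pixels: bytes, metadata: bytes, sig: bytes) -> bool:
    """True only if neither the pixels nor the metadata changed since capture."""
    try:
        public_key.verify(sig, pixels + metadata)
        return True
    except InvalidSignature:
        return False

pixels, meta = b"...raw sensor bytes...", b"2024-01-01T12:00Z|f/2.8"
sig = sign_capture(pixels, meta)
print(is_untouched(pixels, meta, sig))         # True
print(is_untouched(pixels + b"!", meta, sig))  # False: any edit breaks it
```

Changing a single byte of either the image or the metadata flips the check to False, which is exactly the "invalidates the signature" behavior described in the reply above.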
it would be great if there were regulatory pressure to adopt such standards instead of just passing useless laws made by lawmakers whose knowledge of AI is basically: watched Terminator and started to freak out
Photoshop can be used to remove its own metadata. Phones that use AI tools in images will be unlikely to use C2PA as it might affect sales. I have little to no confidence that C2PA will do anything against the flood of AI generated/manipulated images.
Photoshop can remove metadata that it's allowed to remove, and C2PA would be separate from that. The initiative is more about image authenticity verification for news outlets, less about curbing every single piece of AI-generated content on social media.
@@palody_en-ja That will only work if news outlets only ever used pictures or footage from press photographers with C2PA equipment. They don't.
Sounds like a great business opportunity for an industry that sources, validates, curates and distributes information. I wish something like that existed...
Here’s me thinking the presenter is an AI the entire time…
please elaborate.
I also felt the same
That would be the ultimate plot twist. They should have done that to drive the point home
Exactly.
There are people in real life who sound like AI
Very 09:33 of you. Speaks to the whole "It's probably broken and I don't believe you" [-@reckless] of it all. So prescient. Although it's odd that that episode of the Vergecast is probably suppressed, in ways that speak to the "we love podcasts but will still actively oppose RSS" attitude, never learning the lessons of Google Plus. Did the check from Reddit clear?
Open-source communities are rarely willing to comply with large corporations, especially with something like fingerprinting & identification.
True. With good reason too
Well… now I have lived through both the beginning and the end of "pics or it didn't happen." 👴🏻
This is about as useless as an NFT and the privacy implications are horrendous.
I'm glad to see more discussion like this. I can't understand the need for near-instant AI-generated images on everyone's phone. What problem is it solving?
It's like asking everyone to stop burning fossil fuels to save the planet 😂
I like that 😂 dark - very dark
5:40 - I never do photo editing, so all my photos are original and authentic. I think it's fair to mark both human edits and AI edits as disingenuous. I believe altering even a single pixel of a photo should be flagged. They should do this with video files too. The original YouTube featured far more unedited videos, while they are now very difficult to come by. Also, I despise social media photo sharing like Instagram and Facebook. I would love to embrace the 90s internet again. Good times. The 90s were amazing.
How I explain it to older individuals,
“I can’t tell you what’s fake but I can tell you what’s real, but we all have to do it for it to make a difference.”
1:36 Let's see: so basically C2PA is a solution from the same tech companies that introduced this chaos in the first place. Got it 🙄
Firstly, people are going to get better at spotting AI images. AI either simplifies an image or it makes mistakes. The duck is an example of a simplified image, though even then the angle of the shadow doesn't match the other objects. The wooden object is an example of complexity errors, as its structure does not make sense. Secondly, cameras will get better. I doubt that even now an AI image can match the pixel complexity of a 60-megapixel image file. When movies first came out, people jumped out of their seats when the train came rushing toward them on the screen. Now that seems absurd.
What if I take a photo of an AI-generated image using a C2PA camera? That would look like an original image.
A C2PA camera would need to also capture depth information, like VR cameras do, and so the depth info of a screenshot is "completely flat" which would be the tell
I guess it'll come down to your reputation at that point.
@@palody_en-ja And then we are back to the way publishing worked before cameras existed. :) It worked for them, so why not for us?
Nah - it's about far more than the camera. Remember location also factors into it, as well as lens, depth, time/date, etc etc
It's interesting how simple greed and selfishness make innovation move faster than the speed of light while simultaneously grinding the building of safeguards down to slower than the speed of smell.
On the plus side, this explains why none of the historical cultural references in Star Trek went past the 21st century
Isn't that because modern movie pop culture "starts" in the late seventies, with some minor exceptions (Psycho etc.)?
Happy Monday everyone! 😅
It's Friday 😅
@@ssdi91536 found Dieter's burner account
Literally nothing would be private if this takes effect. Not to mention the metadata could always be forged to mislead someone, or even worse, used as legal evidence. On these fronts it has the same problems as any other format.
Remember NFTs?
What a time to be alive
I know right - I have to pinch myself. I didn't think I would be alive for any of this
You're never going to get all camera manufacturers on board with this. You'll never get all image editing software on board with this. A system that requires everyone to opt in won't work, because there will always be entities that won't opt in. We already live in a world in which you cannot trust any image you see. AI is a new problem which requires new and different solutions.
Did no one seriously think you could just screenshot an AI image and boom, metadata gone?
Can't anyone who wants to bypass the whole metadata-proof thing just use the snipping tool? Or take a screenshot and crop it?
The screenshot would not have the same metadata. So you could not verify that the image is unedited. So it might be legit or not. That is basically where we are right now
There's not a lot of metadata in a screenshot, so its credibility is severely lacking. You could fool social media, but reputable news sources would discredit it immediately.
@@palody_en-ja No one will be checking the metadata of the images they are viewing. They don't care.
Why does a Verge video look so amateur-ish?
Ugh, let's see where things go before going straight to a surveillance state.
What about the scenario of taking a screenshot of an AI-generated photo? Will it still work then?
Metadata can be edited 😂
Yes, but then the signature won’t match the checksum of the image and metadata.
There’s cryptographic signing.
Cryptographic signing, of course, relies on the security of the keys on the device. Just a matter of time until that goes the way of DVD encryption. Even if you revoked certain keys... this is going to be a niche market. (Press photographers?)
And then there's taking a photo of an AI image. Or transferring it directly into a hacked camera.
DRM is not going to save us.
So what if I take a screenshot of an image?
How about storing a hash of the camera original in the metadata and, where space permits, a full copy of the original image that could be extracted with appropriate software? That would be a tool to alleviate any question about the extent of manipulation.
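For the hash half of that suggestion, a toy sketch in Python; the `original_sha256` field name is invented here for illustration and is not part of any standard:

```python
# Sketch: stamp a hash of the camera original into the edited file's
# metadata, so anyone can later check whether the pixels still match.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def stamp_metadata(original: bytes, edited: bytes) -> dict:
    """Record hashes of both the camera original and the edited result."""
    return {
        "original_sha256": sha256_hex(original),
        "edited_sha256": sha256_hex(edited),
    }

def was_modified(image: bytes, meta: dict) -> bool:
    """True if this image no longer matches the camera original."""
    return sha256_hex(image) != meta["original_sha256"]

original = b"raw sensor bytes"
edited = original + b" + crop + color grade"
meta = stamp_metadata(original, edited)
print(was_modified(edited, meta))    # True: pixels changed after capture
print(was_modified(original, meta))  # False: matches the original
```

A bare hash only proves *that* the pixels changed, not by how much, which is why the commenter's second idea of also embedding the full original would be needed to show the extent of the manipulation.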
Are there privacy implications here? What if this becomes a mandatory standard? What if a journalist or citizen takes a photo and then it gets traced back to them? I'm concerned about what happened to Jamal Khashoggi…
Literally we're just going from Photoshop to AI, not a whole lot of difference; there are always going to be haters and fakes lol .. just make it a new sub-category and allow people to be creative with photos 😂
Great video, well presented and really interesting. Fantastic job.
Not everybody needs photo-generating software; it will only create chaos in society. It should be paid software, and whoever provides it has to make sure it labels every image it generates.
I find the attempt to enforce this rather naïve; I mean, it won't be long before AI is creating images that are indistinguishable from reality in every way.
Great video! 👍
From a production point of view, this video didn’t feel The Vergy at all
C2PA is great for lawsuits, but does nothing for misinformation.
And immediately after accomplishing this astronomical feat, all AI creation tools will be updated within an hour to automatically insert realistic fake metadata to circumvent all of that effort.
Well written video
Just add electronic signatures to the images, recording where they were generated. Instead of regulating camera hardware and whatnot, why not just regulate and scrutinize gen-AI hardware and software?
Maybe create an AI to act as a filter for other AI-generated content. I even got a name for it, Blackwall :p
This video could have been a podcast.
did somebody think about privacy?
I wanted to watch this but i can't take seriously a reporter who dresses like that for a video.
7:20 I should try this
Look for that extra finger 😂
Yea, and let's ignore the simple truth that this silly idea won't stop it.
Step 1) Generate AI image
Step 2) Get a high-res, high-quality, large display
Step 3) Use the C2PA camera to take a picture of the displayed image
Done, the end. Stupid laws can't fix huge issues, and sneaky people are smarter than you.
The biggest mistake the gov makes is thinking they are smarter than other people.
There are always smarter people...
It's a full process - the camera being the first step, not the last. You have camera, editing on camera, upload to device, edit on device, upload to website, edit on website, upload to social, edit on social, post. Every one of these would be included in the metadata, including date, time, lens, and edits made.
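That per-step history could be sketched as an append-only log where each entry covers the one before it, so rewriting or deleting a step is detectable. A toy illustration in Python, with field names (`tool`, `edit`, `prev`) invented here rather than taken from the real C2PA schema:

```python
# Toy provenance chain: every step hashes over the previous entry, so
# deleting, reordering, or rewriting any recorded step breaks the chain.
import hashlib, json, time

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_step(chain: list, tool: str, edit: str, image: bytes) -> None:
    """Record one step (capture, edit, upload, post...) in the history."""
    chain.append({
        "tool": tool,
        "edit": edit,
        "time": time.time(),
        "image_sha256": hashlib.sha256(image).hexdigest(),
        "prev": entry_hash(chain[-1]) if chain else None,
    })

def chain_is_intact(chain: list) -> bool:
    """Check that no recorded step was altered or removed."""
    return all(chain[i]["prev"] == entry_hash(chain[i - 1])
               for i in range(1, len(chain)))

history: list = []
append_step(history, "camera", "capture", b"raw")
append_step(history, "phone", "crop", b"raw-cropped")
append_step(history, "social", "resize", b"raw-cropped-small")
print(chain_is_intact(history))   # True
history[1]["edit"] = "no edits"   # tamper with the recorded history
print(chain_is_intact(history))   # False
```

In a real system each entry would also be cryptographically signed by the tool that made it, not just hashed; otherwise a forger could simply rebuild the whole chain after tampering.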
I’ll just take a screenshot and upscale with ai
Jess is living in the world of purely honest people. I have bad news for her.
The only real solution is getting off of the internet. Don’t get all your information from twitter. Don’t spend all your time talking to people on a keyboard. Go outside. The golden age of the internet is behind us and it’s just going to get worse and more stressful to use.
Everything revolves around the internet nowadays. We can’t really say the solution to this AI problem is to just ignore it.
I don't think ignoring it is a solution either, but I'm not sure how you put it back in the box. You can't tell people not to use it. I think it should never have happened, but it's also too late. Everything revolves around the internet because users have caved to it and companies have turned it into the ultimate cash cow. Everyone needs to do the hard work of deprioritizing it in their lives.
@@ColePerrine that’s true, but it’s also easier said than done. I’m not saying we shouldn’t be distancing ourselves from the internet, we absolutely should, but it’s also impossible to do. You will still need the internet to fact-check anything in the first place. Working hard for it doesn’t matter when it’s impossible to do.
@@ColePerrine not using the Internet is a privilege reserved for high-income countries and individuals. If people have barely enough to eat and their boss asks them to have WhatsApp & social media accounts, they'll likely comply. Even for a freelancer, in some countries, WhatsApp & social media are the only way to capture commissions and communicate with clients.
What is real?
Hopefully when Apple adds C2PA to iPhones everyone else will follow suit.
This was a bit more like a rant than an explainer 😉
Great video thanks!❤
C3PO ???
Some of them are hard to differentiate, but don't tell me there can't be an (AI) tool that recognizes the vast majority of super obvious ones, like the one at 9:11, and flags them.
❤❤❤
But I can download a local model from Hugging Face and choose not to add anything to the metadata 🤷 and be able to escape all of these solutions
It's already too late; we should stop discussing ways to prevent it and worry about how to live with this new reality without tearing society apart
this is meaningless and impractical
It's definitely a step in the right direction, but C2PA can be circumvented with a screenshot
there’s no such thing as real images
If I don't want to include that logged data in my images I shouldn't be forced to do so. And you shouldn't be allowed to claim that they are not real just because they lack that data.
Your voluntary system is broken from the start. And if you're trying to force it upon people you're the one that's broken.
She is a beautiful host 🥰
You know what sounds very close to C2PA?
NFTs!
NFTs can actually be a solution to this problem, or an implementation of the solution. Corporations coming together and creating a standard can easily be turned into creating smart contracts that issue and verify certifications.
they're literally nothing alike. one is a standard cryptographically signed tag at the end/start of the file, the other is an ownership ledger. how would Adobe proving they own this file prove it's authentic exactly?
@@cataclystp They both have a commonality: authenticity. Anything that isn't part of the contract address can be deemed AI-modified.
Of course, this is only an idea. A potential application of technology to solve a problem. You're right in that they are fundamentally different. However, there's no harm in exploring different avenues to prove legitimacy.