Deep Video Portraits - SIGGRAPH 2018

  • Published: 16 May 2018
  • H. Kim, P. Garrido, A. Tewari, W. Xu, J. Thies, M. Nießner, P. Pérez, C. Richardt, M. Zollhöfer, C. Theobalt, Deep Video Portraits, ACM Transactions on Graphics (SIGGRAPH 2018)
    We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulations of facial expressions only, we are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism in this rendering-to-video transfer is achieved by careful adversarial training, and as a result, we can create modified target videos that mimic the behavior of the synthetically-created input. In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network -- thus taking full control of the target. With the ability to freely recombine source and target parameters, we are able to demonstrate a large variety of video rewrite applications without explicitly modeling hair, body or background. For instance, we can reenact the full head using interactive user-controlled editing, and realize high-fidelity visual dubbing. To demonstrate the high quality of our output, we conduct an extensive series of experiments and evaluations, where for instance a user study shows that our video edits are hard to detect.

Comments • 235

  • @BlueyMcPhluey
    @BlueyMcPhluey 6 years ago +403

    Who's ready for our legal systems to become completely paralysed by this technology?

    • @1ucasvb
      @1ucasvb 6 years ago +7

      josh mcgee We're totally screwed.

    • @jonwise3419
      @jonwise3419 6 years ago +20

      If you mean in regards to evidence, then why would it? It wasn't paralyzed by photos. It won't be by videos or audio. Even if a re-enactment leaves no fingerprint, it only means that we'll have to digitally sign information like videos. Your security cam will simply digitally sign every clip it produces, with nearly zero CPU or memory overhead. Same with phones and any other recording device. If the video is fake and you provide it as evidence from your phone claiming that it's real, then you will be responsible for the false testimony.
      But in regards to synthesizing funny or pornographic videos of anybody without their consent, it will continue to cause a lot of drama, until eventually we give up and admit that anybody's face can be digitized as a digital copy and used in any way, and nobody can stop it, so the idea that people own rights to their looks is as idiotic as it is unenforceable.
      Btw, I like your face, I think I'll take it and lick it in VR... no homo. (In 30-50 years that's going to be "I'm going to put a couple of bots to crawl your social media and synthesize a predictive model of your responses, where AI will construct the best model of your personality and then bring it into consciousness so that I can do whatever I want to it in VR. Or I'll upload that model from my own mind. After all, my ability to detect a fake depends only on predictions inside my mind, so if an AI has access to my predictions about you, it doesn't need to fake the real you; it just needs to fake my predictions about you to make any deviation from the original undetectable to me.")
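
The per-clip signing flow sketched above (hash the whole clip once, then sign only the short digest, so the per-clip overhead is a few dozen bytes) can be illustrated as follows. This is a toy sketch, not any camera vendor's API: since Python's standard library ships no asymmetric signer, a keyed HMAC tag stands in for the device's real public-key signature, and the names `sign_clip` / `verify_clip` / `device_key` are invented for illustration.

```python
import hashlib
import hmac
import os

# A real camera would hold an asymmetric private key and publish the public
# key; here a symmetric HMAC over SHA-256 stands in for that signature.
# The flow is the same either way: hash the whole clip once, sign the digest.
device_key = os.urandom(32)  # would live in the camera's secure storage

def sign_clip(clip: bytes) -> bytes:
    """Hash the clip, then authenticate only the 32-byte digest."""
    digest = hashlib.sha256(clip).digest()
    return hmac.new(device_key, digest, hashlib.sha256).digest()

def verify_clip(clip: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_clip(clip), tag)

clip = b"\x00" * 1_000_000     # stand-in for a 1 MB video clip
tag = sign_clip(clip)          # 32 bytes of overhead per clip
assert verify_clip(clip, tag)
assert not verify_clip(clip + b"tampered frame", tag)
```

Hashing dominates the cost and runs at hundreds of MB/s on commodity hardware, which is the sense in which per-clip signing is nearly free.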

    • @amurgcodru
      @amurgcodru 6 years ago +19

      OK, mister genius who knows cryptography in and out. How do you prove a video is fake/real based on a digital signature? This would mean that every person who has a mobile phone would have to generate a public/private key pair AND keep it securely stored, AND the state and/or some other entity would need access to those keys to verify that it was you. That's a lot of overhead, and if people are as bad with PKI as they are with passwords, you're looking at a very big problem here, since this is the main issue: being able to prove video/audio fake or real. What happens with lost private keys? Who revokes them? How do you prove your innocence when your PKI and/or identity was stolen?
      Your assumption of zero impact on CPU or memory is based only on data for text-based digital HASHING, not cryptographic signatures. A video would have to be signed per frame and/or as a whole. If you then upload a video to, let's say, YouTube, a whole new process occurs where it's converted to another codec. This means that even a per-file hash/signature would be invalid. A per-frame one would be almost impossible if you convert/re-encode.
      There are other problems as well. Don't fool yourself: this is a very big issue, and no matter which new technical security measures are implemented in the future, it can and will cause MANY problems when it gets into the hands of the minority of very bad people.

    • @jonwise3419
      @jonwise3419 6 years ago +3

      Elixir Alchemist Blender Well, we can examine two different scenarios here: one where people present video evidence in court, and another where they present it somewhere on the Internet.
      The court-evidence scenario just requires each device having private keys stored on it. A person testifying will simply claim that this is their phone and they filmed it. The signature on the video will be proof that it was indeed filmed on that hardware, and they claim that it was they who filmed it. It's also likely that there will be several sources of the same thing happening in reality. If a street camera signs a video message, and 10 people sign that they saw the same thing, then it likely actually happened.
      In a more distant future, it's not improbable that we'll have small cameras on us running constantly anyway (for example, for AI personal assistants to see the world and help them assist us), so we might even be talking about a future where there are always many different recordings of the same thing from different sources.
      As for the Internet, your account already has an identity associated with it, so the site can sign material for you. In other words, if you post on a social media account and claim that it's you in a photo somewhere, the social media site signs the photo or video for you (signs that it's you who posted it). If several people who are in the photo post it, in the end there can be several signatures from different people who posted the same photo.
      Signatures don't have to be included in the original formats, of course. A photo or video can have a separate signature stored somewhere else (like a blockchain, a DB provided by that social media site, etc.).
      > Your assumption with zero impact on CPU or memory is only based on data of Text based digital HASHING not Cryptographic signatures.
      Because you would not *sign* every frame / segment. If a separate proof for each part of the video is not needed, then the whole video can just be signed. Otherwise, you can use Merkle proofs to allow cutting a part of the video out while still having a signature associated with that part.
      That last part can actually be accomplished with only one signature as well. You would *hash* every segment and sign only the Merkle root hash. You wouldn't even hash every frame, because people don't usually stream or cut just one frame of a video. Although you still can.
      The cost of storing a Merkle proof for a segment would be `proof size = log(n) * hash size`, where `n` is the number of segments. So, if you sign each 1 MB, then even for a 1 GB video file the overhead of each proof would be `512 bits * log2(1000 segments) = 512 * 10 = 5120 bits ≈ 640 bytes per 1 MB segment ≈ 0.06% size overhead`. Checking the proof would take `log2(1000) = 10` steps; in other words, simply running a hash function ten times - ridiculously cheap. For a smaller number of segments, the overhead is even lower.
      Also, you would not need to design a special format for this. You could include a file with the Merkle proofs and signature somewhere else. So, if somebody streams the original video, they can open a second stream and stream a proof of originality for every segment of the video, downloading the first 1 MB along with the first Merkle proof, or starting both streams from somewhere in the middle.
      Re-encoding is not a big problem. Firstly, for most video it won't matter, because nobody cares whether a video of a cat farting is real or not. If you're giving a video from your dashcam as evidence, then you will most likely give it directly to the court rather than upload it to YouTube.
      But for videos where it matters and that have to circulate on social media, a reputation can be attached. Firstly, you can just download the re-encoded video and sign it again, associating your reputation with it again. Automatically checking whether the original matches the encoded version would be possible, so you don't need to watch it again.
      Another scenario is services like YouTube signing the re-encoded video themselves, claiming that nothing content-wise changed from the original and including the original signature of the video. If the video is fake, then either the original is fake or YouTube falsely claimed that it simply re-encoded the video. Both claims are provable. If you trust that YouTube simply re-encodes videos, then you can treat it as the same (YouTube generates new Merkle proofs for each fragment of the video, but you can still trust them if you trust that YouTube simply re-encodes videos; there is no incentive for YouTube to cheat, because it's easily provable by an author, and one such case would destroy their reputation).
      Also, since cheating is provable for some of those things, it's easy to design cryptoeconomic protocols where re-encoding happens and nothing else. So you can have a decentralized streaming service that re-encodes while the original signature remains valid. But that's overkill.
      For the more distant future, if you really want to go overboard in making information easy to trust, you can actually make lying very hard. You simply use cryptoeconomics akin to prediction markets, but instead of predictions about future outcomes, you have people's hardware / personal assistant AIs / street cameras / dashcams betting on what really happened in reality. The more important an event is, the more eyes are on it, and the higher the probability that people are telling the truth (in this case, they put up collateral, like promising $1000 that they are telling the truth; nobody is incentivized to lie, because they would simply lose money, and if they are telling the truth, they are rewarded a small amount for giving their opinion to the network). The participation cost can be reduced as well. For example, if a camera was turned on May 5 in some location where an event happened, and there is a prediction market for that event on May 5 in that location, that camera might automatically participate and give testimony to earn a small fee, and it can put up a large collateral to show that it has no incentive to lie.
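
The Merkle scheme described in this thread (hash every segment, sign only the root, keep a log(n)-sized proof per segment) can be sketched in a few lines. This is a toy illustration under stated assumptions: short byte strings stand in for 1 MB segments, SHA-512 matches the 512-bit hash size in the estimate above, and duplicating the last node on odd levels is one common convention, not the only one.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-512: 64-byte (512-bit) digests, as in the estimate above."""
    return hashlib.sha512(data).digest()

def merkle_root(leaves):
    """Hash all segments, then pair-and-hash up to a single root.
    Only this one 64-byte root would need a signature."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # odd level: duplicate last node
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hash at each level from leaf `index` to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index + 1 if index % 2 == 0 else index - 1
        proof.append((level[sibling], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from one segment plus its log(n) sibling hashes."""
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

segments = [f"segment-{i}".encode() for i in range(1000)]  # stand-ins for 1 MB chunks
root = merkle_root(segments)
proof = merkle_proof(segments, 123)
assert verify(segments[123], proof, root)
assert not verify(b"tampered", proof, root)
print(len(proof))  # ceil(log2(1000)) = 10 sibling hashes per proof
```

With 1000 segments each proof is 10 sibling hashes of 64 bytes, i.e. 640 bytes per 1 MB segment, and verifying a segment means running the hash function ten times.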

    • @jonwise3419
      @jonwise3419 6 years ago +4

      Elixir Alchemist Blender Forgot to address your point about private key storage. People are and will be using private keys on their phones for things much more security-sensitive than simply claiming "hey, I filmed this video and I claim it's real". For example, running blockchain apps on their phones (the Status.im app for the Ethereum network). Also, some countries already give every citizen a private key inside their documents (e.g., Estonia has it, where people can do nearly everything, including voting via the Internet; they just plug in a card reader and have an ID card that signs things on its chip).
      If you lose your phone, simply sign a statement that the signature on your phone is no longer valid with whatever reputation-associated key you have (either your ID or a key associated with your social media).

  • @Peacepov
    @Peacepov 6 years ago +57

    Incredible! Now we need a system that can differentiate real and fake videos.

    • @aleksandersuur9475
      @aleksandersuur9475 6 years ago +13

      If you have a system that can find something fake about an image/video, then you can highlight it and modify it until it doesn't trigger "fake", and you simply get a better-quality fake that becomes indistinguishable from real.

    • @Peacepov
      @Peacepov 6 years ago +4

      aleksander suur Unfortunately your idea makes a lot of sense, but I'm hoping there's got to be a way around it.

    • @MucciciBandz
      @MucciciBandz 5 years ago

      If you read their research paper, they do adversarial training, meaning their method contains the "system that can differentiate the real and fake videos". So as the network gets more photo-realistic, the detector gets smarter as well. :)
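
The dynamic this comment refers to, a generator and a fake-detecting discriminator improving against each other, can be caricatured with a one-parameter "generator" and a 1-D logistic-regression "discriminator". This toy is in no way the paper's space-time network, just an illustration of the alternating training loop, with hand-derived gradients for the standard discriminator loss and the non-saturating generator loss.

```python
import math
import random

random.seed(0)

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Toy setup: "real" samples cluster around 5.0; the generator emits a single
# learnable value g; the discriminator D(x) = sigmoid(w*x + b) scores realness.
w, b = 0.1, 0.0   # discriminator parameters
g = 0.0           # generator "output", starts far from the real data
lr = 0.05

for step in range(2000):
    real = 5.0 + random.gauss(0.0, 0.1)
    fake = g

    # Discriminator step: push D(real) -> 1 and D(fake) -> 0
    # (gradient of -log D(real) - log(1 - D(fake))).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w -= lr * ((d_real - 1.0) * real + d_fake * fake)
    b -= lr * ((d_real - 1.0) + d_fake)

    # Generator step: push D(fake) -> 1 (gradient of -log D(fake)).
    d_fake = sigmoid(w * g + b)
    g -= lr * (d_fake - 1.0) * w

print(f"generator output after training: {g:.2f}")  # drifts toward the real cluster near 5.0
```

Whenever the generator lags behind the real data, the discriminator re-learns to separate the two, and its gradient then pulls the generator closer; that ratcheting is the "detector gets smarter as the fakes get better" effect.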

    • @snowflake6010
      @snowflake6010 5 years ago +1

      What do you do when you identify the fake? You remove it from social media, prevent its spread, and notify users they've been had. And that means... we need to add the ability to instantly block, across all social media, any particular video by order of... ? The government? Once we have the ability to 'Deep Remove' a video... any unflattering thing might suddenly become unavailable if it's politically inconvenient. The sh*t is going to get real when things get this fake.

    • @higy33
      @higy33 5 years ago

      We are building it www.deeptracelabs.com/

  • @suyac1774
    @suyac1774 5 years ago +63

    Who's here from muh boi Chills?

  • @federrr7
    @federrr7 6 years ago +97

    I had a bad feeling about this

  • @trnobles
    @trnobles 6 years ago +82

    I never thought about the possibility of changing the facial animation of dubbed movies; that's a great idea and would make watching dubbed movies a lot less irritating.

    • @ZoidbergForPresident
      @ZoidbergForPresident 6 years ago +2

      I disagree, it's even worse I'd say. But whatever, I always prefer original voicing anyway and watch it that way. :P

    • @TeisJM
      @TeisJM 6 years ago +1

      There's so much body language that won't match the face.

    • @robindegen2931
      @robindegen2931 6 years ago +1

      Why watch dubbed movies at all, though? I never understood them. What's wrong with subtitles?

    • @shortbuspimp
      @shortbuspimp 6 years ago

      Robin Degen I don't like subtitles. I'm watching a movie, arguably a mostly visual medium, that I'm constantly looking away from and missing out on. I'm not opposed to reading; I've read hundreds of books over my life. I just don't like having to look away from the action that I'm supposed to be seeing. To each their own.

    • @backyardcook42
      @backyardcook42 6 years ago +1

      Or just stop dubbing movies. Watching English movies, or any movie, in their native tongue is a great way to learn a new language. The "I don't like subtitles" argument doesn't hold up; you get used to it really quickly, and after a while you won't need the subtitles. This tech shouldn't exist; it's way too easy to abuse.

  • @joshuasamuel2042
    @joshuasamuel2042 5 years ago +27

    Just imagine if this technology fell into the wrong hands

    • @RealGubby
      @RealGubby 5 years ago +4

      Joshua Samuel It already has.

    • @11crysissnake19
      @11crysissnake19 5 years ago +3

      It was made by the wrong hands

    • @davehug
      @davehug 5 years ago

      In reality, any 3D modeler can do this.

  • @blueberry1c2
    @blueberry1c2 6 years ago +17

    This technology is amazing, and I seriously applaud your efforts in its creation. However, I am tinged with fear about how it will be abused by more extreme media sources and in the justice system.

    • @KatieGray1
      @KatieGray1 3 years ago +1

      It's already happening. Unfortunately, the people creating these things do not often think through the implications and who it will ultimately impact. So far, it's impacting a lot of women, so I don't know if there were no women involved in developing this technology or they also did not think about how it could be used. I suspect there are just not enough women in the room when these things are being created to say: hey, just because we can do this, we need to ask if we should. www.abc.net.au/news/2019-08-30/deepfake-revenge-porn-noelle-martin-story-of-image-based-abuse/11437774

  • @FTLNewsFeed
    @FTLNewsFeed 5 years ago

    I'm psyched to see this work make its way into dubbing. Sometimes you don't want to read subtitles and you'd rather have the dubbed voices coming out of the actors' faces.

  • @MattSayYay
    @MattSayYay 2 years ago

    Apparently Chills can't unsee this.

  • @dan_loeb
    @dan_loeb 5 years ago +2

    All of the generated content hits me hard in the uncanny valley, at least when they are in motion.

  • @tubelitrax
    @tubelitrax 6 years ago +1

    Astonishing! I'm speechless...

  • @Unreissued
    @Unreissued 6 years ago

    The "nearest neighbour retrieval" thing was especially clever. Fantastic stuff.

  • @AlexanderSama
    @AlexanderSama 6 years ago +4

    Just thinking about how destructive a few seconds of video of a president talking could be on a social network gives me goosebumps. Our current society *is not* prepared to accept that no digital media is 100% reliable.

  • @TheTrumanZoo
    @TheTrumanZoo 6 years ago

    You could use a second pass, or a third pass... the output fed into a new input signal, creating a second, even harder-to-spot output. If the initial output is used for another pass with fewer differences, it could reduce a lot of artifacts.

  • @jamesbarnes1496
    @jamesbarnes1496 6 years ago

    Just simply amazing

  • @shango12b
    @shango12b 6 years ago +34

    Just because you can, doesn't mean you should.

  • @grzesiekmazur7711
    @grzesiekmazur7711 6 years ago +4

    The future is now, old man.

  • @sychedelix
    @sychedelix 6 years ago

    This is awesome!

  • @SRAKA-9000
    @SRAKA-9000 6 years ago +128

    We'll see a lot of porn made with this technology

  • @BakuTheLaw
    @BakuTheLaw 6 years ago

    It's time to get framed. Thank you!

  • @piotrkakol1992
    @piotrkakol1992 6 years ago

    It's awesome that they made this technology public. It makes a huge difference if everyone can use it rather than if it could only be used by governments. People who think they made a wrong decision by developing this technology don't understand that it would have been made sooner or later, and if the first people to develop it had malicious intents, it could have had catastrophic results. By realizing this technology is here, we can improve our future decisions.

  • @mattsponholz8350
    @mattsponholz8350 6 years ago

    Seriously impressive technology. You should all be very proud of yourselves!
    As with all things powerful, this has the potential for bad, but also the potential for good! A lot of good. Well done :)

  • @alexandrepv
    @alexandrepv 6 years ago +6

    Please tell me you guys have the source code on GitHub. And where can I get the PDF of your paper? Please! :D

  • @donnell760
    @donnell760 6 years ago

    Amazing!

  • @MattGDreal
    @MattGDreal 5 years ago

    This is awesome technology; it's amazing. I've never seen anything like it except in one app before, "FaceRig", but this is truly magnificent.

  • @leetae-kyoung1084
    @leetae-kyoung1084 6 years ago

    Awesome!!!! So cool!!! Wow!!!

  • @iLikeTheUDK
    @iLikeTheUDK 6 years ago +1

    I was about to ask where I could get a download of the code or an executable, but then I realised what that could lead to...

  • @JohannSuarez
    @JohannSuarez 5 years ago

    Interesting, but also incredibly terrifying.

  • @kevincozens6837
    @kevincozens6837 5 years ago

    This is amazing. It did miss one quick glance-to-the-right eye movement at 1:45.

  • @nilspin
    @nilspin 6 years ago

    Niessner lab rocks! I hope to do PhD there someday :)

  • @micocoleman1619
    @micocoleman1619 5 years ago

    Seems pretty cool.

  • @RobinCawthorne
    @RobinCawthorne 6 years ago

    This is truly revolutionary. Does this conversion happen in real time?

  • @SinuousGrace
    @SinuousGrace 5 years ago +5

    If you can destroy somebody with, essentially, one word, how long before AI is used to make somebody say something that they never actually said in order to destroy that person?

    • @loudvoice5903
      @loudvoice5903 5 years ago

      IT HAS ALREADY BEEN IN PROGRESS FOR A LONG, LONG TIME!

  • @JorgeGamaliel
    @JorgeGamaliel 6 years ago +4

    Awesome!! There are a lot of enormous implications for this super-technology, for example in security, intelligence, and fake news. Generative adversarial networks: a game of imitation and perfect indistinguishability.

    • @thor2070
      @thor2070 6 years ago +3

      Framing people!

  • @bakhtikian
    @bakhtikian 6 years ago

    How is the book cover in the background recovered at 2:03?

  • @167195807
    @167195807 5 years ago

    What is the program?

  • @lucasvca
    @lucasvca 6 years ago

    GOD IS GREATER

  • @babyjesuslovesme1219
    @babyjesuslovesme1219 5 years ago +1

    Scary but genius

  • @yuzzo92
    @yuzzo92 6 years ago +2

    This is an amazing technology, but I can't help feeling that it's going to be used more often maliciously than not.

  • @doctorscarf8958
    @doctorscarf8958 5 years ago

    With this I can become Masahiro Sakurai!

  • @jeffhalmos7981
    @jeffhalmos7981 6 years ago +68

    The nuclear bomb of software: Great technology; never want to see it used.

  • @theoriginalgoogle3615
    @theoriginalgoogle3615 4 years ago

    If a person had a notable feature on their face (scar, mole, birthmark, etc.), could that possibly fail to be masked? Can a result show features from both people in a video? Cheers

  • @smoquart
    @smoquart 6 years ago

    Will there be any code published?

  • @deaultusername
    @deaultusername 6 years ago

    Definitely getting there; more than good enough as it is to mess with YouTubers.

  • @simoncarlile5190
    @simoncarlile5190 5 years ago

    Thinking about this reminds me of how time travel is described by the makers of the movie Primer: it's too important to use just to make money, but it's too dangerous to be used for anything else.

  • @jettthewolf887
    @jettthewolf887 5 years ago

    Not going to lie, this scares the shit out of me.

  • @darkknight4353
    @darkknight4353 5 years ago

    Where can I get the software? Is it public yet?

  • @chrisbraddock9167
    @chrisbraddock9167 3 years ago

    What good could this bring to society that would outweigh the obvious evil?

  • @fccheung1798
    @fccheung1798 6 years ago

    This can wage wars...

  • @ElmarVeerman
    @ElmarVeerman 6 years ago

    We need a new class of videos: certified unedited video. Can tech companies provide this feature?

  • @ramanadk
    @ramanadk 4 years ago

    Where do we get the code for the above video?

  • @fleetwoodray
    @fleetwoodray 6 years ago

    Kind of reminds me of the movie Simone. No one is safe from exploitation now.

  •  6 years ago

    I think you should also work on a tool which can recognize whether a video was created artificially or not. Otherwise I am quite nervous about future (fake) news manipulation.

  • @azra31
    @azra31 6 years ago

    Why is this a good thing?

  • @SamJohnsonking
    @SamJohnsonking 6 years ago

    Where is the GitHub code?

  • @leahnicole4443
    @leahnicole4443 5 years ago

    Scary AF.

  • @sebhch244
    @sebhch244 5 years ago +1

    The dark side of 3D, VFX, and motion graphics.

  • @BorisMitrovicG
    @BorisMitrovicG 6 years ago +1

    Can't wait for the paper to be out!

  • @StevenFox80
    @StevenFox80 6 years ago

    This is spooky.... o.O

  • @SuperGranock
    @SuperGranock 4 years ago

    My question is: who can't tell the difference? It's in the eyes and expressions.

  • @ey5644
    @ey5644 5 years ago

    Where can I buy this?

  • @johnsherfey3675
    @johnsherfey3675 6 years ago

    Time to make the ultimate YTP?

  • @joe-rivera
    @joe-rivera 6 years ago +31

    Terrible things could come of this line of work. I hope that (unlike many in the technology field) you are considering consequences and building in fail-safes for detecting fakes. Advancement of technology can’t be the only goal.

    • @Yui714
      @Yui714 6 years ago +6

      It's all about advancing. Our species never plans for anything. We just adapt to the changes we make. Even something as huge as nations aren't planned but reactionary, making up how they operate as we go along. We're not planners. Climate change is our weakness because it requires planning, so what we're going to do as a species is wait it out, hope to fix it through technological advancements, and if that doesn't happen, come up with a plan B like sending people to another planet. Point being, we don't plan even if we know our species will die if we don't. We're not planners, and this tech is cool.

    • @STie95
      @STie95 6 years ago

      The Dead Past by Asimov comes to mind.

    • @aleksandersuur9475
      @aleksandersuur9475 6 years ago

      Doesn't work: if they can make a system with failsafes included, then someone else can make the same system minus the failsafes. In fact, people are doing it all over the place. Check out "deepfakes"; that's how this type of AI work really got started. Mostly it's used for patching celebrity faces into porn videos.

  • @pickcomb332
    @pickcomb332 6 years ago

    Looks like celluloid is gonna make a comeback

  • @marcusk.6223
    @marcusk.6223 6 years ago +1

    Please sell this technology to video game developers! This would make great games!

  • @Yawopy
    @Yawopy 5 years ago

    Who's here bc Chills sent ya? Me!

  • @nightshade2541
    @nightshade2541 6 years ago +2

    oh well
    thanks for all the fish

  • @MrCalhoun556
    @MrCalhoun556 6 years ago +3

    Welcome to the Death of Reality.

  • @simoncarlile5190
    @simoncarlile5190 6 years ago

    To quote Patton Oswalt: "Science, all about coulda, not shoulda."

  • @PrakharShukla
    @PrakharShukla 6 years ago

    Reference paper if anyone needs it: arxiv.org/pdf/1805.11714.pdf

  • @Madison__
    @Madison__ 6 years ago +5

    Imagine being an artist and using this tech to figure out head angles by using a real model

    • @renookami4651
      @renookami4651 6 years ago +2

      You'll also replicate the errors of the computer-generated images, just like someone learning with a 3D model or anything else. But at least this base looks accurate enough; as long as you stay close to the original pose, you would probably get good results. I don't know about trying a full profile, making the model look at something behind their shoulder, or other big changes. I guess the further you modify, the more errors appear.

    • @longliverocknroll5
      @longliverocknroll5 6 years ago +1

      Ren Ookami Still, look at the difference in technology between the three different studies over that small time-frame. It conceivably won't be long before they work out individual uncanny valley-esque aesthetics.

  • @wowepic2256
    @wowepic2256 4 years ago

    GitHub?

  • @yumazster
    @yumazster 5 years ago

    This is impressive technology. The shitstorm it is going to cause will be equally impressive. 1984 full blast...

  • @shawnwooster7190
    @shawnwooster7190 6 years ago +4

    More dangerous than nukes. Great. Welcome to the New Age of Hyper-Anxiety.

  • @lilyzwennis1195
    @lilyzwennis1195 6 years ago +1

    This is revolutionary and will make for epic realistic gaming. But no, someone should burn it. Burn it with fire!

  • @TheUmbrella1976
    @TheUmbrella1976 6 years ago

    And outside, people protest against genome manipulation in crops because it's 'dangerous'.

  • @MagicBoterham
    @MagicBoterham 6 years ago

    Garrido et al. getting owned.

  • @DouglasDuhaime
    @DouglasDuhaime 6 years ago

    Code or it didn't happen

  • @Sychonut
    @Sychonut 6 years ago

    4:23 Stroke Simulator

  • @Santins12
    @Santins12 5 years ago +3

    I can only imagine bad applications of this technology...

  • @whilebeingjezebel
    @whilebeingjezebel 6 years ago

    #lazyeye

  • @c1vlad
    @c1vlad 6 years ago

    Freaking awesome technology... but I have a bad feeling about it.

  • @UrzaMaior
    @UrzaMaior 6 years ago +1

    Aaaaand we're doomed.

  • @808GT
    @808GT 6 years ago +1

    We are fucked.

  • @hausmaus5698
    @hausmaus5698 6 years ago

    And bye, voice actors.

  • @snowflake6010
    @snowflake6010 5 years ago

    We've built an engine that can swing any election. Free speech with high viewer counts will have to be constrained. I imagine this is what is really behind the EU wanting to establish their copyright check system [so any uploaded video can be instantly cancelled and deepfakes can be taken down quickly]. I'd imagine all U.S. social media companies are making sure the capability to take something down quickly is in place. We're going to have to encode video in a way that leaves an edit trail. Blockchain will somehow help there. I think.

  • @Namelocms
    @Namelocms 6 years ago +1

    What is this used for?

    • @jameslucas5590
      @jameslucas5590 6 years ago +5

      You could use it for evil, or you could use it to dub foreign movies into a different language and make it seamless.

    • @murraymacdonald4959
      @murraymacdonald4959 6 years ago +1

      James Dean, Frank Sinatra, Elvis, Marilyn, Princess Leia...

    • @longliverocknroll5
      @longliverocknroll5 6 years ago +2

      James Lucas That's such a ridiculously small scope of what it could be used for in terms of "not evil" applications lol.

  • @Zeeeeeek
    @Zeeeeeek 6 years ago +1

    the FBI wants to know your location

  • @kawabungadad8945
    @kawabungadad8945 6 years ago +1

    This is going to lead to the start of WWIII.

  • @goliathfox
    @goliathfox 5 years ago

    Fortnite players will love this!

  • @mstyle2006
    @mstyle2006 6 years ago +2

    Imagine how our next generations will be mass-controlled by this technology.

    • @snowflake6010
      @snowflake6010 5 years ago

      lol. Yeah. Them. Coming soon to an election near you sir!

  • @winsomehax
    @winsomehax 6 years ago +4

    I've seen some crude versions of this type of thing, and I suppose I knew better ones were coming... but Jesus... it suddenly hits you what's coming down the pipe at us. Prepare yourself.

  • @mari_hase
    @mari_hase 6 years ago

    While impressive, you can definitely distinguish the real actor from the fake.

  • @alejandrodaguy5732
    @alejandrodaguy5732 5 years ago

    Chris Hansen is somewhere plotting.

  • @deepfakescoverychannel6710
    @deepfakescoverychannel6710 3 years ago

    That is a fake paper without the code.

  • @sn-zd8ct
    @sn-zd8ct 6 years ago +1

    Damn this is scary

  • @loudvoice5903
    @loudvoice5903 5 years ago

    IT CAN NOT BE FAKER THAN THAT!!! lol!

  • @dangraphic
    @dangraphic 6 years ago

    I'm sorry but this is fucking scary.

  • @jamespatches4553
    @jamespatches4553 6 years ago +7

    I mean, that's cool, but how exactly does this technology help society?

    • @FrankAtanassow
      @FrankAtanassow 6 years ago +16

      For one thing, this sort of research will inevitably be done in the private and government sectors for nefarious purposes. By also doing it in the public sector, we can see what sort of image/video manipulation is possible or plausible and become more skeptical and critical of what others might present as evidence in bad faith. In short, we can see how others might try to trick us. If this sort of research isn't done in a transparent manner, then we become more gullible as covert technology subverts our so-called common sense.

    • @jonwise3419
      @jonwise3419 6 years ago +1

      Well, at the very least, like any other CG innovation, it advances entertainment. Imagine, just as anybody can write a book, anybody in the future being able to create a movie thanks to advancements in CG tools.

    • @leecaste
      @leecaste 6 years ago +3

      SIGGRAPH papers usually aim to improve VFX, not society.

    • @FrancoisZard
      @FrancoisZard 6 years ago +6

      Soon enough it may replace human actors, who are grossly overpaid, given god-like powers, and spoiled like brats. Imagine never having to hear about the Kardashians or having to deal with Justin Bieber and his shit. That's a huge service to society IMHO.

    • @starrychloe
      @starrychloe 6 years ago +1

      Cheaper movie tickets. You can fire all the overpaid actors and just reuse John Wayne and Marlon Brando and Marilyn Monroe.

  • @brookshunt928
    @brookshunt928 6 years ago

    You will no longer be able to know what is real and what is fake.