AI Generated Videos Are Getting Out of Hand

  • Published: Sep 28, 2024

Comments • 209

  • @bycloudAI
    @bycloudAI a year ago +64

    This video took too many revisions, hope you enjoy it, lmk if I missed anything too!
    PS: Break up with basic browsers! Get Opera GX here: operagx.gg/bycloud3

    • @ikcikor3670
      @ikcikor3670 a year ago

      I think you kept using "predecessor" when you meant "successor"

    • @kusog3
      @kusog3 a year ago +3

      let me add regarding roop, simswap, etc... they all kinda work using the same underlying model, which is from the insightface project. While the successor of roop might be able to continue the project, sadly the only version actually available to the public is the 128-resolution model. There is a higher-resolution model, but the developers refused to release it for a variety of reasons.

    • @tobilpcraft1486
      @tobilpcraft1486 a year ago

      @@kusog3 tbh there's no real point in releasing the higher-quality model since you already get better results with face upscalers like CodeFormer

    • @csehszlovakze
      @csehszlovakze a year ago +2

      shitty chinese browser, the spiritual successor is Vivaldi.

    • @DudeSoWin
      @DudeSoWin a year ago +1

      Why can't you guide it with text? Insert a line break or pipe in a 2nd prompt.

  • @anywallsocket
    @anywallsocket a year ago +178

    imagine falling in love with a celebrity of the future only for the system to glitch for one second and reveal the true monstrosity they are LOL

    • @DeniSaputta
      @DeniSaputta a year ago +10

      fake vtuber

    • @beansbeans96
      @beansbeans96 a year ago +61

      imagine falling in love with a celebrity.

    • @RiskyDramaUploads
      @RiskyDramaUploads a year ago +9

      Shrek

    • @maheshraju2044
      @maheshraju2044 a year ago +2

      Ostrich

    • @RiskyDramaUploads
      @RiskyDramaUploads a year ago +1

      @@maheshraju2044 What is ostrich?
      For those for whom looks are everything, things like this have happened before. Even young, good-looking Chinese streamers sometimes use software that changes their face shape, which glitches when their face moves off screen. And then there's this: "Chinese Vlogger Gets Exposed As A 58-Year-Old Woman After Her Beauty Filter Turns Off Mid-Stream"
      "It was all revealed when a beautifying video filter glitched mid-stream, exposing her real face. After the incident, it came to light that the streamer is actually a 58-year-old lady who just really enjoys playing Apex Legends."

  • @Xezem
    @Xezem a year ago +27

    There should be better AI for frame-interpolation software like Flowframes and Topaz; this would be a huge help tbh

  • @luciengrondin5802
    @luciengrondin5802 a year ago +47

    It seems to me that a new representation of video is needed, one that would enforce temporal consistency. That's why I think that, of all these methods, the one about "content deformation fields", CoDeF, is the most promising.

    • @anywallsocket
      @anywallsocket a year ago

      Naw you need to scrap basic NNs and use liquid nets instead

    • @shiccup
      @shiccup a year ago

      Lol i think i just figured out a great workflow for temporal consistency

    • @shiccup
      @shiccup a year ago +2

      For vid2vid I might make a tutorial, but essentially you just use the EbSynth utility + img2img, then use a reference ControlNet and TemporalNet, then put it into EbSynth and after that throw it in Flowframes

  • @Gaven7r
    @Gaven7r a year ago +64

    I can't imagine how hard it must be to keep up with so much stuff going on lately lol

    • @David.L291
      @David.L291 a year ago +1

      So what's going on then of late? LOL

    • @quarterpounderwithcheese3178
      @quarterpounderwithcheese3178 a year ago

      A bunch of proprietary techno babble every AI startup has trademarked and pretty much *nobody* understands

  • @passwordyeah729
    @passwordyeah729 11 months ago +4

    It's insane how fast AI is developing. This video is already slightly outdated... scary times we live in.

    • @danlock1
      @danlock1 8 months ago

      Why must you always reference sanity?

  • @alexcrowder1673
    @alexcrowder1673 8 months ago

    I like how at 3:08 he says "This one has the best generation quality" and then proceeds to show us the derpiest CGI lion I have EVER seen.

  • @aiadvantage
    @aiadvantage a year ago +18

    Absolutely loved this one. Great job!

  • @deemon710
    @deemon710 a year ago +7

    Hey btw, thanks so much for these latest-in-ai videos. It really helps to stay informed on what's out there and helps us be able to spot fake stuff.

  • @Entropy67
    @Entropy67 a year ago +9

    What the hell? Combine a couple of these with generative AI playing as a DM for D&D and you might have something crazy. Anyone starting a project let me know. Might be a very fun and open ended game.

  • @Chuck8541
    @Chuck8541 a year ago +13

    Things are moving SO fast.
    Also, hey dude. Can you put together a playlist, or even a paid Udemy course, to get those of us that are noobs up to speed with how to use this stuff... maybe from a creator/consumer standpoint? I'd love to get into AI and create things, but it seems there are dozens of models and methods. I don't know where to start. I only understand like 30% of the technical words you use. haha

    • @David.L291
      @David.L291 a year ago +1

      maybe could see what works best for you

  • @AjSmit1
    @AjSmit1 a year ago +2

    the first @bycloud video i saw was about 'is AI gon steal our jerbs? prob not' and i've been watching ever since. i appreciate the work you do to keep the rest of us in the loop

  • @viratponugoti7735
    @viratponugoti7735 a year ago +7

    "All these techniques are just here to assist ai video generation or just become a thirst trap"

  • @thedementiapodcast
    @thedementiapodcast 9 months ago +1

    What I've learned from using these tools almost daily is that the human brain very quickly becomes attuned to picking up the little details that betray AI generation.
    1. Start by looking at clothing. Coherence in clothing is currently near impossible. Jackets don't have zippers, have 10 buttons where there should be none, that kind of thing.
    2. Check objects in the background. I don't know anyone who bothers to create a LoRA for every single bg object, especially in bg scenes, so the coffee maker, the fridge, etc. are all going to look 'off brand'.
    3. Elements these tools aren't trained on are evidently missing: if you're a sneakerhead, you'll quickly spot that the Jordans have Converse soles, etc.
    4. Camera angles are all very boring. Anything dramatic with massive differences in proximity to the lens is going to need a LoRA.
    It's therefore no surprise that Runway's current commercial strategy is to partner with brands to push specific objects into scenes, so the coffee machine will be a 'Nespresso' machine, but expect this to be abused to the max (it's basically mandatory product placement).
    I think we'll see 'AI videos', but this is the Precambrian stage of it. We need tools to create scenes that aren't random but reflect the artist's vision (currently, light maps in Blender pushed through an img2img-type process are the best option).

  • @Aizunyan
    @Aizunyan a year ago +1

    13:41 it's called rotoscoping, from the 1880s

  • @MrJohnnyseven
    @MrJohnnyseven a year ago +3

    Wow, after 30 years online, what do we have... people watching crap AI videos that all look the same....

  • @e8error600
    @e8error600 a year ago +4

    It was cool shit at first, now its getting scary...

    • @patrickfoxchild2608
      @patrickfoxchild2608 a year ago +5

      It's already scary. This is just what the public has produced.

    • @tylerwalker492
      @tylerwalker492 a year ago +4

      @@patrickfoxchild2608 And we'll never find out exactly what governments will produce!

  • @yalmeme
    @yalmeme a year ago +1

    Hi guys. Are there any currently existing tools that can do img2img on real-time video, so I can use it for streaming?

  • @JDST
    @JDST a year ago +2

    "thank you, ice cream so good. yes yes yes gang gang gang"
    Such inspiring words. 😢😢😢

  • @AthenaKeatingThomas
    @AthenaKeatingThomas 11 months ago +1

    Wow, this was far more thorough than I expected it would be. Thanks for the information about HOW video generation works as well as the examples of some of the current tools!

  • @SianaGearz
    @SianaGearz a year ago

    I have recovered a "lost" music video, as in uploads exist but they're so low quality that you can't even tell what's happening due to a deinterlacing error, they're all from like 2006, and from what i know all the official data masters etc have been lost to a fire. The data i have recovered is a slightly blocky and noisy 4mbit MPEG2 made from what looks like a painfully well-worn Betacam at a TV studio when they were changing equipment. I'm trying to make it presentable, but so far upscaling has generated some creepy facial frames. Is there an AI workflow that i can feed a handful of high resolution images of the performer's face to have it restore it? Maybe first a pass that makes faces less bad even if inconsistent, and then reintroduce frame to frame coherence with simswap or the like?

  • @zrakonthekrakon494
    @zrakonthekrakon494 a year ago +1

    So many options with so much nuance and customizability, I hope the best methods continue to evolve into the widely used tech of the future instead of being phased out.

  • @Uthael_Kileanea
    @Uthael_Kileanea 11 months ago

    10:48 - I could hear:
    Dame da ne
    Dame yo
    Dame na no yo

  • @eloujtimereaver4504
    @eloujtimereaver4504 9 months ago

    Can we have links to some of your examples?
    I have not seen all of them, and cannot find some of them.

  • @guy_withglasses
    @guy_withglasses a year ago +3

    bro didn't link neuron activation

  • @Chuck8541
    @Chuck8541 a year ago

    lmao at the guy standing backwards on the surfboard.

  • @moahammad1mohammad
    @moahammad1mohammad a year ago

    Slightly disappointed by how many people fake the results of these AIs to make it seem like it was done entirely with simple first-pass prompting

  • @kamillatocha
    @kamillatocha a year ago +1

    it all boils down to who will make the first AI Video porn
    and soon porn stars will go on strike too

  • @walterhugolopezpinaya5641
    @walterhugolopezpinaya5641 a year ago

    Thanks for the great video on the current landscape of generative video methods! ^^

  • @NeXaSLvL
    @NeXaSLvL a year ago +1

    it's funny, technology used to help us create art, now we're using tools to assist the AI's video generation

  • @Rscapeextreme447
    @Rscapeextreme447 a year ago

    I think we should call category 3 “corridor video creation”

  • @MihajloVEnnovation
    @MihajloVEnnovation a year ago

    What are your opinions on Kaiber?

  • @dochotwheels2021
    @dochotwheels2021 a year ago +1

    I am trying to find an image-to-image generator that can turn my still pictures into cartoons/watercolors/low-poly/etc. Do you know of any program that does a good job? I use Midjourney but it never does it correctly; it's usually something totally different.

    • @finallyhaveausername5080
      @finallyhaveausername5080 a year ago

      Try searching for style-transfer programs rather than image2image generation; they tend to preserve more of the original image.

  • @lorenzoiotti
    @lorenzoiotti a year ago

    Is there something like sadtalker for videos? Wav2lip worked on videos too but from what I’ve seen sadtalker only does images

  • @TDMIdaho
    @TDMIdaho a year ago

    How do you not include deforum? Which is the best.

  • @darkezowsky
    @darkezowsky a year ago +1

    roop is dead, but roop-unleashed is alive and even better ;)

  • @zikwin
    @zikwin a year ago

    I tested almost all the mentioned techniques over the past few months, and nothing is missing as far as I know. Great video, it sums everything up, I like it

  • @humanharddrive1
    @humanharddrive1 a year ago

    the ice cream so good part gave me whiplash

  • @alonsomartinez9588
    @alonsomartinez9588 a year ago

    There is also Phenaki!

  • @jakelionlight3936
    @jakelionlight3936 a year ago

    the introduction of quantum computing solves any divergence; if all possible routes are taken at the same time, one route will be as close to perfect as possible... it will be indistinguishable from reality with zero lag... i imagine this is already being done if you're in the mile high club... we are definitely in the dark about a lot of things imo.

    • @obsidianjane4413
      @obsidianjane4413 a year ago

      Someone handwaved "quantum something". Take a drink.

  • @rem7502
    @rem7502 a year ago

    1:34 bro wtf was that sponsorship lmao😵

  • @blackterminal
    @blackterminal a year ago

    Would like Ai avatars to not loop hand movements but do more random movements.

  • @GameSmilexD
    @GameSmilexD a year ago

    "that is not a legitimate hoverboard (it's got wheeeeels)"

  • @pointandshootvideo
    @pointandshootvideo a year ago +1

    Thanks for this video! The current state of the art is very disappointing. I'm wondering if creating a 3D controlnet skeleton and then generating 30 fps images using Reallusion would move the technology forward. Thoughts?

  • @sotasearcher
    @sotasearcher a year ago +2

    you’re the MVP of this, keep it up 👏👏

  • @DigitalForest0
    @DigitalForest0 a year ago

    Thank you! i personally got the message by 0:38 that this video is not for me, so i didn't waste my time, THANK YOU!

  • @johnjohansson
    @johnjohansson a year ago

    What about zeroscope v3?

  • @Arewethereyet69
    @Arewethereyet69 a year ago

    thanks for the video. great channel by the way. just subscribed

  • @sneedtube
    @sneedtube a year ago

    I didn't quite get if there's a method to deepfake a live stream but I'm kinda re tar ded so I should probably give the video another rewatch

  • @patrickfoxchild2608
    @patrickfoxchild2608 a year ago +1

    hold the eff up, did anyone notice the Bud Light commercial it made had only women drinking it?

    • @tylerwalker492
      @tylerwalker492 a year ago +2

      Bud Light knows its new target demographic lmao

  • @mat_deuh
    @mat_deuh a year ago

    Thank you for this review :)

  • @MegaDixen
    @MegaDixen a year ago

    can't wait to get a new graphics card to play with this

  • @mishame156
    @mishame156 a year ago

    Cracked up at the video where they swapped Rogozin in for the "walking to the river" guy

  • @granodiorite9032
    @granodiorite9032 a year ago

    WHY DO WE KEEP CALLING IT "AI" WHEN ITS NOT AI???

  • @andrewdunbar828
    @andrewdunbar828 a year ago

    makes it looks

  • @renanmonteirobarbosa8129
    @renanmonteirobarbosa8129 a year ago

    give it 5 years, trust me bro!!!! 2 years for us to mathematically get it, the other 3 years for the porn/hentai community to perfect it

  • @ivangrebennikov799
    @ivangrebennikov799 a year ago

    MICHAEL?!?!

  • @Sam.Sung_
    @Sam.Sung_ a year ago +1

    They're all men.

  • @Daniel_F_RJM
    @Daniel_F_RJM 11 months ago

    Great Video. thanks

  • @Lancer95_305
    @Lancer95_305 a year ago

    No is a thirst trap my friend 🤣🤣🤣

  • @onlyyoucanstopevil9024
    @onlyyoucanstopevil9024 a year ago

    AWESOME

  • @Demial_Sparda
    @Demial_Sparda a year ago

    I'm scared for women's sexual worth. 😰

  • @Bruh-sp2bj
    @Bruh-sp2bj 8 months ago

    The future of porn is looking bright 💀

    • @Saber06
      @Saber06 6 months ago

      Of course, the lustful are going to take advantage

  • @ieye5608
    @ieye5608 a year ago

    That won't stop me, it doesn't matter if they are male or female :D

    • @-rate6326
      @-rate6326 a year ago

      It won't stop you from what?

  • @Baraborn
    @Baraborn a year ago

    Wow great video.

  • @BTMYYY
    @BTMYYY a year ago

    nice

  • @ALFTHADRADDAD
    @ALFTHADRADDAD a year ago

    SOLID

  • @timchikun
    @timchikun a year ago

    how can a video be a dude?

  • @moneqtemnome6678
    @moneqtemnome6678 a year ago +1

    i dont think videos have gender

  • @keenheat3335
    @keenheat3335 a year ago +39

    glad you feel better and are making AI videos again. Saw that post and thought maybe you gave up on AI entirely. I don't think the content is the issue but the packaging (thumbnail, title, etc.). The sales and marketing parts of a video matter a lot for views, sometimes even more than the video content itself.

  • @Wooraah
    @Wooraah a year ago +74

    Great overview, thanks. I think we all need to bear in mind that so many of these techniques are at their very earliest stage before writing them off as terrible. Like dial-up internet in 1998, the leaps being made are truly astounding, and it won't be long before these tools and techniques are being used consistently for commercial applications, for good or ill.

    • @EduRGB
      @EduRGB a year ago +8

      That profile picture tho, I have to re-install it again..."Your appointment to FEMA should be finalized within the week..."

    • @WrldOfVon
      @WrldOfVon a year ago +8

      Completely agreed, but for the most part, I'm seeing people praise these models more than hate them. The fact that they can produce such high quality outputs now compared to a year ago goes to show you how quickly things can change even in the next 6 months. I'm so excited! 😆

    • @3zzzTyle
      @3zzzTyle 10 months ago

      Deus Ex kept being right for 20 years and it'll be right about AI as well.

    • @thedementiapodcast
      @thedementiapodcast 9 months ago

      The problem is much bigger than 'it's brand new'. Think about how these models create ANYTHING, such as a coffee machine: they aren't 'selecting a coffee machine' from a list of what they understand to be a coffee machine, they Frankenstein their way from noise into what generally appears to be a coffee machine. Humans know what coffee machines generally look like, down to the brand, or what the soles of Converse shoes look like, or that a windbreaker has a zip going from top to bottom on the garment. AIs DO NOT. The solution is chaining LoRAs, but this is not just extraordinarily time consuming, it's impractical in the context of a scene containing as many objects as the average kitchen. Runway is working on partnering with brands right now to try to address this, but I think it will just turn into mandatory product placement.
      Then you have camera angles: anything 'extreme' (think David Lynch) cannot be done without LoRAs, even on single frames. But beyond that, if you wanted to make a frame of 'a man dressed elegantly at a coffee table', imagine how many thousands of tiny details you'd have to prompt vs grabbing a GH5 and filming. So people use Blender to create depth maps and attempt to generate based on that 'template' so at least things look coherent.
      99.999999% of the stuff you see that looks 'impressive' is either vid2vid or pure sheer luck that 4-second sequences feel somehow connected to each other.
      TLDR: We're a very, very long way away from 'taking what's in your mind' and putting it into images. Right now it's a giant random-seed lottery, or the same old 'cyberpunk' look where people forgive that the 'futuristic helmet' looks nothing like what they understand a helmet to be.

  • @angamaitesangahyando685
    @angamaitesangahyando685 a year ago +2

    AI waifus in 2025 is my only cope in life.
    - Adûnâi

  • @rookandpawn
    @rookandpawn a year ago +7

    the amount of research and knowledge in this video is off the charts. ty for your efforts! subscribed, amazing content.

  • @darii3523
    @darii3523 a year ago +15

    Ai is growing bigger and bigger

  • @owencmyk
    @owencmyk a year ago +4

    What if you trained a diffusion model on videos by having it interpret them as, like... 3-dimensional pixel arrays, so you're basically extending its ability to generate images into the temporal dimension
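The idea above, treating a video as a 3-dimensional pixel array so that a model's spatial operations extend into time, can be illustrated with a toy 3D neighbourhood operation (a minimal NumPy sketch; the function name and shapes are illustrative, not taken from any real model):

```python
import numpy as np

# A video as a 3D "pixel array": (frames, height, width). The idea in the
# comment is that a generative model's 2D receptive field can simply be
# extended into time. This toy kernel averages over a 3x3x3 neighbourhood,
# i.e. across neighbouring frames as well as neighbouring pixels.
def spatiotemporal_smooth(video: np.ndarray) -> np.ndarray:
    t, h, w = video.shape
    out = np.zeros_like(video, dtype=float)
    for i in range(t):
        for j in range(h):
            for k in range(w):
                # NumPy clips out-of-range slices, so edges just use a
                # smaller neighbourhood instead of raising an error.
                patch = video[max(i-1, 0):i+2, max(j-1, 0):j+2, max(k-1, 0):k+2]
                out[i, j, k] = patch.mean()
    return out

video = np.random.rand(4, 8, 8)   # 4 frames of 8x8 "latent" video
smoothed = spatiotemporal_smooth(video)
print(smoothed.shape)  # (4, 8, 8)
```

In a real spatio-temporal diffusion model, learned 3D convolutions or temporal attention play the role this fixed averaging kernel plays here.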

  • @VJP8464
    @VJP8464 a year ago +2

    We’re living in a truly unique age in human history; there’s the time of human history spanning a great many years before advanced technology, and in the future will be the time of the virtual being indistinguishable from reality, and everything being digital and supremely unnatural, which will span the time from several years from now to the end of humanity
    We’re the only humans who will ever get to experience the transitional phase between those two time periods, which will span only 1-200 years of the tens of thousands we have been/will be on this earth
    We’re the Guinea pigs of the technological future
    What we do these days is going to be critically important to the fate of people in the future, I sincerely hope we use our ‘trial run’ position for the good of everyone, especially since it’s too easy to use technology for malice

  • @TAREEBITHETERRIBLE
    @TAREEBITHETERRIBLE a year ago +2

    *_please keep watching_*

  • @ceticx
    @ceticx a year ago +2

    Thought i didnt care about this at all but you kept me for all 20 minutes

  • @shakaama
    @shakaama a year ago +1

    so which do i use?

  • @b.delacroix7592
    @b.delacroix7592 a year ago +2

    No way any of this will be used for evil. Nope.

    • @theyoten1613
      @theyoten1613 9 months ago

      Every technology was used for evil. That's a non-argument.

  • @dan323609
    @dan323609 a year ago +1

    That day will come, when I try using nuke copycat with SD. Btw I made some tests and it was very not bad

  • @DG123z
    @DG123z a year ago +3

    Once it gets 3D modeling instead of videos made solely of images, the movement will become a lot more realistic.

    • @robertceron9056
      @robertceron9056 a year ago +2

      CSM and Imagine 3D do it, but Nvidia's Picasso AI will have a better version

  • @greyowlaudio
    @greyowlaudio a year ago +1

    Oh no...
    *_he's hot._*

  • @francesco9703
    @francesco9703 a year ago +1

    I need the sauce for the Kita gif at 6:14, it looks so clean

  • @Kisai_Yuki
    @Kisai_Yuki a year ago +2

    IMO, a lot of these techniques are... poor, but not for the reason you'd expect. The reason is that the underlying hardware needed to get a good result is out of reach. Starting with Stable Diffusion itself: yes, it can run on a smaller GPU, but the input training data was already low resolution (512x512) and it is incapable of generating anything else. That size was picked so it would fit on existing hardware. As soon as you tell SD to generate something bigger, the result is not a "higher resolution" image, but rather a different image with more chaotic data in the same palette.
    What is needed for good results is datasets that start out at the resolution of the output, so 2K, 4K, 8K, and that means GPU video memory must increase substantially each time. To get an 8K image you need 256 times as much VRAM as for 512x512, so if you needed 8GB for 512, you need 2TB for 8K. That's not even possible on an Nvidia DGX (which has 320GB). Given present hardware, a 1K image would need a 32GB device.
    What I think is going to have to come out is a tile-based model that renders 512x512 portions of each image and stitches them together, which means figuring out how to tell the AI that it's part of the same frame.
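The memory-scaling argument in the comment above is simple quadratic arithmetic: pixel count, and hence activation memory, grows with the square of the image edge length. A quick check of the comment's numbers (assuming, as the comment does, that memory scales linearly with pixel count, which ignores resolution-independent weight memory):

```python
# Activation memory scales roughly with pixel count, i.e. with the square
# of the image edge length (a simplifying assumption from the comment;
# real models also hold resolution-independent weights in VRAM).
def vram_scale(base_edge: int, target_edge: int) -> float:
    return (target_edge / base_edge) ** 2

print(vram_scale(512, 8192))      # 256.0 -> 8K needs 256x the VRAM of 512px
print(8 * vram_scale(512, 8192))  # 2048.0 GB = 2 TB, matching the comment
print(8 * vram_scale(512, 1024))  # 32.0 GB for a 1K image
```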

    • @phizc
      @phizc a year ago +2

      Stable Diffusion XL is trained on ~1-megapixel images (1024x1024 plus some other specific aspect-ratio resolutions), and the default generation size is 1024x1024. It can run on 8GB GPUs, AFAIK.
      For SD 1.5 models you can upscale with ControlNet tile. It splits the image into a grid and adds detail to each tile, then combines them back into a seamless image.
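The tiled approach described above, splitting an image into a grid, processing each tile, and recombining, can be sketched like this (a toy NumPy version with non-overlapping tiles; real ControlNet-tile pipelines overlap and blend tiles to hide seams, and `fn` would be a diffusion detail pass rather than a simple function):

```python
import numpy as np

# Toy version of tile-based processing: split an image into fixed-size
# tiles, run a per-tile function on each, then stitch the results back
# into a full-size image. Non-overlapping tiles are used for clarity.
def process_tiled(img: np.ndarray, tile: int, fn) -> np.ndarray:
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            out[y:y+tile, x:x+tile] = fn(img[y:y+tile, x:x+tile])
    return out

img = np.arange(16.0).reshape(4, 4)
doubled = process_tiled(img, 2, lambda t: t * 2)  # stand-in for a "detail pass"
print(np.array_equal(doubled, img * 2))  # True
```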

  • @nefwaenre
    @nefwaenre a year ago +2

    i have been using SimSwap for 2 years now. i use it mainly to animate my characters in real life. i am waiting for the day when i can inpaint a consistent video. For example: change the shirt colour of my subject in the video.

    • @finallyhaveausername5080
      @finallyhaveausername5080 a year ago +3

      If you're just looking to animate a character based on real life movements then you could try something like EbSynth? you inpaint one or two keyframes per type of shot and it generates the rest.

    • @nefwaenre
      @nefwaenre a year ago

      @@finallyhaveausername5080 Thanks for the info. It's not just a character, i have these faces that i've created (there's only a few) from my paintings and dolls that i have and so i only need their faces to be there, which is why i use Simswap. Cuz i can have tons of videos with just these few characters this way.
      But i can't really post the videos online cuz then people might say it's stolen content, even though, this is an absolutely personal project and i have no intention of sharing this on a money based platform.
      So, i just want to change the shirt colours and maybe the bg from these simswap videos so that i can post online.

    • @SatriaTheFlash
      @SatriaTheFlash a year ago +1

      You should change to FaceFusion right now, it has better results than SimSwap

  • @obboAR
    @obboAR a year ago +1

    You're my go-to AI image to Image to video video style text to Gan video multi frame to image generator, YouTuber.

  • @Spamkromite
    @Spamkromite a year ago +1

    Not only out of hand. Most of the ecosystem of their databases was made from footage and pictures stolen from all over the internet across 5 years, even from private videos and anything you sent through messengers. Once that is found out, the owners of those sites will be sued into nothingness, especially when frames from movies and other copyrighted animated films are discovered. Like, we can't use those frames when we make our videos and upload them to YouTube, and we are banned from getting a single monetization. Why can these sites make cash with the same footage and double down by using even full movies? But that's just me thinking too hard 🤔

  • @Tarbard
    @Tarbard a year ago +1

    This video really activated my neurons.

  • @JazevoAudiosurf
    @JazevoAudiosurf a year ago +1

    wow i actually learned something

  • @Donxzy
    @Donxzy a year ago +1

    As a former hobbyist with SD and photoshop, this video cracks me up and it's accurate indeed

  • @Marcytheeditor
    @Marcytheeditor 24 days ago

    4:58 what is this video from? Why are there guys fighting? Can someone tell me?

  • @user-ez7ls2du9c
    @user-ez7ls2du9c a year ago

    Searching for this topic (developments/future of AI video generation) is near impossible. You will get 1000s of search results of mainstream media bull news and a truckload of clickbait videos with titles suggesting AI will destroy the world or whatever industry they picked today. Anyone with a little knowledge of AI and a morsel of honesty will tell you AI is just an overhyped chatbot at this point and will be for at least the next decade. Maybe there will be some developments in video generation in the coming years, but we will have to wait and see. AGI is a pipedream; there is literally no evidence or theory to suggest it will ever happen, no matter how much processing power you may have lmao.

  • @issay2594
    @issay2594 9 months ago

    this thing doesn't progress well because it goes the wrong way. it's like pushing on the wrong side of a lever. to make an analogy, it's like teaching a human to dream with no hallucinations, to see a totally coherent movie in your sleep with no strange things happening. once there is step-by-step reasoning + a firm understanding of physical reality (what is possible and what is not), these things will start making hyperrealistic movies overnight, just the way they do pics now. the same approach that was used for pics won't work for movies, simply because the blind associations that image-generating neural networks use with good results, after a crazy amount of training, would for movies require magnitudes more training, since way more can happen in each fraction of a second. it's like adding several more dimensions to the task's complexity. just wait for the reasoning and it will happen overnight.

  • @EllyCatfox
    @EllyCatfox a year ago

    superconductors are real btw they just dont got em working at room temp yet :P

  • @ellen4956
    @ellen4956 a year ago

    I wondered if a youtube channel called "curious being" was using AI for the presenter, because she doesn't look natural to me. My daughter said she thinks it's a real person. I don't. Can someone check it out and let me know? It's always about history and pre-history, but a young woman stands in a room with either a blank wall or a wall with a painting next to her.

  • @chynabad9804
    @chynabad9804 a year ago +1

    Thank you, nice snapshot of the current capabilities.

  • @The3kproduction
    @The3kproduction 11 months ago

    thats next level catfish lol

  • @Francis_UD
    @Francis_UD a year ago

    What a click bait yeah thx for wasting 20 min of my life

  • @the_proffesional1713
    @the_proffesional1713 a year ago

    AI will consume reality; the more realistic it gets, the more vigilance we need against it.