NVIDIA’s New AI: Nature Videos Will Never Be The Same!

  • Published: Dec 29, 2024

Comments • 241

  • @itsjusttmanakatech1162
    @itsjusttmanakatech1162 Год назад +258

    2 min papers about Nvidia are always satisfying…

    • @kingx1180
      @kingx1180 Год назад

      Amd sucks lol

    • @jackinthebox301
      @jackinthebox301 Год назад +9

      @@kingx1180 eh, AMD and Nvidia are two very different companies. Nvidia no doubt has many of the best engineers, but AMD at least hasn't forsaken the gaming sphere in favor of the AI/compute industry. Sure, AMD wants a piece of that pie, and they're trouncing Intel on the CPU side of things, but Nvidia has seen most of the explosive growth in the compute sector, to the point that it makes them more money than consumer graphics cards. Couple that with the past few years of crypto mining, and PC gamers are stuck choosing between capable yet overpriced cards and, well, abused hand-me-downs.
      Nvidia is no longer a graphics card company; they are an AI hardware/software company.

    • @harrazmasri2805
      @harrazmasri2805 Год назад +2

      ikr, engineers at nvidia are amazing

    • @v.f.38
      @v.f.38 Год назад

      @@jackinthebox301 Are you actually going to compare the magnitude of these industries? It sure is more realistic than Meta's pivot from a social network into a VR universe. (Please read this in a calm tone, I'm just being objective, thank you.)

    • @jackinthebox301
      @jackinthebox301 Год назад +2

      @@v.f.38 I honestly have no idea what you're trying to say. It's been a minute since I looked at their financials, but if I remember correctly commercial compute surpassed consumer graphics revenue for Nvidia in 2020.

  • @Arkryal
    @Arkryal Год назад +112

    I've been waiting for exactly this, for years.
    I dabble in permaculture... basically landscape design for the purposes of enhancing the utility of the land. One thing you do often is "shadow mapping"... if you plant 300 different species of plants, you have to factor in time in their placement, as some plants will grow to shade others, slowing their growth. So you need to accurately predict how things will look as the sun moves across the sky, from sunrise to sunset, across seasons, and over the course of years or even decades. I know where the shadows of a building will fall because the building doesn't move and grow. But plants do, and at variable rates which are dependent on the growth of their neighbors. You can very quickly end up with millions of variables.
    This can actually increase things like crop yields or improve wildlife habitat if incorporated into planning. It's not just about pretty pictures, you can feed people with this technology. There are tons of applications for this. It could reduce home heating and cooling energy costs. It can significantly reduce water usage and fertility loss on land. In the hands of experienced planners, this could enhance windbreaks and reduce soil erosion.
    And I can easily imagine similar impacts on civil engineering and city design. This could impact commodity futures and influence financial markets. I understand this is not designed as a predictive model, but it's a huge step in that direction, and the most difficult one to crack. We've used similar methods, much more crudely, for decades, with far poorer results, and still optimized various design systems by astronomical margins. This could be a game-changer in ways far exceeding the intent of the developers.
    Current methods basically involve scanning in models of hundreds of plants per species, for hundreds of species, loading them into something like Unreal Engine to do the lighting prediction, and then swapping the model of each plant over time for one of a later growth stage under the respective conditions of the original model to create virtual time-lapses. It's massively labor-intensive and often cost-prohibitive. This is about 70% of the way to replacing those methods, and very cheap on resources.
    This is why I like nVidia's research. It has applications outside of simply making images for the sake of making images. It has real-world applications.

    • @gosti500
      @gosti500 Год назад +8

      interesting read, thanks

    • @axl_ai_music
      @axl_ai_music Год назад +2

      I bet there are models specifically designed to predict plant development, and maybe they can even be integrated into game engines, or Blender, to produce lighting simulations. This system only smooths the transitions between video moments, selectively keeping or discarding visual data.
      Probably not very useful for your use case. With some fine-tuning someone could maybe adapt this to simulate the growth of plants along with their shadows, but it's a long shot.

    • @michaelleue7594
      @michaelleue7594 Год назад +5

      I'm not sure I understand what value this adds. I mean, this isn't exploring real changes, it's just trying to make intermediate steps between real photos that look "normal". Not only is this not a predictive model, it isn't a plant growth model at all. It's a plant "appearance" model. Unless you already have a picture on both ends of the time scale you're interested in, this model gives you nothing, and when you do, this model gives you something that looks like the plant in question, but only to the extent that it's moving the image from point A to point B.
      Nothing about the technique here implies the ability to extrapolate beyond the endpoints, at all, even hypothetically, and even if it did, it wouldn't be using any real data about the plant and couldn't predict anything real about its growth.

    • @Arkryal
      @Arkryal Год назад +2

      @@michaelleue7594 That's a fair point. To clarify, we do in fact have long-duration timelapses. It's specifically the dynamic lighting that is of interest. It's impractical to figure out the lighting at every stage of development in different locations: the angle of the sun in the sky at various times of day and night, throughout the seasons. The fact that it isn't 3D doesn't matter if you can take the pixel data from a simulated light source.
      You could have 20,000 images taken in timelapse of a single plant, but under largely artificial lighting. How the light hits it relative to its surroundings at a different time / date / longitude and latitude, however, is more complicated. Because it can smooth lighting and apply it to a scene taken under different lighting, you can then use a still image and get the estimated luminosity of the pixel data, dump those values to a spreadsheet, and use linear regression on the table of pixel luminosity data to work it out for a volume estimate (a rough sketch of that step follows at the end of this thread). The predicted numbers can be used as an efficiency rank.
      It just needs to look similar to what the actual scene would be under different lighting, with a fair degree of accuracy.
      It's definitely being used outside of its intended purpose. But I think it could be adapted to suit that need better than the tools we currently have, specifically because it can work from timelapse images and doesn't need a volumetric model of each stage of growth.

    • @DuckieMcduck
      @DuckieMcduck Год назад

      is this chatgpt
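
      A rough sketch of the luminosity-and-regression step @Arkryal describes above, assuming Pillow and NumPy and purely hypothetical file names for the relit stills; this is an illustration of the idea, not the actual workflow:

        import numpy as np
        from PIL import Image

        def mean_luminance(path):
            # Rec. 709 luma weights applied to an RGB still from the relit timelapse.
            rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
            return (0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]).mean()

        # Hypothetical relit stills, one per simulated week of growth.
        frames = [f"relit_week_{i:02d}.png" for i in range(20)]
        weeks = np.arange(len(frames), dtype=np.float64)
        luminances = np.array([mean_luminance(f) for f in frames])

        # Linear trend of received light over time; the slope can stand in for the
        # crude "efficiency rank" mentioned above.
        slope, intercept = np.polyfit(weeks, luminances, 1)
        print(f"luminance trend: {slope:+.3f} per week (intercept {intercept:.1f})")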

  • @btcprox
    @btcprox Год назад +116

    Wonder if we could also apply this to street view imagery to get extremely smooth time lapses

    • @pavkey88
      @pavkey88 Год назад +9

      This is a really great idea

    • @ALIENCOOL666
      @ALIENCOOL666 Год назад +6

      Please Microsoft, start integrating all these new AIs into your products, so Google will need to do it too.

    • @m3rl1on
      @m3rl1on Год назад

      not only that, but an actual interactive walkable street view with WASD keys

  • @kinngrimm
    @kinngrimm Год назад +14

    We are in a time where fake looks better, and at times more real, than reality itself.

  • @karlkastor
    @karlkastor Год назад +23

    This is so cool. Since this is open source, I want to try it on one of those human timelapse videos with titles like "I took a picture every day from age 10 to 20". I will try it this weekend; hopefully one of those videos is under an open-source license so I can post the result.

  • @AshT8524
    @AshT8524 Год назад +28

    I'm gonna try this on my timelapse images. Normally professionals use camera techniques and post-processing to get the desired effects, but this could reduce a lot of the hard work required to get decent-looking simple timelapse videos, and it would especially help if you weren't able to capture enough images on the spot.

  • @Poney01234
    @Poney01234 Год назад +5

    3 years ago, I took pictures of growing crops from different angles, every day at 10am, for 3 months. Due to the lack of consistency in the framing and weather conditions, I had almost given up trying to make something out of it.
    A new hope has just arisen! Thanks Karoly and Nvidia :)

  • @OrangeC7
    @OrangeC7 Год назад +16

    "How on Earth do we express these videos as pictures in the paper?" In my opinion, missed opportunity by the researchers to turn their research paper into a flipbook.

  • @warpedmine9682
    @warpedmine9682 Год назад +14

    Every time I wake up and watch one of your videos you make me smile with the charisma you add to these videos.

  • @silverchairsg
    @silverchairsg Год назад +27

    To be honest, I feel that when we get to the stage where we can make everything perfect (in videos/images, art and the like) and change or create everything instantly on a whim, it starts to lose its meaning. Paradoxically, at that point it is the imperfections that become meaningful.

    • @Mr-zv9kl
      @Mr-zv9kl Год назад

      We can get everything now.

    • @uponeric36
      @uponeric36 Год назад

      That's a critical view of things, but meaningless in a broader sense. Whether the art, or anything else these AIs affect, is meaningful is really only a personal, subjective question. These are powerful tools, and therefore you can expect them to take their place wherever appropriate and allowed across the entire market. If that doesn't happen, then we were mistaken about their potential and power.
      What happens to AI in the market and across society is where the discussion becomes more interesting to me, as that takes it to the level of its actual effect across billions of people; society always has to be careful but open in the management of powerful tools, lest it hand the public nukes, but also not so tyrannical that we'd make driving cars illegal, for example. Point being: the discussion of how it is appropriate to use AI should come before the discussion of how meaningful the works it touches/generates are.

    • @DurzoBlunts
      @DurzoBlunts Год назад +1

      We'll go full circle then, back to handmade human art that has flaws.

  • @muhammedkoroglu6544
    @muhammedkoroglu6544 Год назад +7

    I wonder if images and videos should remain admissible in court, seeing the amount of realistic tampering made possible by AI research in recent years…

  • @Sekir80
    @Sekir80 Год назад +12

    Well, I have about 40000 images of a building in construction, I could definitely use this!

  • @arothmanmusic
    @arothmanmusic Год назад +10

    If humans can make incredible stuff like this, imagine how amazing the papers written by the new AI will be!

    • @anmolstudio1853
      @anmolstudio1853 Год назад

      1000x more incredible if the day comes when ai can do that

  • @2B87
    @2B87 Год назад +2

    The most concerning thing about the whole AI mangling of graphics is the bending and distortion of video material to make visual "reality" show a circumstance that never happened. I get it, it's solely to be able to adjust and tweak certain details to a less glitchy outcome; that's the noble intent, and I know it comes with the process and is in the nature of the development to have the most amount of control over the outcome. But this aspect is certainly on the minds of the powers that be, to be used against us, and I can imagine a ton of morally wrong examples where this stuff will be heavily exploited to keep certain info from us in the future, as if censorship isn't already rampant. Crazy times…

    • @hedonismbot3274
      @hedonismbot3274 Год назад

      People are naive. We are playing with fire. No restrictions. The impact of mobile phones and the internet will pale in comparison, and we are still struggling with and don't fully understand that impact. The tech guys are dreaming up a utopia. Let's hope it won't turn into a dystopian, Orwellian nightmare.

  • @trazwaggon
    @trazwaggon Год назад +36

    While this is interesting and I love it, the more I learn about AI generated/aided images, and the more these AIs advance, the more I am worried about our ability to distinguish truth from fiction. We seriously need to think about useful ways to give everybody the tools necessary to understand the difference.

    • @pixiearchive5078
      @pixiearchive5078 Год назад +10

      Totally agree. The more I see these AI techniques come out that can fool human eyes, the more I wish I could live a simpler life and stay away from anything digital.

    • @solarmkarus2845
      @solarmkarus2845 Год назад +7

      As the radicals say "Adapt or die." I totally agree with your thoughts though.

    • @rkan2
      @rkan2 Год назад

      All certain information needs to be based on some blockchain. It is the only way you could fix it. Until it busts the crypto!

    • @axl_ai_music
      @axl_ai_music Год назад +1

      To notice the difference.
      At some point there won't be any way to tell real and made-up images apart if there is no visible watermark on the generated images/videos.
      Add to that the arrival of fully immersive VR and voilà! People won't care about reality anymore if it is just as fake as VR. And slowly we will descend into a world where nothing matters to anyone except fast, small moments of empty gratification through a connected device.

    • @travissmith5994
      @travissmith5994 Год назад +2

      @@rkan2 A blockchain does nothing to address the problem; it's as useless as writing your own certificate of authenticity for a photo you edited in Photoshop. You're not certifying that nothing has been changed in the image, you're just saying you were involved in some way (possibly only by uploading it).
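
      A small, standard-library-only illustration of that point: a hash (and anything anchored to a chain) only binds to the exact bytes you feed it, so it says nothing about whether those bytes were already edited. The file names are hypothetical.

        import hashlib

        def fingerprint(path):
            # SHA-256 of the file's bytes; an identical digest means identical bytes, nothing more.
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()

        # Hypothetical files: an original photo and a version retouched in an editor.
        print(fingerprint("photo_original.jpg"))
        print(fingerprint("photo_retouched.jpg"))
        # The digests differ, but neither tells you which file (if either) reflects what
        # the camera actually captured; registering one on a blockchain only timestamps
        # that particular set of bytes.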

  • @dragonskunkstudio7582
    @dragonskunkstudio7582 Год назад +4

    5:50 I wonder if it's an illusion but it seems that the legs of the chairs are moving one leg at a time, like it was a chair animal.

    • @OrangeC7
      @OrangeC7 Год назад +2

      I can't tell if a chair animal is an alice in wonderland character or an SCP

    • @rkan2
      @rkan2 Год назад

      I'm more like... why shouldn't the chairs be moving, if they moved in the dataset?

    • @axl_ai_music
      @axl_ai_music Год назад

      Maybe that's why they're moving. The AI mistook the chairs for animals, so it allowed them to move like the plants. And since they are "animals", the only acceptable movement patterns for them through time were those of an animal.

  • @coolkattcoder
    @coolkattcoder Год назад +6

    I wonder how this would work in an environment full of animals, for example a pond timelapse, as it seems like schools of fish might cause issues, a bit like those chairs. One day when I get a new GPU I'll test this.

  • @TheAkdzyn
    @TheAkdzyn Год назад +8

    I think the next big careers will be primarily in scientific data collection for AI systems to train on. If so, this implies new sensor technologies for high-sensitivity, high-accuracy data capture. It might even lead to mobile devices being upgraded with additional sensors to increase data-capturing quality and capacity: ultraviolet light, temperature maps, 3D lidar scans of objects and the environment, etc.
    Obviously, I'm just speculating, but I am curious about how training data will be collected and what technologies and careers will be created along the way.

    • @Carphi2000
      @Carphi2000 Год назад

      Sounds logical. Probably labeling things too. Manual labour is not dead yet I think.

    • @alileevil
      @alileevil Год назад

      An AI can probably do whatever you are suggesting. The next big careers will be politicians making laws regulating the use of AI because protests against the use of AI are on the way.

  • @alileevil
    @alileevil Год назад +2

    With an AI for just about anything, one wonders what future lies ahead for humans. We are creating a society based on computer-generated text, images, music and, pretty soon, ideas. While advancements like this deserve recognition, I cannot help but feel that a dystopian future is on its way.

    • @SethOmegaful
      @SethOmegaful Год назад

      Yeah... it's impossible not to imagine that.
      Optimistic view: it will make socialism/communism finally viable, and maybe even needed, in a world where humans don't need to work. In a world where machines can do every job better than us, governments will have to adapt to this new reality and build a new system for us. It could be paradise, with better health care, food for all, less resource waste, etc., but... it could go bad in the end due to mental health.
      Pessimistic view: AI will make work extremely scarce, where only the elite survives in the market. Governments will not adapt well and a huge crisis will arise, with poverty everywhere, crime rising and wars for countless reasons. It could also be the result of the perfect life created by the "optimistic view", where humans cannot deal with the lack of goals and challenges and mental health starts to decline really fast.
      Imo, AI should always be limited. Always a tool.

  • @crazedzealots
    @crazedzealots Год назад +2

    There is no way this guy's voice is a real human voice. He's like a cartoon voice. I love it though. I want to hang out with this guy in a pub and tell each other stories of ChatGPT and Midjourney for a few hours lol.

  • @aiandblockchain
    @aiandblockchain Год назад +1

    Just amazing. The speed of development is insane!

  • @Kerajj123
    @Kerajj123 Год назад

    Is there a website or place where I can track these papers when they are published? Which media channel is best to notify me when they come out?

  • @muf1772
    @muf1772 Год назад +1

    If it weren't for the missing data, I'd approach this problem from a different angle: since movement is often sudden with lots of "dead space" in between, I would attempt to craft a temporal seam carving algorithm to consolidate the frames where little is happening while preserving the gradual changes. Combine that with optical flow to compensate for things like camera movement, and you should have a much more stable base from which to further refine the output.
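
    A rough sketch of that frame-consolidation idea, assuming OpenCV and a hypothetical input file: a frame is kept only when enough has changed since the last kept frame, a crude stand-in for the proposed temporal seam carving, with Farneback optical flow noted as one off-the-shelf motion measure for the stabilization step.

      import cv2

      cap = cv2.VideoCapture("timelapse_raw.mp4")   # hypothetical input path
      kept, last_kept, total = [], None, 0
      CHANGE_THRESHOLD = 6.0                        # mean absolute pixel difference

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          total += 1
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          # Dense optical flow would be another usable change/stabilization measure, e.g.
          # cv2.calcOpticalFlowFarneback(last_kept, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
          if last_kept is None or cv2.absdiff(gray, last_kept).mean() >= CHANGE_THRESHOLD:
              kept.append(frame)                    # something actually happened here
              last_kept = gray

      cap.release()
      print(f"kept {len(kept)} of {total} frames")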

  • @hopps4117
    @hopps4117 Год назад +1

    While installing this, has anyone else been having an error involving MSBuild.exe? I have the correct 2019 version installed but can't seem to get around it.

  • @errorhostnotfound1165
    @errorhostnotfound1165 Год назад +3

    also, where does 2 minute papers get his papers?

  • @tiefensucht
    @tiefensucht Год назад +1

    Pure magic. Are there AI plugins for Photoshop & compatible out there?

  • @epicthief
    @epicthief Год назад

    It's crazy how far Nvidia is pushing this tech and I'm so grateful that they share these papers with us

  • @SurrealMachines
    @SurrealMachines Год назад +3

    Great video. Can you please cover more AUDIO papers?

  • @davidgitano27
    @davidgitano27 Год назад

    I love how "Hold on to your papers..." got a pop-up emoji 😂, Great content as always Two Minute Papers 👌

  • @hegedusuk
    @hegedusuk Год назад

    You’re so enthusiastic! Love your videos

  • @mattclagett778
    @mattclagett778 Год назад

    what is this program running this where you are modifying inputs?

  • @АнЖо-с7р
    @АнЖо-с7р Год назад +1

    2 Minute Papers has become 7 minutes, so I would like to play it at 3.5x speed, but Google still doesn't have an AI for that. Amazing paper regardless of the speed, though

  • @gaker19sc
    @gaker19sc Год назад

    I love how the chairs are just having a great time dancing

  • @Graeme_Lastname
    @Graeme_Lastname Год назад +1

    The chairs moving around really caught my attention. Creepy. 🙂

  • @girl6girl6
    @girl6girl6 Год назад

    I love your voice and accent. I think they're so adorable. Especially when you say, "and...!".

  • @Produciones_Basado
    @Produciones_Basado Год назад +7

    0:12 my house is in the photo Frankfurt city 😳😳😳😳😳😳😳😳😳😳

  • @ComeONzzZZzzZZzz
    @ComeONzzZZzzZZzz Год назад

    Greetings from Frankfurt, Germany! Love your videos, big thanks :)

  • @Teapode
    @Teapode Год назад +3

    🤔 BBC and other documentaries use artificial flash lighting to capture plant growth at night, so it stays consistent. What a city timelapse could be used for, I don't know. I assume the breakthrough is that it understands the lighting changes. Meanwhile NVIDIA pushes its DLSS 3, which produces ugly artifacts in games when adding in-between frames for smoother output. And as PC reviewers have pointed out, games are smoother with DLSS 3 but have higher latency, since the extra frames must be computed. So adding more computation for DLSS 3 would mean better quality but even worse latency.
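
    As a back-of-the-envelope illustration of that latency point (the numbers below are assumptions, not measurements): an interpolated frame can only be shown once the next real frame exists, so frame generation adds at least one source-frame interval plus the time to synthesize the extra frame.

      base_fps = 60.0
      frame_time_ms = 1000.0 / base_fps     # ~16.7 ms between real frames
      generation_ms = 3.0                   # assumed cost of synthesizing the in-between frame
      # The in-between frame depends on the *next* real frame, so display lags by
      # roughly one real frame interval plus the generation cost.
      added_latency_ms = frame_time_ms + generation_ms
      print(f"~{added_latency_ms:.1f} ms of extra latency at a {base_fps:.0f} fps base")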

  • @comtronic
    @comtronic Год назад

    Really love this channel! I'm almost 100% sure the narrator voice in these videos is AI generated?

  • @LinfordMellony
    @LinfordMellony Год назад

    To be able to generate images using AI with speech/text is amazing! Even simple words can create the most satisfying-looking images, especially with the progressing technology on graphics. I'm curious to try creating one using Bluewillow, what do you guys think?

  • @ThymeHere
    @ThymeHere Год назад +1

    Oh wow, that's really cool!

  • @pouringsalt3460
    @pouringsalt3460 Год назад

    The march of progress, baby.

  • @design.dmitri
    @design.dmitri Год назад

    I’m thinking of four-dimensional pictures, where you can watch famous landmarks not just in the 3 dimensions of space, but also in the fourth dimension which is time. And all this would be made from the thousands of tourist photos that already exist.

  • @FffjjjaAa7
    @FffjjjaAa7 Год назад +1

    Would be great to see a visual comparison of the NN topologies driving the changes.

  • @michalwalks
    @michalwalks Год назад

    Can I ask how we can use this?

  • @zueszues9715
    @zueszues9715 Год назад +4

    By the end of 2023, AI will rule over movie effects and movie making..
    Mark my words, guys

  • @O55P
    @O55P Год назад +1

    Now if only we could do this in real time, cyberpunk level augmented reality would be at hand.

  • @clueseeker2226
    @clueseeker2226 Год назад

    It would be nice to have smooth interpolated playback of the views obtained from the travelling Mars rovers.

  • @chrono5torm
    @chrono5torm Год назад

    "What a time to be alive" exactly!

  • @DanFrederiksen
    @DanFrederiksen Год назад

    What a time to be alive :)
    He sacrifices THE ROOK! bonus points for those getting the reference.

  • @robertdouble559
    @robertdouble559 Год назад +1

    too smooth. looks fakey fakey

  • @seanscon
    @seanscon Год назад +1

    Hi Dr KZF
    Can you please make a video explaining how data is saved inside these great algorithms? For example, what kind of data structures are used by style transfer algorithms, and what members do these data structures have (like arrays of ints? arrays of floating points?) representing which aspects of the style? Thank you.

    • @himan12345678
      @himan12345678 Год назад +1

      In his earlier videos he may have already explained what you're curious about. I know he has one on "how AI sees" or something similar to that. It should be full of hallucinogenic images of various things like cats and dogs.
      And in a way your question doesn't really make much sense, but that's okay. Neural nets don't have traditional programming data structures. You will hear jargon like latent images or latent data when talking about information stored within a trained AI model. This refers to the weights of the various node connections and their output biases; that is where the information is stored. But by virtue of them being a net, there is no one-to-one mapping between any individual value and the information it represents. It is the cumulative effect of multiple values together that represents the information, and so any one individual value contributes to multiple representations. Think of it as multiple layers of "images" laid out in various configurations across different regions of the model, overlapping without issue. As long as no two "images" are identical, you can store as many such permutations as the size of the model allows. There are many issues with this explanation of course, but hopefully it better illustrates how to visualize the "data structure" that can be found in AI models.
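
      To make the "arrays of floats" part concrete, here is a tiny, purely illustrative two-layer network in NumPy (not any real model's layout): everything the network "stores" lives in the arrays W1, b1, W2, b2, and no single entry corresponds to one recognizable piece of a style or an image.

        import numpy as np

        rng = np.random.default_rng(0)

        # The entire "memory" of this toy network is four float arrays.
        W1 = rng.normal(size=(8, 4))   # weights: 8 inputs -> 4 hidden units
        b1 = np.zeros(4)               # hidden biases
        W2 = rng.normal(size=(4, 2))   # weights: 4 hidden units -> 2 outputs
        b2 = np.zeros(2)               # output biases

        def forward(x):
            hidden = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden layer
            return hidden @ W2 + b2                 # linear output layer

        print(forward(rng.normal(size=8)))
        # Training only nudges the numbers inside W1, b1, W2, b2; the "knowledge" is
        # spread across all of them rather than stored in any single entry.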

  • @iggswanna1248
    @iggswanna1248 Год назад

    i didnt know manny khoshbin was a scientist... great success

  • @pamdemonia
    @pamdemonia Год назад

    Personally, I love the chairs moving!

  • @wewyllenium
    @wewyllenium Год назад

    This would be awesome for timelapse of kid to adult photos.

  • @SP-ny1fk
    @SP-ny1fk Год назад

    It would be great if the AI could control the camera settings too, so that there isn't any loss of data through over/under exposure over time
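
    A toy sketch of what that could look like in software, using a simulated sensor rather than any real camera API (the response model and target value are assumptions): measure the frame's brightness and nudge the exposure toward a mid-grey target before the next capture.

      import random

      TARGET_BRIGHTNESS = 0.5                 # mid-grey on a 0..1 scale
      exposure = 1.0

      def simulated_capture(exposure, scene_light):
          # Toy sensor: brightness scales with exposure and scene light, then clips.
          return min(1.0, exposure * scene_light)

      for step in range(10):
          scene_light = random.uniform(0.2, 2.0)          # changing daylight over the timelapse
          brightness = simulated_capture(exposure, scene_light)
          # Move exposure toward the target instead of letting frames blow out or go black.
          exposure *= (TARGET_BRIGHTNESS / max(brightness, 1e-3)) ** 0.5
          print(f"step {step}: light {scene_light:.2f}, brightness {brightness:.2f}, "
                f"new exposure {exposure:.2f}")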

  • @RealmsOfThePossible
    @RealmsOfThePossible Год назад +1

    Yep, we are in a simulation

  • @SethOmegaful
    @SethOmegaful Год назад

    Looks like gaming frame generation being used in timelapses. Looks good.

  • @f4ith7882
    @f4ith7882 Год назад +3

    Imagine if AMD could do something for their users with machine learning...

  • @funnycompilations8314
    @funnycompilations8314 Год назад

    What a time to be alive!

  • @antiHUMANDesigns
    @antiHUMANDesigns Год назад +3

    Far from perfect -- it literally hurts my eyes to look at, because everything's got some weird ghosting, like I'm seeing double, and my eyes keep trying to adjust to it.

  • @mrpupuka7706
    @mrpupuka7706 Год назад

    Hungarian accent :) I love it. Great channel

  • @theflashevo6137
    @theflashevo6137 Год назад

    I love it, thank you for the effort.

  • @leonlee8524
    @leonlee8524 Год назад +1

    You know the future is eventually going to be great once this technology is implemented in gaming. Imagine God of War, or Untitled Goose Game 🤩😂
    Thanks for the vid 😄

  • @offnight1
    @offnight1 Год назад

    How to use this thing?

  • @therealOXOC
    @therealOXOC Год назад

    It makes me so happy that I'm living through this.

  • @ZaxstUser
    @ZaxstUser Год назад +1

    Very cool indeed! But it's pretty wobbly around plants and it looks liquidy; I guess it will be fine two more papers down the line ;) Still impressive!

  • @Slayceos
    @Slayceos Год назад +2

    this needs to be in video games

  • @MarkoKraguljac
    @MarkoKraguljac Год назад

    Not sure what to think of this. Technically, it's more than fascinating.

  • @blimolhm2790
    @blimolhm2790 Год назад

    every real estate photo of rural areas will be beautiful 😂 doesn't matter the weather!

  • @RealSmartHacks
    @RealSmartHacks Год назад

    It’s like curing votes during adjudication. Filling in and making up results based on the operator’s expected outcome. 😅

  • @angellestat2730
    @angellestat2730 Год назад +2

    The day an AI starts to make better reviews of AI progress than this channel, I guess he will no longer say "What a time to be alive!"
    In fact, the AI would be the one saying that :)

    • @axl_ai_music
      @axl_ai_music Год назад +1

      By the time an AI knows it's a great time for it to be alive, maybe there won't be anyone left for it to talk to, except robots and other AIs:
      "What a time for us to be the only ones left alive!". 🙃

    • @perfectfutures
      @perfectfutures Год назад +1

      Seeing as AI could keep up with and promote its own inventions, or follow feeds to see what other AIs are up to, this time may soon come. And rather than crediting teams of researchers, it may well be crediting itself.

    • @angellestat2730
      @angellestat2730 Год назад

      @@perfectfutures Yeah, these last decades our technology has grown exponentially, but I guess this will be absolutely nothing compared to what is coming. We need over 40 years of specialization and experience to achieve (with luck) one breakthrough in one specific field.
      One AGI can do it in all fields at once and then improve itself, first doubling in power every year, then every 3 months, then every few days, until it knows everything that can be known.
      I am here thinking about what I should learn next to improve at my coding job.
      But then I ask myself, what is the point?
      How much time do we have left before AI can do everything better than us?
      We are the biggest bottleneck for any fast progress, and AI developers are already finding ways to skip that bottleneck.

  • @ministerofjoy
    @ministerofjoy Год назад

    Thank you!👏🏼👏🏽💯

  • @pandoraeeris7860
    @pandoraeeris7860 Год назад +2

    Old McDonald had a server farm, AI AI oh!

  • @sachak
    @sachak Год назад

    I wonder if we can use this AI to “smooth out” your voice so it’s suitable for human consumption, like those time lapse videos?

  • @ChrisGuerra31
    @ChrisGuerra31 Год назад

    Dude! I could make a fan short of Stephen King's The Mist using ai soon enough!

  • @eichen97
    @eichen97 Год назад

    would be nice to see those improvements in middle frame generation in general

  • @EmaManfred
    @EmaManfred Год назад

    AI is now at the forefront of technology, as some fields like mobile phones are feeling somewhat stagnant nowadays, just waiting for the next eureka moment. So far, I've been having a lot of fun testing AI image generators like Bluewillow. Hoping for the best!

  • @TehNetherlands
    @TehNetherlands Год назад

    Consequences will never be the same.

  • @low.bow_1381
    @low.bow_1381 Год назад

    And how do you install and try this out? Anyone have experience so far? :)

  • @DataJuggler
    @DataJuggler Год назад

    I would like to see a building-construction time lapse done by this AI.

  • @SP-ny1fk
    @SP-ny1fk Год назад

    5:30 look at the mountain-peak though

  • @igoromelchenko3482
    @igoromelchenko3482 Год назад

    With every new paper, we must doubt our perception of reality more and more.

  • @lohphat
    @lohphat Год назад

    I would run the pre-eruption and actual time-lapse still photos of the 1980 Mt. St. Helens eruption through this.
    There have been attempts to tween the stills into "footage" augmented with CGI, but they're HORRIBLE.
    This might do a much more interesting job.

  • @insightcentral
    @insightcentral Год назад +6

    Nvidia, Google, Meta, Microsoft, OpenAI, Amazon, Apple.
    These companies are in an AI race and I love it, because the consumer will eventually be the winner!

    • @txorimorea3869
      @txorimorea3869 Год назад +1

      X

    • @cyancoyote7366
      @cyancoyote7366 Год назад +1

      Not really... they are working on AI to make more money off of people. They will keep pushing their subscription models, just to make sure the consumer owns nothing and has to pay regularly for everything they use, on top of rising rent, not to mention inflation, food prices, etc... All these companies are evil and they deserve to fall.

    • @MasterKey2004
      @MasterKey2004 Год назад +2

      Apple definitely isn’t, they aren’t even trying

    • @insightcentral
      @insightcentral Год назад +1

      @@cyancoyote7366 That's true. But I think if there are many companies in the market, prices will be competitive.
      Think about the monopoly IBM had before Microsoft, Apple, Dell, etc. were launched.
      Computers were very expensive back then.
      Competition is generally good for the consumer.

    • @kaveh_mnr
      @kaveh_mnr Год назад +1

      @@MasterKey2004 Apple doesn't have a history of showcasing faulty prototypes. They'll let others try and fail at AI UX before launching anything, just like they did with smartphones and VR headsets.

  • @hoaanduong3869
    @hoaanduong3869 Год назад

    A cool method with code is better than a cool method without code

  • @LG17Tube
    @LG17Tube Год назад

    The results are very good, actually so good that many will accept them as reality; so, what is being developed to distinguish AI-produced results from reality?

  • @fejfo6559
    @fejfo6559 Год назад

    I wonder if it works on those time-lapses of someone growing a beard. Or a child growing up.

  • @homer3189
    @homer3189 Год назад +2

    Why would we want fake nature videos?

  • @chaserivera1623
    @chaserivera1623 Год назад +2

    Nvidia's AI needs to give two minute papers a new voice narrator

  • @jokwon7343
    @jokwon7343 Год назад

    Next up: NVIDIA's NEW AI: Realtime Videos Generated Realtime with Realtime Inputs without GPU

  • @gregkrazanski
    @gregkrazanski Год назад

    hmm... I actually don't love the look of this. The details in detailed areas like leaves/branches/grass look like some false morph effect. Smooth areas like clouds and fog look great, though.

  • @AparnaModou
    @AparnaModou Год назад

    AI image generators like Bluewillow accept reference images and can possibly enhance them. In addition, editing tools are starting to incorporate AI.

  • @tjw2469
    @tjw2469 Год назад

    It is interesting, but I don't understand why we can't just play every frame at a higher frame rate?

    • @muhammedkoroglu6544
      @muhammedkoroglu6544 Год назад

      That is what is happening at 00:29. There is a lot of flickering
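
      One way to see why naive fast playback flickers is to measure how much the average brightness jumps between consecutive raw frames; a quick OpenCV sketch (hypothetical input file) of that measurement:

        import cv2

        cap = cv2.VideoCapture("timelapse_raw.mp4")   # hypothetical input path
        prev_mean, jumps = None, []

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mean_brightness = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean()
            if prev_mean is not None:
                jumps.append(abs(mean_brightness - prev_mean))   # frame-to-frame brightness jump
            prev_mean = mean_brightness

        cap.release()
        # Large average jumps are what you perceive as flicker when frames are simply
        # played back faster without any smoothing in between.
        print(f"average brightness jump: {sum(jumps) / max(len(jumps), 1):.2f}")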

  • @raddoxlogan7505
    @raddoxlogan7505 Год назад

    It would be possible to teach a neural-network-based AI to deconstruct mind/thought patterns and reconstruct those patterns as a simulated, virtual, interactive image. This would be an approximation. AI should be able to learn to translate our thought processes, and it would be able to reconstruct those processes to manifest the reconstruction in a virtual reality in real time.

    • @axl_ai_music
      @axl_ai_music Год назад

      That's a little like how some AI-aided brain-machine interfaces work: you give the AI training data with both EEG readings and a description of what each person was thinking at the time of the reading, and the AI learns what the electrical patterns in the brain mean, so later it can literally read someone's mind (provided that the person is wearing a helmet full of electrodes 😂).
      There is no visual representation from that yet, but probably once the adoption of brain implants such as Neuralink becomes massive, we will see huge advances in the interpretation, visualization and simulation of the human mind's processes.
      Maybe soon we will be able to explore each other's minds simply by wearing VR lenses.
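
      A minimal sketch of that kind of supervised setup, with synthetic "EEG" feature vectors standing in for real recordings (scikit-learn and every number here are illustrative assumptions, not how Neuralink or any real BCI works):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)

        # Fake "EEG" features: 200 recordings x 16 channels, each labeled with what the
        # person reported thinking (0 = "rest", 1 = "imagined hand movement").
        X = rng.normal(size=(200, 16))
        y = rng.integers(0, 2, size=200)
        X[y == 1, :4] += 1.0            # give class 1 a detectable pattern in four channels

        # Train on the first 150 recordings, test on the remaining 50.
        clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
        print("held-out accuracy:", clf.score(X[150:], y[150:]))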

  • @SlawcioD
    @SlawcioD Год назад

    Architects: we are all in ;).

  • @spider853
    @spider853 Год назад

    How do they even train this? O_o

  • @BananaHammyForYou
    @BananaHammyForYou Год назад

    That's it... this is too amazing. Sign me up, how do I get the required expertise and resources? I have a 4090, I hope I can use that.

  • @cragnap
    @cragnap Год назад

    Morphing character transformations (eg human to werewolf) in sci-fi/horror movies and games

  • @zagareth4604
    @zagareth4604 Год назад +1

    Ok, I'm nasty, I know...
    "If only an AI existed... that... somehow... could make... your speech... smooth as well" 0:44
    Sorry, I couldn't resist making that joke 😂 and it's not meant to hurt your feelings