Realtime 3D models, transparent AI videos, AI computers, consistent characters, full video control

  • Published: 12 Jan 2025
  • Science

Comments • 225

  • @astrovah 17 hours ago +5

    AI Search, thanks for using my Mona Lisa clips in the opening of the video ;) Nice recap of some amazing AI advancements.

  • @lalropekkhawbung 1 day ago +20

    7:40 The 2D-to-3D actually looks great!
    You can actually see it in 3D if you look past your screen and overlap the images.

  • @johnrussel9902 1 day ago +59

    Damn, it should be concerning by now how almost every model released is made by "Chinese names", even the American tools.

    • @ianwatson5767 1 day ago +14

      That's just how it is; most big tech comes out of China these days. If you notice, even when American universities put out an AI or robotics paper, every student name involved is Chinese. 😂

    • @VikingMarketer 1 day ago +6

      Because they aren't as hindered by guardrails, and the government is probably helping a lot

    • @Metapharsical 1 day ago

      @@ianwatson5767 They like to call it their "Thousand Talents Program".
      I call it espionage.

    • @Metapharsical 1 day ago +12

      @@VikingMarketer "No guardrails"... I might be inclined to agree in certain respects, but if you've ever asked DeepSeek or ByteDance's LLM about any "sensitive topic" (Tiananmen Square, North Korea, Xinjiang, etc.) you will quickly find they have VERY STRICT GUARDRAILS in their training.

    • @sakibtalks1806 1 day ago +7

      Why are you concerned in the first place?

  • @calebweintraub1 6 hours ago

    Nice work as always. Thanks for bringing these to our attention.

  • @joequese4809 1 day ago +4

    @7:28 A quick trick is to look at the left view and the right view, then cross your eyes; a third image will appear in the middle if you overlap them, and it will be in 3D. Like when you're looking at a Magic Eye image.

  • @peterp4037 19 hours ago +11

    The most incredible aspect of AI is the rate of progression in just a few years. Now think what it will be like in 20 years.

    • @Lugmillord 18 hours ago +4

      At this point I can't even think one month into the future.

    • @EMajdob 14 hours ago +1

      2027 should be the benchmark for what standard AI production software can do. When I say production I mean animation and CGI live-action films, since that's my niche.

  • @louiewashere3 17 hours ago +6

    I wonder how long it will be before we can insert ourselves into movies in real time in VR: for example, standing on the Titanic watching the characters and looking around while the story plays out, or standing on the Millennium Falcon, or standing in the room watching Jason Bourne fight the bad guy, or standing on the battlefield with the 300 as the battle happens all around you.

    • @thefryingpan951 11 hours ago

      It's already been doable for months, bro

    • @Bdbdbdh-h3k 4 hours ago

      @@thefryingpan951 but does it look good?

  • @pdjay8912 1 day ago +3

    This is the best AI news, in my book. Thank you :)😊

  • @realthing2158 19 hours ago +2

    I spent several hours trying to install Stable Point Aware 3D into ComfyUI but I was getting one error after another while trying to install the correct versions of various build tools used in the installation process, until I finally gave up. Why can't they release a pre-built, fully working program?

  • @avdecibel 17 hours ago

    TransPixar is exactly what I've been looking for. My plan is to use this AI-generated content for live club visuals, where you need a lot of black space; it also lets me use my live VJ software to add effects, which backgrounds would get in the way of. Thanks so much for sharing this!

  • @mathewsjoy8464 21 hours ago +3

    It would be good to see AI news outside of video generation/3D art/robots, because it feels like the same stuff each week at this point. Maybe stuff from other industries like farming/manufacturing etc.

  • @Murdalizer_studios 11 hours ago

    Big fan. I'm out here doing my part teaching digital artists lessons about AI. I want to be like you; I love you and look up to you. Thank you for fighting the good fight for AI and being a beacon of truth and knowledge about AI. You're amazing

  • @labmike3d 10 hours ago

    Did a quick test of the Stereo Crafter demo videos using my 3D-printed adapter for the old-school Google Cardboard setup... and honestly? Not bad at all! 😎 A couple of minor glitches with close-up and overlapping objects, but overall, it's pretty solid. Tested with SBS demo vids from Sora examples. 💡 Also, shoutout to the AI news - killer selection! No complaints this time, seriously. 👏 Just keep that quality high and don't even THINK about lowering the bar! 🔥 Keep having fun with it, and I’ll be watching for more! 🎥

  • @VintageForYou 23 hours ago

    Another fantastic video, as always. The VideoAnydoor looks good.👍

  • @MabelYolanda-c9i 11 hours ago

    Thanks for the excellent video!
    7:05 To be fair, this technology has existed since 2006, developed by CineForm, which is now part of GoPro. The encoders are still available for free. 90% of the 3D movies released in the market were shot in 2D and then converted to 3D, among them Titanic.
    16:26 They never show images with occlusion. I wonder why.
    17:55 It doesn't work that well. If you have a character with specific, complex hair that can't be prompted and needs consistency throughout an entire scene, it doesn't let you crop the image to your liking, like Vidu does. Still not ready for prime time.

  • @RestlessBenjamin 21 hours ago +5

    The technology to track where people are looking is kind of scary. I can foresee scenarios in which where you were looking could be used against you. You were looking too long at a store that someone robbed the next day? Suspicious! Your smart-glasses history will be reviewed. Even private, unrelated moments would be subject to review.

    • @Lyle-In-NO 19 hours ago

      Interesting point. "Where someone is looking" is one of the physical manifestations often used by behaviorists/profilers/etc. to infer thoughts/emotions, especially in the field of criminology.

    • @JikJunHa 19 hours ago +2

      It's not the technology itself; it's the way it's used. The technology itself is great and amazing, but if it's used to empower a dystopian system, then that's bad.

    • @RestlessBenjamin 17 hours ago

      @JikJunHa Agreed. It's the same with any tool or tech: some person or group will always use it to oppress or control another, which is why privacy safeguards and/or countermeasures are so important. Who knows, maybe big hats will come into style, lol, or something a bit higher-tech like personal low-powered lasers that interfere with surveillance tech.

  • @Nekotico 3 hours ago

    Before the 8:18 video demo, you can go full screen and do the cross-eye trick used for watching TikTok 3D-effect videos, and you'll see the same effect. TikTok's was better, though, or maybe this one is just too small, but it's there for sure... so, well done

  • @ericliao4915 23 hours ago +2

    Finally, we must have robot arms, which can assemble robot arms, which can manufacture AI chips, which can create AI videos.

  • @tdplayert 1 day ago +1

    31:15 that relighting diffusion model seems to be a fascinating thing to look a bit further into by itself

  • @High-Tech-Geek 2 hours ago

    7:09 You can see the 3D effect for yourself without glasses or aids if you converge the 1st and 2nd images (ignore the 3rd image on the right) using either cross-eyed or look through techniques similar to viewing Magic Eye puzzles. Doing this, you can watch all the videos in 3D. I found the resulting depth to be very minimal.

  • @yak-machining 23 hours ago +4

    Nvidia is announcing too many things and products at the same time. It's pretty confusing for developers and consumers.

  • @capoman1 59 minutes ago

    26:03 This tool could matchmove a green-screen recording and allow the replaced background to be 3D instead of a 2D image.

  • @lalropekkhawbung 1 day ago +12

    18:21 now this is game changing 🗣️🗣️

    • @iubankz7020 17 hours ago

      fr i thought more people would be talking about this

    • @pepaw 17 hours ago

      Is it open source?

  • @Will-kt5jk 1 day ago

    15:00 - forget the colours, it added clouds reflected in the glass (but the car’s motion seems off)

  • @tomaalexandru1735 22 hours ago +1

    Unfortunately, the "Subject Reference" feature is no longer available for free Hailuo AI users. I tried to use it for the first time and, after being in the queue for a few hours, I got a message that the generation failed, and the Upgrade button appeared instead of credits.

    • @yoagcur 9 hours ago

      It's working now. Looks like it cannot handle complicated prompts yet

  • @zengrath 1 day ago +4

    Cool stuff. VideoAnydoor looks fun, though the code isn't out yet. I've seen that the 5090 would be about 2x the performance of the 4090 but without a whole lot more VRAM. That Nvidia Digits computer, on the other hand, while it is $3,000, is intriguing for the fact that it processes everything externally. I guess we'll have to wait to find out more about it, but if it handles insane tasks and massive models that even a 5090 can't, I'll be keeping an eye on it. I'm not too thrilled to pay $2,000 for a 5090, as I feel it's not enough of an upgrade for the price; 2x speed is nice, but the VRAM seems like not a huge step up. And the fact that the Digits PC can likely run without using up my main PC's resources, meaning I can play games and such while it does its thing, sounds amazing. Still, $3k is a lot to spend on just being an AI enthusiast... it's not like I have a job using AI to make things, but who knows, one day that's possible.

    • @theAIsearch 1 day ago

      thanks for sharing! things are happening so fast

  • @user-cz9bl6jp8b 16 hours ago

    What is the SPAR3D Hugging Face space link where you can try it? I can't find it.

  • @WayOfTheZombie 10 hours ago

    TransPixar is a game changer for YT creators

  • @TheBann90 21 hours ago

    Kling already has a clothes swapper. The only tricky part is that the new clothes always need a white background, so you might need Photoshop to get it working perfectly. The consistent-character thing is also old news, available in Midjourney and Flux for a long time now. The real news would be if we could take a character of our choice and add it into an old image. Or maybe someone here knows where I can find that?

  • @edmitry 1 day ago +1

    It seems to me that the polygonal representation of 3D models will soon become a relic. There will be something like a reference point cloud, and the neural network will build everything else in real time.

  • @leandrogoethals6599 23 hours ago

    How much VRAM do you need for StereoCrafter? I see that it also requires 3 other models.

  • @greatestone4eva 31 minutes ago

    This is OK. Not bad, and it might save some time on photogrammetry for video-game asset creation. I think the HyperDAG models are still better for 3D model customization in real-world uses, so it'll be cool to see the applications there.

  • @zephyros1938 1 day ago +4

    Woohoo, now we're taking jobs from 3D modellers too!

  • @Binary0Penguin 1 day ago +9

    The Only YouTuber Whose Videos I Wait For ❤

  • @yikesAnTuAn 10 hours ago

    Is this available for the public to use?

  • @Adventures_EC 18 hours ago

    The last one is still impossible to try out. Hopefully soon

  • @thewatersavior 20 hours ago

    Bring-your-own-model plus home model hardware seems like a compelling case for small businesses trying to cut their consumption costs.

  • @BuioPestato 21 hours ago

    Hailuo's new feature has two problems: 1) it changes the clothes; 2) it doesn't let you choose the format. I, for example, mostly work with 9:16, not 16:9.

  • @willywychtyg 1 day ago +3

    In under 5 years we'll have Judge Dredd robots roaming the streets!

    • @jaysonp9426 1 day ago +1

      OpenAI: "I AM THE LAW"

    • @Yorkshire2024 1 day ago +1

      @@jaysonp9426 ...and the law won

    • @marianpe5773 23 hours ago

      You mean Terminator or RoboCop?

    • @tiergeist2639 22 hours ago

      I guess we're no further than 2 years away from having Terminators. 3 years max.

  • @thewatersavior 21 hours ago

    Interesting... gaze is interesting. I wonder if it retains character continuity as characters enter and leave scenes. It would be annoying to have to mash up character scene data.

  • @tshuqwud1693 9 hours ago

    O.M.G. I FINALLY know which voice you remind me of. Please tell me: are you SignsOfKelani?

  • @Metapharsical 1 day ago

    24:49 tbf, seeing 3 adjacent coffee shops in the background made me double-take for a second, until I realized just how common that is in the real world 😉

  • @TheBann90 21 hours ago

    I wonder how much we can do with the 5090 by using FP4, and whether that's enough to run a 400b model, for example.
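
    A quick back-of-the-envelope check in Python, assuming 4 bits (0.5 bytes) per parameter and ignoring context/KV-cache overhead:

        # Can a 400b-parameter model fit in a 32 GB RTX 5090 at FP4?
        params = 400e9
        bytes_per_param = 0.5          # FP4 = 4 bits per weight
        weights_gb = params * bytes_per_param / 1e9
        print(weights_gb)              # 200.0 GB of weights alone vs. 32 GB of VRAM

    Even at FP4, a 400b model needs roughly 200 GB for the weights alone, far beyond a single 5090; FP4 on that card is more plausibly useful for models in the tens of billions of parameters.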

  • @crippsverse 1 day ago +3

    I'm wondering how Blender will survive AI

    • @Rilex037 1 day ago +1

      None of these 3D-generated objects can be used straight out of the AI gen

    • @crippsverse 1 day ago +2

      @@Rilex037 Not yet but it won't be long

    • @marianpe5773 23 hours ago

      @@crippsverse Like everything you can make with a computer

  • @pdjinne65 23 hours ago

    I see applications of the gaze decoder in certain areas of Brussels or Paris

  • @scetchmonkey007 1 day ago

    Can Gaze-LLE be used to control eye direction when generating AI animation, or to alter it and re-render from existing video? That would be a great amount of control that most video generators lack.

    • @theAIsearch 1 day ago

      no, it currently only detects gaze direction. but it'd be nice if there was an AI that can control gaze!

    • @scetchmonkey007 1 day ago

      @@theAIsearch Yeah, I was just surmising what that technology could do in the future. Once you have the ability to detect gaze direction, the next step is controlling it. There is next to no control over AI animations for stuff like that; when we get that control, it will change the industry.

    • @fluffsquirrel 1 day ago

      Nvidia Broadcast made that eye-contact software where your eyes always look at the camera. Maybe Nvidia will release something like that, but with this as the control

  • @rogerruiz1801 19 hours ago

    Love your content. I wish I could share it with my family, who don't speak English at all, so it would be cool if you could use YouTube auto-dubbing

  • @Trvvo 1 day ago

    I can't find the Hugging Face space for the 3D model one

  • @ItzzzDre 13 hours ago

    Can you please make an updated version of the "This free AI Text-to-Speech is insane!" video? Literally everything has changed 😢

  • @1conscience0dimension 16 hours ago

    Imagine a graphics card that directly contains the node structure of a pre-trained AI, capable of undergoing modifications for personalized learning or through training programs. It would essentially be like a brain. The ultimate challenge would be to make it consume as little energy as our own brain does. In short, it would be a graphics card that is itself a neural network.

  • @luciengrondin5802 1 day ago

    Which paper made the Mona Lisa model?

  • @HasanAli-kd2vx 1 day ago

    What are the common technologies, languages, and libraries used for these kinds of open-source AI tools?

    • @isitanos 18 hours ago

      Python, PyTorch, CUDA. C, C++, and Rust for high-performance tools. Among others.

    • @HasanAli-kd2vx 17 hours ago

      @isitanos thank you

  • @Pawel_Mrozek 11 hours ago

    By the way, on Hugging Face there is already an "Image to 3D Asset with TRELLIS" generator, which is superior in quality to Stability's 3D solution; however, it is quite a bit slower.

  • @VaibhavShewale 22 hours ago +2

    this CES was damn amazing

  • @ajalipio1 19 hours ago +1

    This is my new fave AI channel. Subbed!!

    • @maelstrom2313 19 hours ago +1

      Yeah, I get all my AI breaking news from this channel. It's great. I usually end up downloading or bookmarking new models for each video.

  • @superjaykramer 1 day ago +9

    I hate tech that I can't use; demos of research are useless

  • @serikazero128 20 hours ago

    I want to know more about the "Digits" thingy, because to build a PC that runs LLMs and other models, I need to spend around 5k euros to run heavily quantized versions of 70b models.
    If this thing costs 5k and I can run Windows on it + models -> I want it.........
    The main limitation right now is VRAM/GPU price. And obviously, electricity.
    To run 200b models you need around 100 GB of VRAM for the quantized forms, but usually you will need around 200-400 GB of VRAM for the more "normal size" versions.
    A 5090 is priced by NVIDIA at $2,000 without VAT. That means for consumers it will be around $2,700, give or take $200. And that has 32 GB of VRAM.
    So, to "run" 200b models in an "acceptable" fashion, you need around 3 to 4 of these 5090s, or 5-6 of the older-generation 4090s, or 8 to 9 of the RTX 4080/4070.
    So I'm very curious how much such a device would cost, since just the GPUs for the above-mentioned setup would cost around $10,800.
    And this doesn't take into account the massive electricity cost you will have to deal with, nor the rest of the build and the cooling systems.
    Seriously, the moment we can easily run 70b models locally, things are going to be insane.

    • @marsrocket 16 hours ago

      He said it is priced at $3000 each. With two of them you can run a 405b model.

    • @serikazero128 15 hours ago

      @@marsrocket Yes and no. The prices at CES are without VAT (value-added tax), which means you as a consumer will also pay that, usually +20% on the price of the item.
      Also, NVIDIA does not sell directly to consumers; that's their business model. They sell to partners, and those partners then sell to consumers, which means you will pay more $$$ for the Asus/ROG/Gigabyte/MSI/etc. branding on that device.
      Which means the more realistic price of such a device is around 4k $/euros.
      And now regarding the 405b and other models out there: there are different quantization levels.
      AI stores the meaning of words as numbers, usually real numbers such as 1.1349849, etc.
      Quantization methods reduce the length of those numbers in memory.
      So let's take a 405b model to help you understand what this means. We shall take the very popular Llama 3.1.
      To run this model with no quality loss, at floating-point 16 size in memory, you need a total of 812 GB of memory (RAM or VRAM; VRAM is faster).
      The NVIDIA mini supercomputer is advertised at around 128 GB of memory. Also, said device doesn't have separate CPU memory or anything, so you will run everything, including your OS, in that 128 GB.
      You don't have to be a genius to realize that 128 + 128 doesn't equal 812 GB.
      So, does it run a 405b model?
      Here's where quantization comes in: by reducing the memory space required (the bytes used to store numbers, because AI is just math, a lot of math), we get 8-bit or Q8 quantization. For the Llama 3.1 405b model, this requires 431 GB of memory to run, which is way better than the previous 812 GB.
      We also have smaller quantization levels, such as 405b-instruct-q4_0.
      This reduces the precision of the AI model once more. However, the memory required is now just 229 GB.
      And this can indeed work if you combine two of those mini computers.
      However, as mentioned, it comes with the caveat that your AI model is now pretty heavily quantized. To make it easier to visualize:
      The FP16 model has 16 lines of text defining what every word means.
      The Q8 has 8 lines.
      The Q4 has 4 lines.
      It's much easier to have an accurate and precise AI the more memory you use to store its data.
      And this memory usage is at a 2k context window. The larger the context window, the more memory you need. I don't know the exact formula for how much it increases, though.
      Now let's move to a smaller model, Llama 3.1 70b:
      70b-instruct-fp16 requires 141 GB of memory at the 2k context window.
      So, as you can see, this mini computer lacks the ability to run even a 70b model at full precision.
      Is it still interesting? Yes, definitely; I'm going to keep an eye on it. It sounded AMAZING the first time I heard about it. A bit less amazing the more I looked into it, though.
      If it runs at only 7 tokens per second, then sadly it's not worth it.
      70b-instruct-q8_0 is 75 GB, and a Q8 quantization is usually still very useful as a model, without as big a quality loss as, say, a Q2 or Q3 model.
      But for this to be useful it should have at least 20 to 40 tokens/s of inference speed.
      For reference, my Asus laptop with 32 GB of RAM and no discrete GPU can run Phi-4 at Q8, a 14b model that requires 16 GB of memory, at 3.48 tokens/s. It's slow.
      Meanwhile 8b-instruct-q4_0, with 4.7 GB of RAM required, runs at 9.25 tokens/s.
      Again, this is the speed on my laptop with just the CPU. No NVIDIA acceleration. No VRAM.
      So yes, it's going to be an interesting device, but let's not assume it's going to run the entire 405b model on 2 of these, or a 200b model on one of them as claimed during the presentation.
      We don't even have enough memory for a full 70b model. You will more likely run a Q8 70b model on this device. I just hope the speed is 40 to 60 tokens/s.
      Then I'm buying one of these.
      Sadly, there are barely any YouTubers who review speeds for AI and stuff.
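
      The memory arithmetic above boils down to a rule of thumb: bytes ≈ parameter count × bytes per parameter, times some overhead for runtime and context. A minimal Python sketch of that estimate (the 1.2 overhead factor is an illustrative assumption, not a measured value; real GGUF quants also carry scale metadata, so files run a bit larger):

          # Rough memory estimate for running an LLM at a given quantization level.
          def estimate_memory_gb(params_billions, bytes_per_param, overhead=1.2):
              """Weights * overhead (KV cache, runtime, OS headroom), in GB."""
              return params_billions * bytes_per_param * overhead

          for name, params in [("Llama 3.1 405b", 405), ("Llama 3.1 70b", 70)]:
              for quant, bpp in [("fp16", 2.0), ("q8", 1.0), ("q4", 0.5)]:
                  print(f"{name} {quant}: ~{estimate_memory_gb(params, bpp):.0f} GB")

          # 405b: fp16 ~972, q8 ~486, q4 ~243 GB; 70b fp16 ~168 GB. Same ballpark
          # as the figures quoted above (812/431/229 and 141 GB, which assume less
          # overhead); either way, only the q4 405b fits across two 128 GB boxes.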

  • @High-Tech-Geek 1 hour ago

    33:02 why are there 3 different coffee shops all right next to each other??
    Luckin Coffee
    Manner Coffee
    Starbucks Coffee

  • @jacobheinz8236 1 day ago +1

    Have you noticed that every product suddenly claims to have AI? It's the new snake oil.

  • @diegoleal422 11 hours ago

    best AI news, thanks!!!!!

  • @fluffsquirrel 1 day ago

    14:02 Yes you're right, he's Nemo. That is 100% a Nemo

  • @sentinelah7 1 day ago +1

    Artificial intelligence is advancing so quickly that it doesn't even take a new video for a substitute to appear that does the job better; it happens within the same video.

  • @HiCARTIER 1 day ago +1

    Now you can sculpt things out of clay and import them as 3D objects :D

    • @theAIsearch 1 day ago

      cool idea!

    • @Zuluknob 1 day ago

      You have been able to do that for years with photogrammetry or a 3D scanner.

    • @JesusPlsSaveMe 1 day ago

      @@theAIsearch
      Where are you going after you die?
      What happens next? Have you ever thought about that?
      Repent today and give your life to Jesus Christ to obtain eternal salvation. Tomorrow may be too late, my brethren 😢.
      Hebrews 9:27 says "And as it is appointed unto man once to die, but after that the judgement

  • @synchro-dentally1965 1 day ago

    Tip for StereoCrafter: if you create a side-by-side, you can just cross your eyes to see the quality.

    • @fluffsquirrel 1 day ago +1

      Or, oppositely, you can let your eyes go out of focus and view it in parallel, if you make it small enough, like viewing on a phone screen.
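
      Incidentally, the two free-viewing methods need opposite layouts: cross-eyed viewing wants the right-eye view placed on the left, while parallel viewing keeps the natural order. A minimal Pillow sketch (the file names are placeholders, not anything from these comments):

          # Build a side-by-side stereo pair for cross-eyed or parallel free-viewing.
          from PIL import Image

          def make_sbs(left_path, right_path, out_path, cross_eyed=True):
              left, right = Image.open(left_path), Image.open(right_path)
              if cross_eyed:                 # swap views for cross-eyed viewing
                  left, right = right, left
              sbs = Image.new("RGB", (left.width + right.width,
                                      max(left.height, right.height)))
              sbs.paste(left, (0, 0))
              sbs.paste(right, (left.width, 0))
              sbs.save(out_path)

          make_sbs("eye_left.png", "eye_right.png", "sbs_cross.png")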

    • @JesusPlsSaveMe 1 day ago

      @@fluffsquirrel
      *Revelation 3:20*
      Behold, I stand at the door, and knock: if any man hear my voice, and open the door, I will come in to him, and will sup with him, and he with me.
      HEY THERE 🤗 JESUS IS CALLING YOU TODAY. Turn away from your sins, confess, forsake them and live the victorious life. God bless.
      *Revelation 22:12-14*
      And, behold, I come quickly; and my reward is with me, to give every man according as his work shall be.
      I am Alpha and Omega, the beginning and the end, the first and the last.
      Blessed are they that do his commandments, that they may have right to the tree of life, and may enter in through the gates into the city.

  • @ShifterBo1 23 hours ago +1

    16:30 expected.

  • @thewatersavior 20 hours ago

    So.. did they reveal how Jensen did the shrinking computer trick?

    • @Lyle-In-NO 19 hours ago

      I have the same question. I'm sure many others do, too.

  • @RequiemForPAIN 22 hours ago +3

    With proper 3D modelling and motion capture, we won't even need consistent characters in text-to-video, and camera/character manipulation won't be needed either. We'll have absolute control in environments like Blender, and we only have to run an img2img post-processing pass on each frame afterwards with a Flux workflow and a script or something like that. So the 2D-to-3D stuff is what I'm most excited about. Of course, the capability of running 200B-parameter models locally is going to be a game-changer like no other in history. We're not as hyped as we should be, not even close.
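
    For what it's worth, that per-frame pass could look roughly like the sketch below, assuming a recent Hugging Face diffusers build whose AutoPipelineForImage2Image supports FLUX checkpoints (the model ID, strength value, and frame paths are illustrative assumptions, not anything the comment specifies):

        # Restyle rendered Blender frames one at a time with an img2img pass.
        import glob
        import torch
        from PIL import Image
        from diffusers import AutoPipelineForImage2Image

        pipe = AutoPipelineForImage2Image.from_pretrained(
            "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
        ).to("cuda")

        prompt = "cinematic photo of the same scene, film grain, natural lighting"
        for path in sorted(glob.glob("render/frame_*.png")):
            frame = Image.open(path).convert("RGB")
            # Low strength preserves the 3D layout and character placement.
            out = pipe(prompt=prompt, image=frame, strength=0.35).images[0]
            out.save(path.replace("render/", "stylized/"))

    One caveat with a naive per-frame pass is temporal flicker between frames, which is exactly the consistency problem the video models in this roundup are chasing.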

  • @polodog7458 1 day ago

    Great video! I love how many cool AI tools you're showing off in this one

  • @ButterSock. 1 day ago +1

    The Gaze LLE AI is an interesting concept but it has a long way to go.

    • @fluffsquirrel 1 day ago

      I imagine it would be fun, though, to roll your eyes and watch it spin. It seems to calculate each frame, so hopefully it'll update often 😂

    • @IceMetalPunk 1 day ago

      I'm still trying to figure out a decent use case for this that isn't spying...

    • @ShifterBo1 23 hours ago

      @@IceMetalPunk Real, it's so stupid

  • @switchbackfive 18 hours ago +1

    14:55 AI is so good, it knows a BMW driver is going to be on the wrong side of the road around a blind corner! 🫣

  • @rotors_taker_0h 21 hours ago

    What the hell, was that Mona Thunberg at the beginning?

  • @lalropekkhawbung 1 day ago +1

    Let's gooooooo 🗣️🗣️🗣️

  • @shake6321 1 day ago

    Is AI really progressing, or do we find a problem, get decent at it but not quite good enough to make any impact on society, and then move on to the next problem for AI to solve 90% of the way, but not well enough to replace a human?
    Is there any problem generative AI is solving 100% by itself?

  • @thewatersavior 20 hours ago

    Can it walk with a limp tho..

  • @ssdfhtrs 1 day ago +1

    Every new video makes me more depressed that AI is about to take more jobs.

    • @isitanos 18 hours ago

      The same thing happened during the industrial revolution... but I agree that the transition may be rough.

  • @Grumgo2 18 hours ago

    Ahh, welcome back, Clippy

  • @phoenixfire6559 19 hours ago

    SPAR3D (shown in this video, by Stability AI, for making 3D models from 2D images) is pretty poor in comparison to TRELLIS (see a previous AI Search video). The models are all flat and lack detail. It's not usable for anything yet, but it's a start, I guess.

  • @tdplayert 1 day ago

    Interesting comment in that video: "reward engineering is all you need"

  • @KevinLarsson42 18 hours ago

    Soon we will have robot girlfriends who can live as avatars in our electronics

  • @kirlianpictures 19 hours ago

    Excellent!

  • @Sujal-ow7cj 1 day ago

    bangers 🙈🙉

  • @aggressiveaegyo7679 1 day ago

    Hopefully this blessing of a flood of local AI won't end soon.
    3D video? It seems Nvidia could adapt its Reflex 2 to create 3D video in real time: create a depth map, shift the angle by the distance between the eyes, and fill in the empty spaces behind close objects.
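
    That recipe (depth map -> horizontal shift -> hole filling) is classic depth-image-based rendering. A naive NumPy sketch, assuming a depth map normalized to [0, 1] and an illustrative maximum disparity; real implementations (including whatever Reflex 2 does internally) handle occlusion ordering and inpainting far better:

        # Synthesize a second eye's view by shifting pixels in proportion to depth.
        import numpy as np

        def synthesize_view(image, depth, max_disparity_px=16):
            """image: HxWx3 uint8; depth: HxW floats in [0,1], 1.0 = nearest."""
            h, w, _ = image.shape
            out = np.zeros_like(image)
            written = np.zeros((h, w), dtype=bool)
            shift = (depth * max_disparity_px).astype(int)  # near shifts more
            for y in range(h):
                for x in range(w):              # no z-ordering: last write wins
                    nx = x - shift[y, x]
                    if 0 <= nx < w:
                        out[y, nx] = image[y, x]
                        written[y, nx] = True
            for y in range(h):                  # crude hole filling: stretch the
                for x in range(1, w):           # background behind close objects
                    if not written[y, x]:
                        out[y, x] = out[y, x - 1]
            return out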

  • @xenn2996 1 day ago +3

    21:36 very expensive

  • @MagnusMcManaman 19 hours ago

    Does the author take most of his information on AI from China? It looks a bit like it.

  • @Squirrel4Gir 15 hours ago

    It’s only a matter of time before they……. I’ll patent that idea first!

  • @amiri7392 18 hours ago +1

    Open weights is NOT open source. Just because you can run Windows on your own computer doesn't make it open source.

  • @jantube358 18 hours ago

    Gaze-LLE gaze estimation could be used to track how men look at women and how women look at men, for example in talk shows with attractive guests. 😁

  • @VanSocero 1 day ago

    2024 was great 👍. But 2025 is going to be on 🔥🔥🔥

  • @asifaghaXR 23 hours ago

    SPAR3D ("Stable Point-Aware Reconstruction of 3D Objects from Single Images") is not that great. Aside from the speed, the quality of the 3D models is still way behind, unfortunately!

  • @alexshapiro9841 28 minutes ago

    Blender artists, please put fries in the bag bros

  • @__________________________6910 1 day ago

    4:32 Online interviews and online exams are in danger for students and freshers

    • @fluffsquirrel 1 day ago

      I mean, they already have paid eye-tracking software. If they wanted to track your vision, they'd already have done it

    • @__________________________6910 23 hours ago

      @@fluffsquirrel hummmmmmmmmmm

    • @fluffsquirrel 16 hours ago

      @@__________________________6910 Maybe I'm misunderstanding your comment, but I believe they already do this for colleges and universities. One popular proctoring service, Honorlock, claims to use AI to detect whether the test-taker is in frame or not, and if it detects suspicious behavior, it alerts the human proctor to investigate.

  • @Lv7-L30N 17 hours ago

    Thank you

  • @Johan-rm6ec 19 hours ago

    Finally, I found Mona Lisa.

  • @greatestone4eva 29 minutes ago

    The problem is that now we'll have extremely realistic deepfakes from uncensored models.

  • @Saiyajin47621 1 day ago

    I think very soon we'll really be able to play Yu-Gi-Oh just like in the anime!

  • @sirhammon 5 hours ago

    Krea AI real-time images going straight to real-time 3D models, edited in your VR headset. Speed Blender, anyone?

  • @sugaith 18 hours ago

    this is INSANE

  • @N1ghtR1der666 1 day ago

    Dude, why is Optimus even on your mind? It's not a robot, it's a remote control, and not a particularly good one

  • @SolisMortis 22 hours ago

    When is your news ever not huge?

  • @jatinderarora2261 1 day ago +1

    Mind blowing. Thanks for sharing.