An AI constantly analyzing and watching video footage as it comes through: that opens up terrifying possibilities for surveilling people in workplaces and more generally.
Yes. It can be done, so it will be done, especially if only some additional software needs to be rolled out under the guise of IT security.
People are already surveilled in their workplaces; that has been going on for years lol
Imagine the ruthless efficiency AI could bring.
absolutely terrible not good
You're only worried about that because you live in a dystopia. Fix your society, then it'll become a valuable tool.
Wow, great news summary, best audio! Thank You!
If it wasn't for you I would never have gotten so interested in AI. Your vids have helped keep me motivated, excited, and productive.
One of the reasons I love Friday is because your weekly AI news video comes out. I really wait for it all week. The content you do is incredible, and I'm just very grateful for that. Greetings from Chile! 🇨🇱
The issue with the 50 series is that they're marketed as offering 2x the 'performance' of the last generation, but NVIDIA is using the term 'performance' very misleadingly. The raw power isn't anywhere near 2x.
Is it? Performance and power are different metrics. If your eyes can't tell the difference, what's the discrepancy? If two cars have a 50 hp difference but the one with less hp does 0-60 faster, that sounds like more performance to me.
They're selling software.
Ya ha it really does... really, really..
I honestly don’t get this take. We are talking about “video games.” None of the frames are “real.”
We should look at the actual frames it generates and compare them with the "real frames." Until then, it's kind of up for debate.
@@aalluubbaa Well let me put it this way: the issue isn't with the concept of generated frames vs real frames; it's with the way NVIDIA markets it. Claiming 2x performance without making it clear that it relies heavily on techniques like frame generation (rather than raw hardware power) feels disingenuous.
(This is beside the point, but...)
Also what do you mean by 'none of the frames are real'? All frames rendered by a GPU are 'real' outputs of its hardware, whether traditional rendering or AI generated. The difference is how they’re produced and whether the added frames actually represent true performance improvements, like lower latency or higher responsiveness.
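To make the latency point in this thread concrete, here's a toy sketch (made-up numbers; not NVIDIA's actual pipeline) of why multiplying displayed frames via interpolation doesn't improve responsiveness the way faster raw rendering would:

```python
# Toy model: why "2x FPS" from frame generation isn't 2x responsiveness.
# All numbers are made up for illustration; this is not NVIDIA's pipeline.

RAW_RENDER_MS = 16.7  # hypothetical time to render one real frame (~60 FPS)

def raw_only() -> tuple[float, float]:
    """Pure rendering: displayed FPS and input latency both track render time."""
    return 1000 / RAW_RENDER_MS, RAW_RENDER_MS

def with_framegen(generated_per_real: int) -> tuple[float, float]:
    """Interpolated frames multiply the displayed FPS, but input is only
    sampled on real frames, and interpolation has to buffer the *next*
    real frame, so latency does not improve (it can even get worse)."""
    fps = (1 + generated_per_real) * (1000 / RAW_RENDER_MS)
    latency_ms = 2 * RAW_RENDER_MS  # hold back one real frame to interpolate
    return fps, latency_ms

for name, (fps, lat) in [("raw only", raw_only()), ("frame gen 4x", with_framegen(3))]:
    print(f"{name:>12}: {fps:6.1f} FPS displayed, ~{lat:.0f} ms input latency")
```

Under these assumed numbers, displayed FPS quadruples while input latency roughly doubles, which is the gap people mean by "fake frames" vs raw performance.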
It was great to briefly say hello whilst in the queue for the keynote
Have watched your channel for a very long time. My favorite AI channel because you always tell it like it is. The common man's opinion; not always hyped about EVERYTHING and genuinely considers long term effects.
1:18 that slide is completely misleading - read the fine print at the bottom.
I read it and still don't understand what is wrong with it
@@jksdo88 It's not an apples to apples comparison.
The new card is using a new frame generation method that creates more frames - they aren't real frames.
The comparison that would make sense is not to use any frame generation at all.
@@metamon2704 About a minute later, Matt talks about exactly that.
@@metamon2704 The fake frames have microplastics. Stick to natural grown frames.
@@metamon2704 bro speak in simpler terms I'm not a nerd
Thanks Matt. Always enjoy seeing you every week and have great admiration for the amount of work that you put into all of your endeavors. Sometime you'll have to share with us where you get all the energy. 😅
As with Sam Altman, I believe I can run a marathon and I am going to turn my efforts into running an ultra marathon. Granted, I haven't accomplished either but I believe I know how to.
1) You might be right on
2) If he's made as good a plan as he can on the former (AGI) and set up teams to work on it... why shouldn't he then start looking for ideas that allow for ASI? Until AGI results give him data to work with, it makes sense to look ahead if there's little to do while AGI tests are run.
Missed you Saturday. Thanks for the commitment despite all the issues... that stuff is frustrating.
Silo writers should consult AI to tell them that spending an entire season watching Rebecca getting in and out of the water was boring and bad writing.
This is my first time seeing a video for you, and wow, you rock.
Thank you so much . Much awaited .
Thanks. Great update as usual.
Great report! Thank you 🙏
Matt. I feel your pain with regard to re-recording a video. Such is a video creator's life, sometimes, unfortunately.
This one was really well done! Keep up the good work. I am always an audience member for ya!
So keen to get my hands on the personal supercomputer. I just wish I knew how to program it.
Just ask it how to program it.
Hey Matt! Using future NVIDIA NIMs would be a huge time-saver for video editing. Imagine shooting hours of CES b-roll, then generating a full video with just a basic prompt: selecting shots, length, and subjects. You could walk around with a gimbal, create a video in minutes, and upload it to YouTube the same day. All with minimal effort, thanks to NVIDIA NIMs. This is the future of video editing! Right now, AI-powered tools like Resolve's text-based editing are available, but soon smart AI agents (like NIMs, RAGs, etc.) will take this to the next level. Pretty amazing!
glad to hear how the market is growing & expanding so much! love the content! congratulations Matt! thanks 4 sharing!
The most important thing at CES is the NVIDIA news.
You mean the false advertising scam that everyone who actually understands tech is laughing at?
@@B.D.E. you mean you don't know anything about tech. I'm sorry life sucks for you
The last 2 paragraphs of the doc at 16:00 are so important for everyone to read.
Number bigger good, can't wait for hyper super intelligence to finally come out, but I heard they were secretly developing massive hyper super intelligent artificial intelligence that's going to be a million times better. Snakey snake oil. It's just a hype method.
I think the first is also important. AI agents in the workforce have the potential for positive and negative implications. What immediately comes to mind is the impact on the current workforce, e.g., us humans.
Pray that this is just corporate bragging. Otherwise you can line up for a welfare program today and we are all f*cked.
Thank you for always keeping us up to date! As a music producer and audio engineer, I do have one suggestion: please be more gentle with your speech volume. Any modern microphone is able to pick up the audio just fine, even if it is a whisper. This will make the audio of your content less strident. Cheers.
There are AI tools to master and balance audio. Especially, with high bit lossless recording. Cheers!
Just the fact that we are talking about superintelligence means we are getting closer to AGI.
So you wouldn't trust a smart fridge to order your groceries correctly, but AI agents replacing lawyers and doctors, yeah cool bring it on! 😂
Yea
This reminds me of those sci-fi shows: "I've analyzed 1.2 million scenarios and XYZ has a 97.3% chance of success." It's really happening; all the things we've seen on TV about the future are coming to life. Exciting time to exist in the world!
Thanks for this great overview and update, Matt! Keep up the great content! 👍
Thank you, awesome video!
I hope you got to check out the Aptera! Would love to see you do a video visiting them in Carlsbad!
Sundays may be even better for "Coffee with Matt" as I like to call your weekly updates! Fridays are fine, but hey - Sundays!
I like Coffee with Matt
We all do @@joefier ! I like my AI summaries like I like my coffee: strong and in company.
Smart fridge is old, remember Korea or Japan showing this off about a score ago.
Nice update!
25:50 - imagine using elevenlabs with your voice to fix the audio, what a practical AI application that would be, lol
Now that Sam Altman has squeezed all the meaning out of the term AGI and left it empty, he's about to do the same with ASI. Even tho they didn't even get close to AGI.
Hey Matt - I noticed you're missing links to the stories you covered. You just have timestamps. Any chance we'll get the links for this episode?
Liking the idea of this project Digits
I thought it was telling of Sam's headspace when he wrote that "we may see the first AI agents "joining the workforce" and materially change the output of COMPANIES". Why COMPANIES and not people? So far it really seems like people/individuals are getting more immediate benefits from AI than any company, but that may just be the lack of visibility to how AI is driving value.
No more links? :(
MATT!! Why did you stop providing links to the articles that you discuss during your AI News videos?? I need these links!
Seems like you should've been able to use ElevenLabs or something to just re-add your audio on the video you shot
I actually tried that but it came out really weird looking. Like I deepfaked myself. It’s also frustrating that ElevenLabs only allows 5-minutes at a time when doing audio to audio.
@mreflow yeah, I hate that when it comes to actual practical things the AI still has trouble with simple items. I've been trying to figure out a workflow to take some pictures that I shot and just do some AI retouching. So far it's been a struggle to just get it to take the same picture and just make some adjustments.
It can redo the whole image into something else but taking the same image and making some adjustments is a struggle
The way he said "A squirrel eatin' a nut!' made me spit out my coffee. lol I'm such a child at times I guess
I was waiting for this video
Yaa same. I was waiting since yesterday morning and finally got it today.
Thanks for the breakdown! I need some advice: My OKX wallet holds some USDT, and I have the seed phrase. (mistake turkey blossom warfare blade until bachelor fall squeeze today flee guitar). What's the best way to send them to Binance?
Matt, always double record your audio.
Audio is usually the problem, I dunno why exactly, it just be like that.
Real raster frames are not the same as generated frames. Number bigger does not equal better. Gameplay will feel like you're drunk.
As someone who has produced and edited a ton of videos over the years, I feel you on the audio issues. It sucks when you put hours into making a video and it's all for naught. It's a great learning experience, at least.
I don't get it
Double check, and check again
Matt doesn't trust tech now!
Done it a few times before. It does really sting and hurt.
no sources to the articles?
I want ASI now! 💪🏻💯
Hey there, Matt. So, in After Effects, you can actually control any green that's left in your video after your shoot. The trouble is, most creators either don't have the tool, don't know it exists, or just never take the time to find out. Ask ChatGPT; I forget the name of the exact effect, but I assure you that it's there. I use it quite a bit. It's slider-based and works really well.
6:23 PDF to podcast is great 👍🏾. 9:30 This virtual simulation system is crazy 😧. 9:30 This is very innovative 😊.
Really nice stuff. As someone who works on this kind of thing in manufacturing and logistics, I do hope they continue to improve heavily on Cosmos, since this first version sadly has too many structural distortions to be usable, but they definitely nailed the style-transfer part of synth-to-real problems.
Funny thing is. In season 4 of the series Westworld they used AI to make a simulation of the world. And boy did that turn out great.😂
Bro I gotta be real here: if you're not being rate-limited and don't have an NDA with Project Astra... you should show more footage of that.
Seriously, that's a gold mine of content.
Always great content, a quick question just out of context - why do you wear headphones?
I'd say because you can't afford feedback into the mic from the speakers, and having no audio monitoring whilst recording isn't a great idea.
@user_375a82 How about setting the recording level and muting the speakers?
Agreed. I never was on Twitter and never will be. If Grok becomes available outside of Twitter, I will have a look.
The VLC news is something I wanted back in 2019😢
2:20 You're probably right about most people not noticing a difference in quality on AI generated frames, but this new tech only works on games using DLSS 4, and at the moment not a whole lot of games even use DLSS 3.5
Thank you :)
The most shocking thing in this video is seeing Jensen without a leather jacket.
The news video you recorded: if there was a transcript, it would be a great experiment to see if ElevenLabs could do a dub with your own voice.
"Engage." and "Teleport me off of this rock".
IYKYK!
Your video is interesting. Thank you. I noticed from the thumbnail that you looked a lot like Riker (Number One from Star Trek: The Next Generation, I think). Not so much in the video, lol.
Omg!!! SO MUCH NEWS!!!🤪
I’m totally down with getting an AI mirror as long as I don’t hear about that flipping “fairest of all” Snow White.
I mean, would it be out of the realm of possibility for the AI to on the fly blur out the wrinkles, do some color correction, and throw in some flattery?
Why do we need NIMs if the models are supposed to be omnimodal?
I don't know about cheaper and cheaper; that has not happened with NVIDIA graphics cards. Actually, the opposite.
if you care about financial freedom do yourself a favor and read The Censored Guide to Wealth
So why is NVIDIA not doubling the VRAM?
Don't get too excited about the 50s series, the scalpers are listening. edit: If the NIM gets something wrong, you can call it a nimcompoop!
Great job as always! Looking forward to Monday's video. Maybe a NIM can be built for lip reading your video... I think Bill Belichick knows someone that has one. Lol
The Virtual Boy used the same technology (in conception) as those smart glasses. Beamed the images directly onto your eye.
All light is beaming into our eyes, so, kinda...
@tobiasmyers3505 The Virtual Boy used mirrors to bounce a single line of LEDs onto the back of the retina. It's the same method an industrial fiber laser uses to move the beam without having to move the whole head and focusing unit. It also relied on how your eyes dissipate the signal: similar to hitting peanut butter with UV light, your eyes don't discharge 100% of a signal instantly. By scanning across the LEDs faster than your eyes can dissipate the signal, you see an image.
Other devices, from FPV goggles to VR headsets, don't beam single LEDs onto the back of your eye.
The Samsung stuff is like The 6th Day with Arnold Schwarzenegger. The cloning movie.
Matt, how come you never talk about Verses AI?
5070 vs 4090 is not comparable if you're talking about VRAM, which is the main reason I bought the 4090 to run AI models in the first place.
Even the 5080 does not have enough.
So for gaming and all, sure, but for running local video-generation models at the highest quality, long durations, etc., no.
Yeah. The 5090 has more VRAM than the 4090: 32 GB of GDDR7 with higher bandwidth, versus 24 GB of GDDR6X.
Or there are the DIGITS mini AI computers. You can couple two together to run Llama 405B, etc., locally, and train, etc.
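As a rough sanity check on those numbers: here's a back-of-the-envelope sketch (the bytes-per-parameter figures and ~20% overhead are assumptions; the 128 GB per DIGITS unit is from the announcement):

```python
# Rough VRAM rule of thumb: weights ≈ params * bytes/param, plus ~20%
# overhead for KV cache and activations. Illustrative estimates only.

def est_memory_gb(params_billions: float, bytes_per_param: float,
                  overhead: float = 1.2) -> float:
    """Estimate memory needed to host a model's weights plus runtime overhead."""
    return params_billions * bytes_per_param * overhead

for model_b in (70, 405):
    for quant, bpp in (("FP16", 2.0), ("4-bit", 0.5)):
        print(f"{model_b}B @ {quant}: ~{est_memory_gb(model_b, bpp):.0f} GB")

# Even a 4-bit 70B model (~42 GB) exceeds a 5090's 32 GB, and a 4-bit
# 405B model (~243 GB) is roughly why two 128 GB DIGITS units get linked.
```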
@23:00 there is a range of lasers that are safe for your eyes; they've been using them in grocery stores for decades.
AI: It looks like a human in sector 3J is grumpy. The police have been dispatched.
VLC news is really nice
If you build a process to go through hours of b-roll and grab the cool parts to combine into short clips and segments, that would be massively cool... if you share it 🙂
Do not over promise anything, Sam.
Matt, can you please give us some advice on which AI glasses you would advise us to get, as a high-tier, high-quality type of AI glasses for daily AI-integrated work/activities? Also with possible API interaction with, e.g., your own automations/workflows. To save our 40+ year old necks and backs 🫤😉 (btw another great video Matt! Sending my colleague engineers to your website and The Next Wave podcast!)
I've actually been looking into ways to create my own AI server, and then linking the open-source Home Assistant device to be the bridge for commands.
Does anyone know if any of these image generation tools can re-use a generated character? For example, if you want to make a kids' book and wish to develop a character and put it in various scenes. Thanks!
Those graphics cards 🤤
I can't imagine how advanced technology will be by 2050; I don't think anyone can have a clue. The things that limit technology tend to be restrictions. Removing those will bring insane tools, but also insane dangers. Part of the deal you can't remove, though, is human nature.
Thanks Matt. Sorry for the 2x work you’ve done for this video.
Just know you are the #1 source for AI. Keep up the good work!
Until I see actual changes to lifestyles and products, AGI isn't here yet.
The final shedding of the hunter-gatherer: your super smart fridge... it takes one look and the user opts for takeaway, as now the magic is gone.
Imagine paying for a cheaper offline subscription for Sora or similar and computing unlimited videos with your dedicated NVIDIA AI computer. Nice, instead of paying for cloud rendering, which is getting more and more expensive.
I bought the most high-end M4 Mac for that purpose, but I have now calculated that a private setup makes no sense money-wise. Running a 70B LLM model around the clock, I get at most 10 t/s, and thus about 25M tokens in 30 days. That would cost between 25 and 50 USD on OpenRouter and be way faster.
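For what it's worth, a quick check of that math (the per-token API price here is a guess for illustration, not a quoted OpenRouter rate):

```python
# Back-of-the-envelope: local 70B throughput vs. paying an API per token.
# The $/M-token price is an assumed figure, not a real OpenRouter quote.

tokens_per_second = 10                    # reported local M4 throughput
tokens_per_month = tokens_per_second * 30 * 24 * 3600
print(f"running 24/7: {tokens_per_month / 1e6:.1f}M tokens/month")  # ~25.9M

assumed_usd_per_m_tokens = 1.50           # hypothetical API price
cost = tokens_per_month / 1e6 * assumed_usd_per_m_tokens
print(f"same volume via API: ~${cost:.0f}/month")                   # ~$39
```

At that assumed rate the API cost lands right in the quoted $25-50 range, so the conclusion holds as long as API prices stay anywhere near that order of magnitude.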
Some of them are really amazing, but they won't be out for many years to come.
simple, add 3-4 cameras to see from every angle :)))
Take a break and play for us a little banjo. Happy New Year!
Hey Matt, I'm a fan but the Nvidia glazing is crazy 😭
Sooo, the million $$ question… is it using just your normal searches or is it pulling in your incognito searches as well?!?
The fact that even in the west you’re considered ballsy if you criticise Putin!? I mean surely it shouldn’t be like that, otherwise one gets owned
Excellent explanation of Nvidia super computer.
1:43 that is a LIE; the 5070's performance characteristics are not even close to the 4090's.
Timestamps (Powered by Merlin AI)
00:05 - Nvidia dominates CES with powerful new 50 Series GPUs.
02:13 - Nvidia's 50 Series GPUs enhance performance with generative pixels and AI capabilities.
06:07 - Nvidia unveils AI blueprints for enhancing workflows and video analysis.
08:03 - Nvidia unveiled new AI models and virtual training environments at CES.
12:10 - Microsoft releases powerful open-source AI models for math and code generation.
14:11 - OpenAI advances toward AGI and ASI, aiming for significant impact by 2025.
18:07 - Adobe's new AI, transPixar, generates videos with transparent backgrounds.
19:51 - Samsung showcases interconnected smart appliances enhancing user convenience.
23:15 - Exciting AI technologies showcased, raising concerns over health and ethics.
25:06 - Technical difficulties delayed the AI news video post-CES.
This is pretty cool. How can i use this for other vids?
Hey Matt - I know the "live" broadcasts are important to you, but please don't cannibalize your historical-style review videos. For instance, your normal CES recap included all aspects/highlights of the event, but you stated that will be included during your next live. Your standard event review videos were succinct and on point. Thanks much :)
Aren’t the Halliday lenses similar tech to Google Glass? Both beamed images into your eyes
If the supercomputer can be a computer itself, that would be perfect.
Plug a monitor into it, or AR glasses like Xreal, or plug in a Meta Quest for creating VR worlds.
It might have Bluetooth for mouse and keyboard.