Exactly what we need: it learns from watching us do the work so that in a few years we're out of our jobs. YAASSS
It seemed like all your questions/answers could have been handled via voice chat on GPT without it seeing the screen, since the questions were pretty standard and didn't relate to anything deep within a scene that would require much visual knowledge.
At least it is a teacher rather than replacing the user.
Replaces teachers, though, so it's still replacing humans.
@macIain yeah that's true.
An AI can click on a screen and can also type in numbers/text to change values.
An AI can look at the results of what it did by looking at the screen.
So it can, and will, replace users.
When it comes to making images or video, it won't use Blender; it will directly imagine an image or an entire video inside its artificial brain and display it. It will understand fluid sim, soft and rigid body sim, and light transport sim pretty accurately, all inside one big piece of software: an artificial brain...
But AI does already have the ability to use Blender.
Check out Google's Veo 2, for instance.
this is wild
Shut the front door....this is crazy!!
I was planning on giving this a try today. Last night I played some games with it. It really feels like you have a personal tutor for anything you're doing on screen now. What a time to be alive!
I'm wondering if data is also being collected here to train the AI.
Yes it is. Lol. But I still want it.😂
Woah. This is pretty huge. Now if we can have Google take the wheel and actually do what we ask it to, that will be next next level.
This is wild indeed! Finally, an AI tool that I can be proud of.
I hope it gets more development to be as intuitive in Houdini as it is in Blender.
I tried it. It says it can't teach complex software.
Amazing! How well does it work for Houdini?
I just tried it with Houdini, and guess what, it can't even guide me through creating a normal explosion. So my guess is it still needs a lot of development.
@@jotoderjosh My guess is that YouTube is flooded with Blender tutorials compared to other 3D software, which is why it's so good at it.
@@Hi-HK My guess is that the instructions are still too simple. I want to see AI guide me through retopology work.
My guess is that it's just the small Gemini Flash 2.0 model. What happens when it's the Gemini Pro or Ultra 2.0 model?
Dang, this is wild. Kinda scary but kind of amazing as well. Thanks for sharing, man!
I just tried it out, the AI can't really see or analyze the screen. The screen sharing functionality is limited for other humans collaborating. That's the AI's answer.
We all are doomed
For WHAT? 🤔👀
IKR! AI is still useless when it comes to medical stuff
Interesting stuff
Ok so we are now the machines and the robots😂!
I'm facing an issue with Gemini. Whenever I share my screen with Gemini, it works fine initially, but after a few minutes, it suddenly stops responding. Is this a bug affecting everyone or just me? Is there any limitation like sharing the screen for more than two or three minutes?
Looks good so far. Hopefully it will be useful for more difficult tasks.
Matt Damon??
Damn it! I only allot $20 on my ai stuff and google and OpenAI won’t calm down releasing cool stuff! 😂
It is replacing you without you realising... and you are paying for it.
That's amazing!
It seems that teachers will soon be unnecessary and YouTube lessons will not be so relevant.
Prob only a month or two away from controlling software while we just tell it what to do..
That's what I'm hoping for sure.
AI Agents are already doing this.
@@WanderlustWithT I was thinking with audio; I know you can type it. I can imagine small models that know the entire software, which would be cool.
oh so I can stop watching youtube tutorials!
As much as I love an assistant for learning, I wonder how many resources and how much power it will waste computing such tasks. What looks like a great addition is not efficient enough for this. Instead of playing around with AI to generate garbage and destroy the income of hard-working humans, we should think about what we do with this technology and where it would best fit for the greater benefit of all.
I'm a 3D artist. It gives me an edge in my work, and I personally find this quite exciting. I hope it can one day do everything for me simply by asking it: "animate this character to jump over a wall... make this building explode."