"There are some architectures that may be important - that just because they're not easily expressed as tensors in Pytorch or Jax or whatever, that people shy away from because you only build the things that the tools you're familiar with are capable of building. I do think there is potentially some value there for architectures that, as a low-level programmer that's comfortable writing just raw CUDA and managing my own network communications for things, there are things that I may do that others wouldn't consider." "One of my boundaries is that it has to be able to run in real time. Maybe not for my very first experiment, but if I can't manage a 30hz/33-millisecond sort of training update for a continuous online-learned algorithm, then I probably won't consider it because I do consider that necessary for - even while you might grow in a simulated world at some point people aren't going to really buy it until they're having a Zoom call with the AI... So that ability to run in realtime goes against the current grain."
I’m a little late to the game on seeing this announcement. However, CONGRATS and GOOD LUCK! Having followed John’s posts in the first year he started his own journey into AGI I’m thrilled to see his continued contributions to this field and especially in Canada nonetheless :-)
John Carmack brought real-time 3d to the desktop. Hopefully he can bring LLMs, and stable diffusion to the desktop. It's a race. We have to get it into everyone's hands before governments can ban it. That's what we did with encryption; that's what has to happen here.
They should've let devs and academia ask questions instead of the media.
"There are some architectures that may be important - that just because they're not easily expressed as tensors in Pytorch or Jax or whatever, that people shy away from because you only build the things that the tools you're familiar with are capable of building. I do think there is potentially some value there for architectures that, as a low-level programmer that's comfortable writing just raw CUDA and managing my own network communications for things, there are things that I may do that others wouldn't consider."
"One of my boundaries is that it has to be able to run in real time. Maybe not for my very first experiment, but if I can't manage a 30hz/33-millisecond sort of training update for a continuous online-learned algorithm, then I probably won't consider it because I do consider that necessary for - even while you might grow in a simulated world at some point people aren't going to really buy it until they're having a Zoom call with the AI... So that ability to run in realtime goes against the current grain."
I’m a little late to the game on seeing this announcement. However, CONGRATS and GOOD LUCK! Having followed John’s posts since the first year he started his own journey into AGI, I’m thrilled to see his continued contributions to this field, and in Canada, no less :-)
If Carmack is on the job, I expect AGI in a year! He is a true genius.
He said himself that he expects AGI around 2030 with the probability of a coin toss, and by 2050 for sure.
You can do it, John! Unrelated, but I wish they'd stick with simple backgrounds behind speakers.
As if John Carmack wasn't a big name already, he's really gonna hit it big time if they get this done.
Missed the chance to say BFG
Terrible audio quality, maybe run it through an AI filter to clean it up
How do I work with JC on this? I have some pretty good ideas about how to help.
John Carmack discusses learning about clipping … into a mic that is aggressively clipping
What's clipping?
‘I’m sorry, LightningAussie, I’m afraid I can’t do that.’
John Carmack brought real-time 3D to the desktop. Hopefully he can bring LLMs and Stable Diffusion to the desktop. It's a race. We have to get it into everyone's hands before governments can ban it. That's what we did with encryption; that's what has to happen here.
Those are already on my desktop... (16 GB VRAM here). I can load a 13B model easily (6 GB of VRAM can load a 7B model).
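For anyone wondering how those numbers pencil out: a 4-bit quantized model needs roughly half a byte per parameter for the weights alone, ignoring KV cache and activations, so real usage runs a bit higher. A rough estimate (assuming 4-bit quantization, which the comment above doesn't specify):

    def weight_gb(params_billion, bits_per_weight=4):
        # billions of params * (bits / 8) bytes each, expressed in GB
        return params_billion * bits_per_weight / 8

    print(weight_gb(7))    # ~3.5 GB of weights -> plausible in 6 GB of VRAM
    print(weight_gb(13))   # ~6.5 GB of weights -> comfortable in 16 GB of VRAM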
49:10 bud looks pissed 😂
Since the brain is essentially a quantum computer, how do you even do this without quantum computers? Seems like that is the way to go.