💓Thank you so much for watching, guys! I would highly appreciate it if you subscribe (and turn on the notification bell), like, and comment what else you want to see!
📅 Book a 1-On-1 Consulting Call With Me: calendly.com/worldzofai/ai-consulting-call-1
🔥 Become a Patron (Private Discord): patreon.com/WorldofAi
🚨 Subscribe to my NEW Channel! www.youtube.com/@worldzofcrypto
🧠 Follow me on Twitter: twitter.com/intheworldofai
Love y'all and have an amazing day, fellas. ☕ To help and support me, buy a coffee or donate to support the channel: ko-fi.com/worldofai - Thank you so much, guys! Love y'all
They must be running an MoE style of parallelization without much "expert selection" for Gemini to have such lucidity at 1M tokens. Tasking still seems to take a back seat to styling (broad system prompting, or limited fine-tuned options, with most of the "engineering" going into safety and agent consistency).
Makes sense though.
Any transformer model kind of peters out after a few thousand tokens of context,
and Mamba might handle that many tokens faster through its segmented, continuously updated state, but it has a harder time jumping the gap between NLP and NLU.
Maybe the next step is a mixture of transformer experts with fast tools built from a mixture of Mamba models plus retrieval-augmented generation tools.
(You bet I wrote it out; the acronym is xxx.)
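For anyone curious what "expert selection" means concretely, here is a minimal toy sketch of top-k routing in a Mixture-of-Experts layer. Everything here (the `moe_forward` function, the toy experts, the router weights) is a hypothetical illustration of the general technique, not Gemini's actual architecture: a router scores every expert per token, and only the k best-scoring experts actually run, so total capacity grows without every expert touching every token.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token, router_weights, experts, k=2):
    """Route one token vector through only the top-k scoring experts."""
    # Router logits: one score per expert (dot product with that expert's
    # router weight vector).
    logits = [sum(t * w for t, w in zip(token, wv)) for wv in router_weights]
    # "Expert selection": keep only the k best-scoring experts.
    top = sorted(range(len(experts)), key=lambda i: logits[i], reverse=True)[:k]
    gates = softmax([logits[i] for i in top])
    # Output is the gate-weighted mix of the selected experts' outputs;
    # the other experts are never evaluated.
    out = [0.0] * len(token)
    for g, i in zip(gates, top):
        y = experts[i](token)
        out = [o + g * yi for o, yi in zip(out, y)]
    return out, top

# Three trivial stand-in "experts": scale, negate, shift.
experts = [lambda v: [2 * x for x in v],
           lambda v: [-x for x in v],
           lambda v: [x + 1 for x in v]]
router_weights = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]

out, chosen = moe_forward([1.0, 0.2], router_weights, experts, k=2)
```

With these toy weights the router picks experts 0 and 2 for this token; a real MoE layer does the same gating per token over learned experts, which is why inference cost stays closer to a dense model of one expert's size.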
There's one problem: you can't use it freely.
[Must Watch]:
Sora: OpenAI's NEW Text-To-Video AI Model! Absolutely INSANE!: ruclips.net/video/nEuEMwU45Hs/видео.htmlsi=IwWkWMr6maOwFKvl
AgentKit: Create Production-Grade Software Apps with AI toolkits!: ruclips.net/video/ebLM8B-DdI0/видео.htmlsi=FZJ-g6c7yL2qPhus
Taipy: Create Production Ready Apps with AI Within Minutes! (Opensource): ruclips.net/video/hykTISZXBoQ/видео.htmlsi=R6r43K2z0c2dzvJz
Can you integrate Google Gemini in Open Interpreter?