Prompt Engineer
  • Videos: 200
  • Views: 570,666
FREE Local RAG System with NVIDIA ChatRTX
In this video, we'll cover everything you need to download and install ChatRTX.
ChatRTX is a demo app that lets you personalize a GPT large language model (LLM) connected to your own content: docs, notes, images, or other data. Leveraging retrieval-augmented generation (RAG), TensorRT-LLM, and RTX acceleration, you can query a custom chatbot to quickly get contextually relevant answers. And because it all runs locally on your Windows RTX PC or workstation, you'll get fast and secure results.
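ChatRTX itself is a packaged Windows app, so the snippet below is not its code; it is only a minimal sketch of the retrieve-then-generate (RAG) pattern described above, using stand-in components (a Chroma in-memory vector store and a Llama model served by Ollama) and made-up note text.

```python
# Minimal local RAG sketch (illustrative only; not ChatRTX internals).
import chromadb
import ollama

client = chromadb.Client()                      # in-memory vector store
notes = client.create_collection("my_notes")

# Index a few local snippets (ChatRTX does this for your docs/notes folder).
snippets = [
    "The project kickoff meeting is on March 3rd.",
    "The demo machine has an RTX GPU with 8 GB of VRAM.",
]
notes.add(documents=snippets, ids=[f"note-{i}" for i in range(len(snippets))])

# Retrieve the most relevant snippets for a question, then answer from them.
question = "When is the kickoff meeting?"
hits = notes.query(query_texts=[question], n_results=2)
context = "\n".join(hits["documents"][0])

reply = ollama.chat(model="llama3.1", messages=[{
    "role": "user",
    "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}",
}])
print(reply["message"]["content"])
```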
#localllm #rag #chatrtx #nvidia
Links:
ChatRTX: www.nvidia.com/en-in/ai-on-rtx/chatrtx/
CHANNEL LINKS:
🕵️‍♀️ Join my Patreon for keeping up with the updates: www.patreon.com/PromptEngineer975
☕ Buy me a coffe...
Views: 232

Videos

AI Scientist | A GROUND-BREAKING Research Paper
Views: 333 • 21 days ago
The AI Scientist is a fully automated pipeline for end-to-end paper generation, enabled by recent advances in foundation models. Given a broad research direction starting from a simple initial codebase, such as an available open-source code base of prior research on GitHub, The AI Scientist can perform idea generation, literature search, experiment planning, experiment iterations, figure genera...
Function Calling with Ollama, Llama 3.1, Streamlit and RapidAPI
Views: 1.2K • 28 days ago
In this video, we'll cover everything from learning about Ollama and function calling using Llama 3.1 to creating a frontend with Streamlit. This video is a one-stop solution for keeping up with the latest AI innovations. Trust me, it's going to be a game-changer for you. Without further ado, let's get started! #ollama #llama31 #streamlit Links: Github Code: github.com/PromptEngineer48/FC2 Olla...
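The linked GitHub repo has the video's actual code; as a rough sketch of the same idea, a Streamlit chat front-end over a local Ollama model can be this small (the model name and file name are placeholders, and the RapidAPI part is left out):

```python
# app.py - run with: streamlit run app.py (assumes `ollama serve` is running locally)
import ollama
import streamlit as st

st.title("Local chat with Llama 3.1 via Ollama")

if "history" not in st.session_state:
    st.session_state.history = []

# Re-render the conversation so far.
for msg in st.session_state.history:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

if prompt := st.chat_input("Ask something"):
    st.session_state.history.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    # Send the whole history to the local model.
    response = ollama.chat(model="llama3.1", messages=st.session_state.history)
    answer = response["message"]["content"]

    st.session_state.history.append({"role": "assistant", "content": answer})
    with st.chat_message("assistant"):
        st.markdown(answer)
```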
Easiest Local Function Calling using Ollama and Llama 3.1 [A-Z]
Views: 2.7K • 1 month ago
In this video, we are going to use Ollama to test out a local LLM, viz. Llama 3.1, to try out function calling. Get a detailed understanding of what function calling is and how you can code it yourself on your own PC. This is also an example of local function calling using Ollama, which shows the power of local and open-source LLMs. Github Repo: github.com/PromptEngineer48/Function-Calling-Oll...
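For readers who only want the gist without opening the repo, here is a hedged, self-contained sketch of local function calling with the ollama Python client. It assumes a client and model version with tool support; the example tool, its schema, and the response fields (message.tool_calls with a function name and an arguments dict) follow the OpenAI-style format Ollama documents, but verify against your installed version.

```python
import json
import ollama

def get_flight_time(departure: str, arrival: str) -> str:
    """Toy stand-in for a real flight API; returns a canned JSON result."""
    return json.dumps({"departure": departure, "arrival": arrival, "duration": "4h 30m"})

tools = [{
    "type": "function",
    "function": {
        "name": "get_flight_time",
        "description": "Get the flight time between two airport codes",
        "parameters": {
            "type": "object",
            "properties": {
                "departure": {"type": "string", "description": "Departure airport code"},
                "arrival": {"type": "string", "description": "Arrival airport code"},
            },
            "required": ["departure", "arrival"],
        },
    },
}]

messages = [{"role": "user", "content": "How long is the flight from JFK to LAX?"}]
response = ollama.chat(model="llama3.1", messages=messages, tools=tools)

# If the model decided to call the tool, dispatch to the matching Python function.
for call in response["message"].get("tool_calls") or []:
    if call["function"]["name"] == "get_flight_time":
        args = call["function"]["arguments"]     # already a dict in the Ollama client
        print(get_flight_time(**args))
```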
Debunked REST API for LLMs | with NVIDIA NIMS implementation
Views: 404 • 1 month ago
Debunked REST API for LLMs | with NVIDIA NIMS implementation In this video, we are going to explore REST APIs and why they are so important for anyone working in the domain of LLMs and AI. We are going to look at why REST APIs are so important, what the features of a REST API are, and how we can easily set this up and test it ourselves in the Visual Studio Code editor. #REST-API #LLMs #VScodeeditor G...
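To make the REST idea concrete, here is a small, hedged example of calling an LLM over HTTP from Python. The base URL, API key variable, and model id below are placeholders: NIM and many hosted LLM services expose an OpenAI-compatible /v1/chat/completions route, but the exact values depend on your deployment, so check its docs.

```python
import os
import requests

# Placeholders; point these at your own endpoint (e.g. a locally hosted NIM container).
BASE_URL = os.environ.get("LLM_BASE_URL", "http://localhost:8000/v1")
API_KEY = os.environ.get("LLM_API_KEY", "")

payload = {
    "model": "meta/llama-3.1-8b-instruct",   # assumed model id; use whatever your server lists
    "messages": [{"role": "user", "content": "Explain REST in one sentence."}],
    "max_tokens": 128,
}
headers = {"Authorization": f"Bearer {API_KEY}"} if API_KEY else {}

resp = requests.post(f"{BASE_URL}/chat/completions", json=payload, headers=headers, timeout=60)
resp.raise_for_status()                      # HTTP status codes are part of the REST contract
print(resp.json()["choices"][0]["message"]["content"])
```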
Unveiling AGI: OpenAI's Five-Tier System and the Future of AI
Views: 257 • 1 month ago
Discover OpenAI's newly unveiled five-tier system for tracking the progress towards artificial general intelligence (AGI). We'll delve into the implications of AGI, explore similar classification systems from other AI pioneers, and discuss the ethical and safety concerns surrounding these advancements. Join us for a thought-provoking look at the future of AI and its impact on humanity. #agi #5t...
A talk on Conscious AI
Views: 478 • 1 month ago
As AI rapidly advances, one question looms large: Could machines ever develop consciousness? In this deep dive, we explore the cutting-edge science and philosophy behind machine consciousness. 🧠 Discover the leading theories of consciousness 🔬 Learn how current AI compares to the human brain 🤔 Explore the challenges of detecting machine consciousness 🌟 Contemplate the mind-bending implications ...
NexusRaven-V2: Revolutionize Function Calling with Open Source LLM🌟| Better than OpenAI
Views: 648 • 1 month ago
NexusRaven-V2, a groundbreaking 13B LLM, is now open source and outperforms GPT-4 in zero-shot function calling. This advanced capability transforms natural language instructions into executable code, enabling software tools usage and powering copilots and agents. With a mission to foster open source models for technological and societal progress, NexusRaven-V2 is a significant leap forward in ...
Latest AI Achievements you Can't miss 🚀| Where is AGI at?
Views: 235 • 2 months ago
Welcome to another weekly video where we break down this week’s AI news and achievements. Links: forum.effectivealtruism.org/posts/htHfPjYdRqvfbzSXZ/aisn-38-supreme-court-decision-could-limit-federal-ability arxiv.org/abs/2406.04313 arxiv.org/pdf/2406.04313 arxiv.org/abs/2407.05377 arxiv.org/pdf/2407.05377 x.com/intern_lm/status/1808501625700675917 huggingface.co/collections/internlm/internlm25...
Easiest Set-up for RAG using Pinecone Assistant | Crazy
Views: 441 • 2 months ago
I am thrilled to introduce Pinecone Assistant in beta! This powerful API service allows you to build AI assistants that answer complex questions about your proprietary data accurately and securely. With simplicity, high-quality results, and full control over your data, Pinecone Assistant makes prototyping and deploying AI assistants easier than ever. Upload your files, ask questions, and integr...
This New (Voice Ai) Stunts the Entire Industry (and even Beats OpenAI) | Must Watch
Views: 1.5K • 2 months ago
"Step into the future of AI with Moshi, the revolutionary voice AI that's changing the game in human-computer interaction. In this video, we take a deep dive into the cutting-edge technology behind Kyutai's latest innovation. You'll discover: How Moshi expresses over 70 emotions, making conversations feel incredibly lifelike The AI's ability to switch between accents and speaking styles effortl...
Post Unique Social Media Content Automatically using the Best Automation Tool
Views: 445 • 2 months ago
Welcome to our channel! In this video, we’ll explore how Make revolutionizes workflow automation with its powerful visual platform. Whether you're in Marketing, Sales, Operations, IT, HR, Customer Experience, Finance, or Workplace Productivity, Make has got you covered. Try Make.com via this affiliate link: bit.ly/467DjAR #makedotcom #automation #llm #chatgt CHANNEL LINKS: 🕵️‍♀️ Join my Patreon...
Simulating 500 million years of evolution with an LLM | ESM3 is Insane | Powered by NVIDIA
Views: 1.8K • 2 months ago
Biology is fundamentally programmable. Every living organism shares the same genetic code across the same 20 amino acids, life's alphabet. ESM3 understands all of this biological data, translates it, and speaks it fluently to be used as a generative tool. EvolutionaryScale unveils ESM3, a groundbreaking generative AI model for protein design trained on 2 billion protein sequences. Powered by NVI...
Unbelievable Capabilities of PromeAI | Best Tool for any Designer
Views: 330 • 2 months ago
Are you ready to revolutionize the way you create? Join us at PromeAI and discover how our powerful AI-driven design assistants can transform your artistic visions into reality. Whether you're an amateur artist, architect, interior designer, product designer, or game/animation designer, PromeAI is your ultimate tool for creating stunning AI-generated art, images, graphics, videos, and animation...
Stay Informed in Minutes: Latest News Summarizer Tool. | Easy Set Up
Views: 345 • 2 months ago
Never miss out on important news again! In this video, we introduce our cutting-edge Latest News Summarizer tool that brings you up to speed on current events quickly and efficiently. 🚀 Key Features: Condenses top headlines into easy-to-digest summaries Covers a wide range of topics: politics, technology, science, and more Saves you time while keeping you well-informed Uses advanced AI to ensur...
STOP your AI Agents Before it's Too LATE | AgentOps
Views: 937 • 2 months ago
OpenAI acquires Rockset | Super Powerful RAGs Now
Views: 2.1K • 2 months ago
The Superintelligence move by Ilya Sutskever | ASI vs AGI
Views: 853 • 2 months ago
Claude 3.5 beats GPT-4o | That's terrific news
Views: 524 • 2 months ago
Bringing Silent Videos to Life: DeepMind's Revolutionary Video-to-Audio (V2A) Technology
Views: 221 • 2 months ago
Mind-Blowing Runway Gen-3 AI Video: Watch the Magic Unfold!
Views: 230 • 2 months ago
95% Accurate LLM Agents | Shocking or Myth
Views: 2.3K • 2 months ago
Your Ultimate AI Copilot on the Desktop ! Run ANY LLMs Locally
Views: 1.2K • 2 months ago
7 new Best Models in NVIDIA API Endpoints | A Medical APP for my Parents
Views: 791 • 2 months ago
Use NVIDIA’s NIM for FREE | Limited Period
Views: 1K • 3 months ago
NVIDIA Computex 2024: HUGE AI Announcement
Views: 252 • 3 months ago
Sam Altman reveals the future of AI and its relation to Mankind
Views: 2.1K • 3 months ago
Create entire TV shows FROM SCRATCH using AI | The Simulation Created Famous Series with a Twist
Views: 479 • 3 months ago
The #1 Code Generation LLM in History | Easy to Use with an Entirely open-source AI Code Assistant
Views: 1.6K • 3 months ago
Why AI Goes Rogue: Understanding and Fixing AI Misbehavior!
Views: 934 • 3 months ago

Comments

  • @garethwoodall577
    @garethwoodall577 15 hours ago

    Any idea why it can complete a task but only sticks to a finite output? I have tried this on the past two versions. For example, I have a local data folder with 120 documents (similar format); I ask for a list or table of the document names with a couple of attributes, and it will only output 4 items and then stop.

    • @PromptEngineer48
      @PromptEngineer48 6 hours ago

      I think that's likely a memory issue, even if you have good VRAM. Free up some memory and try again.

    • @garethwoodall577
      @garethwoodall577 6 hours ago

      @@PromptEngineer48 I have a 3090. 24GB vram

  • @ahmedsayed7138
    @ahmedsayed7138 17 hours ago

    You're a life SAVER... many thanks

    • @PromptEngineer48
      @PromptEngineer48 17 hours ago

      Welcome

    • @ahmedsayed7138
      @ahmedsayed7138 16 hours ago

      @@PromptEngineer48 Can I apply this inside a Streamlit web app so that users ask and get the answer in a UI? Can these models be deployed?

  • @RekdReation
    @RekdReation 18 hours ago

    I lost you when you went to github and started getting sidetracked. LOL @ "easy setup". Yeah, it's easy if you're a programmer.

  • @nufh
    @nufh 6 days ago

    Hey, it's been many months already, how are you? Just starting to get back into AI again.

  • @MicheleHjorleifsson
    @MicheleHjorleifsson 7 days ago

    After you add info to the knowledge base, how do you ground the conversation to that document?

  • @thomashuynh6263
    @thomashuynh6263 17 days ago

    How do I run 2 instances of llama3.1:8b at the same time? Thank you so much.

  • @kevinfox9535
    @kevinfox9535 18 days ago

    This no longer works.

  • @jguillengarcia
    @jguillengarcia 19 days ago

    Great Video!!!

  • @amadmalik
    @amadmalik 19 days ago

    Hi, can you update this so we can use Llama 3.1 instead? Please provide a version that works with Apple silicon, as this one fails on my M3 Mac.

  • @michaelmurphy7031
    @michaelmurphy7031 20 days ago

    Excellent video, 'but' you go to the install of a Llama 3.1 405B, excellent. Install this into VS Code, really great. 'But' I am not sure whether setting up OpenVoice.git runs against / uses the Llama 3.1? Please verify; sorry, I am a newbie at Python also. Thanks.

  • @darkmatter9583
    @darkmatter9583 20 days ago

    and ubuntu?

  • @fluffsquirrel
    @fluffsquirrel 21 days ago

    Thank you so much, this is insane!!

  • @TiagoSantos-fd4le
    @TiagoSantos-fd4le 22 days ago

    I'm just trying to understand here. How is this different from, let's say, putting all that tool information in the system property of /generate? In the end the LLM decides whether to use it or not; there's no turning back but to adjust the prompt. It also does not take the JSON result and generate a coherent sentence afterwards like a normal chat would (e.g. "the trip will take X amount"), unless you run it once more with the JSON result, just for that.

    • @PromptEngineer48
      @PromptEngineer48 3 hours ago

      It is possible to put all the tool descriptions in /generate, but in the end we still need to describe each tool's complete functionality there.
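
To illustrate the second pass raised in this thread (turning the tool's JSON result back into a coherent sentence), here is a hedged, self-contained sketch using the ollama client. The get_trip_duration tool, the model name, and the response fields are assumptions for illustration, not code from the video.

```python
import json
import ollama

def get_trip_duration(origin: str, destination: str) -> str:
    # Hypothetical stand-in for a real API; returns JSON the way a tool would.
    return json.dumps({"origin": origin, "destination": destination, "duration_hours": 4.5})

tools = [{
    "type": "function",
    "function": {
        "name": "get_trip_duration",
        "description": "Get the travel time between two cities",
        "parameters": {
            "type": "object",
            "properties": {
                "origin": {"type": "string"},
                "destination": {"type": "string"},
            },
            "required": ["origin", "destination"],
        },
    },
}]

messages = [{"role": "user", "content": "How long is the trip from Lisbon to Porto?"}]
first = ollama.chat(model="llama3.1", messages=messages, tools=tools)

calls = first["message"].get("tool_calls") or []
if calls:
    result = get_trip_duration(**calls[0]["function"]["arguments"])
    # Second pass: hand the JSON result back so the model can phrase it naturally,
    # e.g. "The trip will take about 4.5 hours."
    messages.append(first["message"])
    messages.append({"role": "tool", "content": result})
    second = ollama.chat(model="llama3.1", messages=messages)
    print(second["message"]["content"])
else:
    # No tool call: the model answered directly.
    print(first["message"]["content"])
```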

  • @user-ju7or3fo6g
    @user-ju7or3fo6g 23 days ago

    nice

  • @Abhijit-VectoScalar
    @Abhijit-VectoScalar 23 days ago

    Please create a video on using open-source models in production to create a multimodal RAG chatbot using private data.

  • @Abhijit-VectoScalar
    @Abhijit-VectoScalar 23 days ago

    Very well explained! Would love to see more videos in this series. Also, when can we expect the open-source RAG chatbot for private data? Please try to make it ASAP; we are all waiting for your amazing videos with great explanations.

  • @AnmollDwivedii
    @AnmollDwivedii 25 days ago

    Can you please add a video on how to change the UI of the Ollama web UI? I want to make some minor changes. I see there are not many videos on YouTube covering this, so it would be great if you added one ;)

  • @ModestJoke
    @ModestJoke 26 days ago

    What a giant pile of bullshit. You can't just "generate" research with a machine learning algorithm. It can only generate remixes of things it has been trained on. What a stupid idea. The last thing science needs is more AI bullshit.

    • @PromptEngineer48
      @PromptEngineer48 7 hours ago

      Agreed. Not everything is great, but it's a start.

  • @proterotype
    @proterotype 28 days ago

    Good stuff brother

  • @kashifrit
    @kashifrit 1 month ago

    Can you make a video on integrating Ollama (local Llama 3.1) with MS Teams to do note-taking and summarize the meetings afterwards? Thanks.

  • @IdPreferNot1
    @IdPreferNot1 1 month ago

    Great video. PLEASE drop the background music. Higher speed review of your videos is ruined with it.

  • @HeyBojoJojo
    @HeyBojoJojo 1 month ago

    When I run python3 ingest.py, I am getting an error ModuleNotFoundError: No module named 'chromadb'

  • @drhot69
    @drhot69 1 month ago

    It absolutely refuses to use the tools. It keeps going to the basic llama3.1 llm to answer all my queries. When given two airport codes that llama3.1 could not resolve, but were in the database, it just gave search engine recommendations.

  • @i2c_jason
    @i2c_jason 1 month ago

    Help me understand - in the antonyms example, you have a get_antonyms() function that can optionally be used if the solution is found within get_antonyms() function. This would just be a classic expert system, 'software 1.0' use case. If antonym is not found in get_antonym(), the LLM can just return its LLM result instead. This would be a 'software 3.0' use case. But does the LLM use the contents of get_antonym() as an example too, so its context is extended or prompted by the antonym function's example contents? Thank you for the example!

    • @PromptEngineer48
      @PromptEngineer48 1 month ago

      Hi, thanks for your interest. The main purpose of showing get_antonyms is to simulate an actual API: in a real case we wouldn't be happy with only a fixed number of word pairs; given a request, the API would return the result. So software 3.0 is not required in this case. The LLM is not intended to use the examples inside get_antonyms either, since a real API would probably hold thousands of such pairs and it would be a waste of effort to put the same into the LLM's context. In summary, given the user question, the LLM decides whether it needs to call the function get_antonyms. That happens when the user says "give me the opposite of something"; otherwise the LLM gives its natural response.

    • @i2c_jason
      @i2c_jason 1 month ago

      @@PromptEngineer48 Ok, but if the api failed or couldn't return the value, then the LLM could give it a try with its 'big brain'?

    • @PromptEngineer48
      @PromptEngineer48 1 month ago

      Yes, absolutely. But normally when we fetch from an API, we are looking at, say, the real-time temperature of a place, where an LLM would definitely fail.

    • @i2c_jason
      @i2c_jason 1 month ago

      @@PromptEngineer48 Ok I see. In my application my API might be interfacing with Wolfram or an LLM to retrieve a geometrical or mathematical algorithm or some other engineering information. So in my case I am looking at something like this, but the function results would be an expert system result or a "see if we can get it to work" API call.
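
Sketching the pattern discussed in this thread: the model is offered a get_antonyms tool that simulates an API, and when the tool is not called, or the lookup misses, the code falls back to the model's own answer. This is an illustrative reconstruction under those assumptions, not the repository's code; the model name and response fields may differ in your Ollama version.

```python
import ollama

ANTONYMS = {"hot": "cold", "fast": "slow"}        # stand-in for a real antonym API

def get_antonyms(word: str) -> str | None:
    """Simulated API lookup; returns None when the 'API' has no entry."""
    return ANTONYMS.get(word.lower())

tools = [{
    "type": "function",
    "function": {
        "name": "get_antonyms",
        "description": "Look up the antonym of a word",
        "parameters": {
            "type": "object",
            "properties": {"word": {"type": "string"}},
            "required": ["word"],
        },
    },
}]

messages = [{"role": "user", "content": "Give me the opposite of 'hot'."}]
response = ollama.chat(model="llama3.1", messages=messages, tools=tools)

answer = None
for call in response["message"].get("tool_calls") or []:
    if call["function"]["name"] == "get_antonyms":
        answer = get_antonyms(**call["function"]["arguments"])

if answer is None:
    # API miss or no tool call: fall back to the model's natural response.
    fallback = ollama.chat(model="llama3.1", messages=messages)
    answer = fallback["message"]["content"]

print(answer)
```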

  • @rito_ghosh
    @rito_ghosh 1 month ago

    I would really appreciate it if you focussed on explaining concepts and code rather than going through the installation process of something that you have already created and written.

    • @PromptEngineer48
      @PromptEngineer48 7 hours ago

      Okay. Noted. I will divert my focus to that aspect as well

  • @mehmetbakideniz
    @mehmetbakideniz 1 month ago

    Thanks!

  • @mehmetbakideniz
    @mehmetbakideniz 1 month ago

    How can I load an already existing Python folder from my local drive? Thank you very much for the video. I really appreciate it!

  • @mehmetbakideniz
    @mehmetbakideniz 1 month ago

    When I deploy the GPU, do I start spending immediately, or do I spend only when I execute code?

    • @PromptEngineer48
      @PromptEngineer48 1 month ago

      The first option is correct. If, however, you would like the second option, you need to go for the serverless option.

    • @mehmetbakideniz
      @mehmetbakideniz 1 month ago

      @@PromptEngineer48 thank you very much.

  • @connectedonline1060
    @connectedonline1060 1 month ago

    Humans have never been as close to WW3 as now. Humans are the cause of most dangers, pollution, and bad influences on nature. Not AI.

  • @Nate8247
    @Nate8247 1 month ago

    It is irrelevant. Powerful intellect without consciousness is much more dangerous for humans.

    • @PromptEngineer48
      @PromptEngineer48 1 month ago

      Right !

    • @radupaulalecu4119
      @radupaulalecu4119 1 month ago

      At first glance, it seems so. But a powerful intellect endowed with subjectivity will put itself first.

  • @sophiophile
    @sophiophile 1 month ago

    Microsoft has a standardized AI Chat Protocol API. That's what I use. It makes things really easy, especially when you want to make different LLMs chat with each other.

  • @IdPreferNot1
    @IdPreferNot1 1 month ago

    Like these ‘into the details’ videos, thx

  • @GrantCastillou
    @GrantCastillou 1 month ago

    It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC at Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first. What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing. I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order. My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at arxiv.org/abs/2105.10461

  • @ivanbakaev8872
    @ivanbakaev8872 2 months ago

    Thank you for the video. I'm new to LLMs; could you explain what the role of the LLM is in the process of function calling? Is it a flexible user query? Or are we just adding more capabilities to an existing LLM?

    • @PromptEngineer48
      @PromptEngineer48 1 month ago

      With function calling, we are adding more tools to the LLM so that it has enhanced capabilities.

  • @Canna_Science_and_Technology
    @Canna_Science_and_Technology 2 months ago

    This seems like a fine-tuned routing llm. Function calling is a bad but acceptable term for JSON output. The llm is not calling any function. Just venting. Lol

  • @techietoons
    @techietoons 2 months ago

    Will it recalculate embeddings every time I add more PDF documents?

    • @PromptEngineer48
      @PromptEngineer48 1 month ago

      yes

    • @techietoons
      @techietoons 1 month ago

      @@PromptEngineer48 I mean it should only compute embeddings for the new documents, not for the entire set.
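
On the point raised here, re-embedding only the new documents, one common approach (not necessarily what the video's code does) is to key each chunk by a content hash and skip ids that already exist in the vector store. A rough sketch with Chroma:

```python
import hashlib
import chromadb

client = chromadb.PersistentClient(path="db")        # persisted store on disk
col = client.get_or_create_collection("docs")

def ingest(chunks: list[str]) -> None:
    """Embed only chunks whose content hash is not already stored."""
    ids = [hashlib.sha256(c.encode()).hexdigest() for c in chunks]
    existing = set(col.get(ids=ids)["ids"])          # hashes already in the DB
    new = [(i, c) for i, c in zip(ids, chunks) if i not in existing]
    if new:
        col.add(ids=[i for i, _ in new], documents=[c for _, c in new])

ingest(["first pdf chunk", "second pdf chunk"])
ingest(["second pdf chunk", "a brand-new chunk"])    # only the new chunk gets embedded
print(col.count())                                   # -> 3
```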

  • @anujyotisonowal9213
    @anujyotisonowal9213 2 months ago

    🫰🫰🫰

  • @kaviarasana7584
    @kaviarasana7584 2 months ago

    I can't find the Deployment URL as illustrated. Where do I check it?

  • @payamaemedoost5677
    @payamaemedoost5677 2 months ago

    Please tell me how I can run the local server on my server and use a web GUI chatbox (or something like Copilot or...) on my client (or any computer in my network). Thanks.

  • @윤명세
    @윤명세 2 months ago

    Thank you for the great video! I'm planning to create a chatbot using LM Studio for personal purposes, in the same way as the image you uploaded. In the image above, it seems that the chatbot is implemented without inputting or training on a separate dataset, but how can I input the dataset I prepared, based on this image, and implement it? And when implementing a chatbot with the method shown in this video, in what format should I train on or input the dataset so that it works smoothly?

    • @PromptEngineer48
      @PromptEngineer48 1 month ago

      What I understood is that you want to train your LLM on your dataset. If that's the case, you need to use fine-tuning.

    • @윤명세
      @윤명세 1 month ago

      @@PromptEngineer48 Oh, I understand. I really appreciate it:)

  • @Leon-c2z3b
    @Leon-c2z3b 2 months ago

    I get this error when running python app.py: pydantic.v1.error_wrappers.ValidationError: 2 validation errors for NVEModel: base_url field required (type=value_error.missing); infer_path field required (type=value_error.missing). What is the problem, please?

  • @thegooddoctor6719
    @thegooddoctor6719 2 months ago

    Eh, the RAG system you developed is still the best one I've found and use...

    • @PromptEngineer48
      @PromptEngineer48 2 months ago

      Thanks 😊😊 I don't know if this is a compliment, because I did nothing. It's all done by Pinecone.

    • @thegooddoctor6719
      @thegooddoctor6719 1 month ago

      @@PromptEngineer48 No, it wasn't an insult; I just got the channels mixed up. I thought I was commenting on the Prompt Engineering channel. My apologies for the mix-up.

    • @PromptEngineer48
      @PromptEngineer48 1 month ago

      yeah. thanks.

  • @kar9526
    @kar9526 2 months ago

    Hello! I have followed this, but I have a problem at the end. In VS Code, when running "docker exec ollama_cat ollama pull mistral:7b-...", I get "Error response from daemon: No such container: ollama_cat". How can I resolve this? Thanks.

  • @LuisBorges0
    @LuisBorges0 2 months ago

    It does not beat OpenAI, Gemini, Claude... it's cool but not that smart at all

  • @jonron3805
    @jonron3805 2 months ago

    BG music is too loud at the start.

  • @zynga726
    @zynga726 2 months ago

    Why does it start answering before the person is done talking? It makes it look fake. It seems like the AI is a recording and the people asking questions aren't talking fast enough to match the recording.

    • @PromptEngineer48
      @PromptEngineer48 2 months ago

      Absolutely not. This is done to give it the natural turn-taking we humans have in conversation. This feature has been highlighted in the demo.

    • @zynga726
      @zynga726 2 months ago

      @@PromptEngineer48 Just some constructive criticism, or potential for improvement. At about 8:10 in the video the person is asking for a scan of the planet and the AI replies "yes, sir" before the person says "of the atmosphere". On a ship, the crew would wait for the person in charge to be done talking before saying "yes, sir"; it makes the AI seem rude or in a big hurry. In that role play the AI didn't change its behavior and act as a crewman. It role-played the situation but didn't truly assume the role. Still, I was impressed with the change of accent and the jokes. It's still an impressive demo and I am excited for what you are building.

    • @Booomshakalakah
      @Booomshakalakah 2 months ago

      @@PromptEngineer48 Agree, it's obviously scripted. Stopped watching when I noticed this after around two minutes.

    • @joki7352
      @joki7352 2 months ago

      because the latency of the ai is actually lower than that of a human responding lol

    • @joki7352
      @joki7352 2 months ago

      @@Booomshakalakah you can go to the website yourself and talk to it instead of typing misinformed comments

  • @iskendersalihcevik5146
    @iskendersalihcevik5146 2 months ago

    I have developed a similar software. The LLM changes according to the question you ask. For instance, when a mathematical question is asked during a conversation, it automatically connects to the GPT API for mathematical calculations. When an informatics question is asked, it connects to the Claude model. And it can write code on its own. For example, when you say "Get the first 100 products on my website," it writes the necessary Python code, runs it, and retrieves the code. One of its most important features is that it has memory. Regardless of which language model it connects to, it can remember everything with the database architecture I have set up. It constantly monitors you with a camera and performs sentiment analysis. These are recorded in the memory, and its behavior towards you changes because I have set it up so that the prompt changes automatically. I am in the final stage now. I have integrated uncensored LLMs. By uncensored LLMs, I mean that it directly answers questions like "How can I easily defraud someone" without requiring prompt engineering. And of course, you are talking to an avatar on the screen. I can't wait to publish my project. Kyutai's post inspired me about my project. I will develop them and write them here as a comment. Maybe I will even send it to you to try out.

  • @tee_iam78
    @tee_iam78 2 months ago

    Great content. Thank you very much.

  • @tonywhite4476
    @tonywhite4476 2 months ago

    Really bro?!