Maestro + Qwen2 + DeepSeek-Coder-V2 : Generate APPLICATIONS with ONE PROMPT (FREE, LOCAL & FAST)

  • Published: 4 Aug 2024
  • In this video, I'll show you how to generate applications with local AI model agents. This way you can do text-to-application for free and locally. I'll be using Maestro combined with the Qwen2 and DeepSeek Coder models via Ollama; this setup can outperform Claude-3.5 Sonnet while being free. We'll build agents and sub-agents with these models, generate games, websites, applications and much more, and figure out which model configuration works best for local usage. You can also do text-to-frontend, text-to-application and other things with this. The Maestro tool can also be used with any open-source LLM, OpenAI models, or other models such as GPT-4o, Claude-3, CodeQwen, Mixtral 8x22B, Mixtral 8x7B, GPT-4, Grok-1.5 & Gemini Code Assist.
    -------
    Resources:
    Maestro Github Repo : github.com/Doriandarko/maestro
    ------
    Key Takeaways:
    📌 Revolutionary AI Creation: Discover how Maestro with Claude-3.5 Sonnet allows you to create apps like Desktop Apps, Web Apps, and Games with a single text prompt, using innovative AI Agents technology.
    💡 Agent-Based Framework: Learn about the powerful agent-based framework where the Orchestrator LLM breaks down tasks, Sub-Agent LLMs handle execution, and the Refiner LLM ensures quality, providing seamless AI development.
    💰 Cost-Effective Local LLMs: Explore the benefits of using local LLMs like DeepSeek Coder and Qwen, reducing API costs while maintaining high performance with the Maestro framework's flexibility.
    ⚙️ Step-by-Step Configuration: Follow our detailed guide to configure and install Ollama models, DeepSeek Coder V2, and Qwen2 72B, ensuring you set up your AI environment correctly and efficiently.
    🎮 Creating a Snake Game: Watch as we demonstrate creating a Snake game using HTML, CSS, and JS, comparing results between Claude and local models, showcasing the potential of different AI configurations.
    🌟 High Customizability: Understand the extensive customizability of the Maestro framework, allowing you to mix and match models from Anthropic, Gemini, and OpenAI to tailor your AI projects to specific needs.
    👍 Viewer Engagement: Engage with our content by sharing your thoughts in the comments, liking the video, subscribing to the channel, and using the Super Thanks option to support further innovative AI content creation.
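The Orchestrator / Sub-Agent / Refiner flow described in the takeaways can be sketched in a few lines of Python. This is a toy illustration, not Maestro's actual code: `call_llm` is a stub so the sketch runs offline, and the model names are simply the ones used in the video. With a local Ollama server, the stub would be replaced by a real call such as `ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])`.

```python
def call_llm(model: str, prompt: str) -> str:
    # Stub standing in for a real model call; returns a tagged echo of the prompt.
    return f"[{model}] {prompt.splitlines()[0]}"

def run_objective(objective: str,
                  orchestrator: str = "qwen2:72b",
                  coder: str = "deepseek-coder-v2",
                  refiner: str = "qwen2:72b") -> str:
    # 1. The orchestrator model breaks the objective into sub-tasks
    #    (here simplified to a fixed three-way split).
    plan = call_llm(orchestrator, f"Break this objective into sub-tasks: {objective}")
    sub_tasks = [f"{plan} :: sub-task {i}" for i in (1, 2, 3)]

    # 2. A sub-agent (the coder model) executes each sub-task.
    results = [call_llm(coder, task) for task in sub_tasks]

    # 3. The refiner model merges the partial results into the final output.
    return call_llm(refiner, "Combine these results:\n" + "\n".join(results))

final = run_objective("a Snake game in HTML, CSS and JS")
print(final)
```

Swapping the orchestrator model (as the video does, replacing DeepSeek-Coder-V2 with Qwen2 for planning) is just a matter of changing the `orchestrator` argument.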
    --------
    Timestamps:
    00:00 - Introduction
    00:12 - About Maestro (Text-to-Application with Agents)
    00:30 - About Agents
    01:04 - Using it with Local LLMs
    01:40 - Installation
    03:40 - Using just the DeepSeek-Coder-V2 model leads to FAILURE
    04:33 - Replacing DeepSeek-Coder-V2 with Qwen2 as Orchestrator
    05:36 - Insane Results Outperforming Claude-3.5-Sonnet
    07:53 - Conclusion
  • Science

Comments • 65

  • @tomjia3072
    @tomjia3072 A month ago +8

    Thanks!❤

  • @josedelarocha2455
    @josedelarocha2455 A month ago +15

    Man, what is your setup to be able to run something like this?

  • @Laxobigging
    @Laxobigging 23 days ago +1

    most underrated account. no bs. no psycho clickbait thumbnail. appreciate you

  • @Anthzqz
    @Anthzqz A month ago

    Appreciate your work. I was in the process of making my own autonomous framework but the complexity soon spiraled. Maestro is exactly what I needed, thanks!

  • @EladBarness
    @EladBarness A month ago +4

    You are the king of AI code king

  • @xspydazx
    @xspydazx 24 days ago

    OH MY GOSH !-
    I was able to incorporate this into a Chain !
    It works very well ! on all models !
    RESPECT TO THE MAX !!

  • @Beatmakerniko
    @Beatmakerniko A month ago +2

    It's pretty cool!

  • @tobeyforsman
    @tobeyforsman A month ago

    Obviously pretty cool

  • @KS-tj6fc
    @KS-tj6fc A month ago +5

    7:40 Can you show us the total input/output tokens used for the various LLM configurations (in all videos going forward, like a token counter in the bottom corner of the screen/Visual Studio)? Example: 3.5 Sonnet as orchestrator, (free) DeepSeek Coder V2, Gemini Flash for the final pass.
    And how many tokens were used by Gemini Flash in all roles, and by 3.5 Sonnet in all roles? What's the token savings when the main work is done by a local (free) model, or by the cheaper Gemini Flash? Also, how about the DeepSeek Coder V2 API? Its costs are only $0.14 in / $0.28 out per million tokens. Is Sonnet using 10k tokens and DeepSeek 150k, and thus $0.18 for Sonnet and $0.04 for DeepSeek once combined?
    Thank you for all your help and videos!!!
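    The cost arithmetic this comment asks about is easy to sanity-check. Below is a small sketch: the DeepSeek prices are the ones quoted in the comment, the Claude 3.5 Sonnet prices ($3 in / $15 out per million tokens) are Anthropic's public API pricing at the time, and the token counts in the example run are hypothetical.

```python
# Assumed per-million-token prices (USD); DeepSeek figures from the comment above.
PRICES_PER_MTOK = {
    "claude-3.5-sonnet": {"in": 3.00, "out": 15.00},
    "deepseek-coder-v2": {"in": 0.14, "out": 0.28},
}

def run_cost(model: str, tokens_in: int, tokens_out: int) -> float:
    # Cost in dollars for one run: tokens scaled against per-million prices.
    p = PRICES_PER_MTOK[model]
    return (tokens_in * p["in"] + tokens_out * p["out"]) / 1_000_000

# Hypothetical run: 100k input and 20k output tokens on each model.
sonnet_cost = run_cost("claude-3.5-sonnet", 100_000, 20_000)
deepseek_cost = run_cost("deepseek-coder-v2", 100_000, 20_000)
print(f"Sonnet: ${sonnet_cost:.4f}, DeepSeek: ${deepseek_cost:.4f}")
```

    Even if a local or cheaper model burns many more tokens than Sonnet, the per-token price gap can still make it far cheaper overall, which is the trade-off the comment describes.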

  • @satjeet
    @satjeet A month ago +1

    Muchas gracias crack.

  • @mugenmugen5237
    @mugenmugen5237 A month ago +2

    Can it all be installed in a different directory?

  • @jackflash6377
    @jackflash6377 A month ago +3

    Thanks!

  • @jaradaty88
    @jaradaty88 A month ago

    That is cool

  • @bobgeorgeff
    @bobgeorgeff A month ago +1

    When you test the generated code, what platform(s) are you using to compile/run it?

    • @bobgeorgeff
      @bobgeorgeff A month ago

      I know Claude has a “preview” box, but it looked like you’re using something else maybe?

    • @bobgeorgeff
      @bobgeorgeff A month ago

      Ok, answering my own question... I copied Claude's code, saved it with a .html extension and it opened in a browser. But what if Claude codes in Python?

  • @Techonsapevole
    @Techonsapevole A month ago +2

    Fantastic. What about incremental changes via chat? OpenUI does that.

    • @AICodeKing
      @AICodeKing  A month ago +1

      This doesn't do that, but it can create much more than just frontends, and in any programming language.

    • @jackflash6377
      @jackflash6377 A month ago +1

      You could feed the file you want to change to ChatGPT-4o or Sonnet and have them refine it. Both make good UI generators, with feedback.

    • @leonwinkel6084
      @leonwinkel6084 A month ago +1

      Yeah, I think if this is possible we truly have something very useful. But as long as I can't iterate over it, it's questionable. In theory, with Gemini and its 1M-token context, it should be possible to achieve this quite simply.

  • @silentspy6980
    @silentspy6980 A month ago +2

    Um, could you please tell me how much better Claude 3.5 Sonnet is than DeepSeek Coder V2? I want to make a really good text-to-code AI, but I don't want to pay. I have reverse engineered the DeepSeek Coder V2 website, giving me completely free and unlimited API usage of DeepSeek Coder V2. That's almost impossible for Sonnet, and it only allows a limited number of prompts, so if the difference is 10-15% then I will go with DeepSeek. Edit: one other question: for the orchestrator, can I use an AI that is good at chat but not at code, or does it have to be good at code?

    • @AICodeKing
      @AICodeKing  A month ago +1

      I would say Sonnet is 30% better than DeepSeek

    • @silentspy6980
      @silentspy6980 A month ago

      @@AICodeKing Holy shit, well then I guess I will have to find a way to overcome the limited prompts and make it free and unlimited.

    • @amritbanerjee
      @amritbanerjee 3 days ago

      How did you reverse engineer that? Can you give us some hints? 🤞

  • @PratikBodkhe
    @PratikBodkhe A month ago +1

    I tried this setup with the only difference being Qwen2 7B instead of 72B. Mine hallucinates like crazy; at one point it deviated from JS to Python just to add 2 numbers. Nobody knows how it works.

  • @OldGamr
    @OldGamr A month ago

    Would this work to create a WordPress plugin? Something more advanced. I'd love to see a video on how to do this: something with authentication, admin and user configuration.

  • @dasdassdarrrr
    @dasdassdarrrr A month ago +2

    Can you upload your existing repo into it and generate new stuff with it?

  • @world-78913
    @world-78913 A month ago +3

    Bro, what's on the right side of your Visual Studio Code?

    • @AICodeKing
      @AICodeKing  A month ago +2

      Actually, I use Lightning AI for the videos. There you get multiple options like a terminal, VSCode, a Jupyter interface, and stuff like that.

    • @DavidSeguraIA
      @DavidSeguraIA A month ago +1

      @@AICodeKing Thanks. Is the free tier of Lightning enough, or is the pro tier necessary? By the way, which tier did you use in this tutorial? Qwen2 72B is 42 gigabytes in size, so it would need at least 2 cards with 24 GB of VRAM, right?

  • @nothing7ish
    @nothing7ish 18 days ago

    Thanks for the instructions. I gave it a shot, but Maestro doesn't put the code in the specified folder afterwards. Can you tell me what's wrong?

    • @AICodeKing
      @AICodeKing  18 days ago +1

      It sometimes does this. Try specifically asking it to put files inside folders within the prompt. If that doesn't work, try using Aider; it's much better. I have multiple videos on it.

    • @nothing7ish
      @nothing7ish 18 days ago

      @@AICodeKing I'll give it a shot. Thx!

  • @aamironline
    @aamironline A month ago +3

    Can it read the existing project code and improve it?

  • @jocksizer1123
    @jocksizer1123 A month ago +2

    I have an RTX 4070 Super; I am pretty sure the 72B is impossible to use. Will the 7B be sufficient?

    • @AICodeKing
      @AICodeKing  A month ago +3

      I have tried it with that and it was fine as well. Also, the best part is that you can try multiple LLMs to see what fits you best.

    • @jocksizer1123
      @jocksizer1123 A month ago +1

      @@AICodeKing Thanks for the response! I'll do it then :) Btw, I loved this Maestro much more than the one with Sonnet 3.5; the homepage layout fits much better! Thanks for the constant updates on LLMs.

    • @FawziBreidi
      @FawziBreidi A month ago +2

      I wonder why VRAM is the only solution. For example, I have 128 GB of RAM; can't Ollama use system RAM? I'd appreciate it if someone could explain how these models consume system resources.

    • @JackyZai
      @JackyZai A month ago

      @@FawziBreidi It can load into RAM as well; it will just be slower.
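      A rough rule of thumb behind this exchange: a quantized model needs about (parameters × bits per weight ÷ 8) bytes just for its weights, plus overhead for the KV cache and buffers. The sketch below uses an assumed 20% overhead factor; exact figures depend on the quantization and context length. Ollama offloads as many layers as fit into VRAM and runs the rest from system RAM on the CPU, which works but is much slower.

```python
def approx_size_gb(params_billion: float, bits: int = 4) -> float:
    # Weight bytes: params * bits/8; the 1.2 factor is an assumed ~20%
    # overhead for KV cache and runtime buffers.
    return params_billion * bits / 8 * 1.2

qwen2_7b = approx_size_gb(7)    # small enough for a 12 GB card like the 4070 Super
qwen2_72b = approx_size_gb(72)  # ~43 GB: needs multiple GPUs or spillover to system RAM
print(f"Qwen2 7B: ~{qwen2_7b:.1f} GB, Qwen2 72B: ~{qwen2_72b:.1f} GB at 4-bit")
```

      This matches the ~42 GB figure mentioned above for the 72B download: it fits in 128 GB of system RAM, but not in a single consumer GPU's VRAM.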

  • @subins2917
    @subins2917 A month ago

    Hey, I tried this out, but the files are not getting created inside the folder; only .md files are being created. I tried with both the Groq and Ollama options, and on both Windows and Linux, but it's the same issue. Know a workaround?
    And btw, was the main branch the one used?

    • @AICodeKing
      @AICodeKing  A month ago

      Hmm, it sometimes happens. In the prompt, try adding something like "Limit the sub-agent tasks to 3 and create files". I have seen that sometimes it doesn't create files, but after adding this it mostly gets back to creating them.

    • @subins2917
      @subins2917 A month ago

      @@AICodeKing
      It now creates the files, but fails to write content to them. Hopefully we get a solution soon from the creators. Thanks for the info.

    • @buanadaruokta8766
      @buanadaruokta8766 A month ago

      @@subins2917 Same for me. @AICodeKing

    • @maresirenum
      @maresirenum 23 days ago

      Has anyone been able to solve this problem? I applied the suggested agent limit and instructed it to write the files, but it still didn't work.

  • @AseemChishti
    @AseemChishti A month ago

  • @Noaman2022
    @Noaman2022 15 days ago

    I am a new PHP developer. Which LLM can I use? I'm currently using DeepSeek.

    • @AICodeKing
      @AICodeKing  15 days ago

      DeepSeek is good at PHP.

  • @diakorudd7268
    @diakorudd7268 A month ago

    The frontend looks a lot like Coursera!

  • @lokeshart3340
    @lokeshart3340 A month ago

    Lightning AI OP

  • @yuyutsurao
    @yuyutsurao A month ago

    Do I need a GPU for this?

  • @legendarystuff6971
    @legendarystuff6971 A month ago +3

    I think I will go with Sonnet for orchestration and refining, and DeepSeek Coder for the agentic tasks.

  • @adamgkruger
    @adamgkruger A month ago

    Thanks!