Bolt.new + Ollama: AI Creates APPS 100% Local in Minutes

  • Published: 24 Nov 2024

Comments • 84

  • @noahgottesla3439 • 16 days ago +1

    Yo. You just turned a tedious, fiddly process into a juicy episode, which is great long-term for the YouTube algorithm.

  • @fourlokouva • 13 days ago

    Thank you for your contributions, Mervin! Including a repo link in the description would be helpful, just saying.

  • @CraigRussill-Roy • 17 days ago

    Love your videos - nice work on showing warts and all

  • @d.d.z. • 17 days ago

    I like this approach. Thank you Mervin

  • @naturelife418 • 16 days ago

    Great job, well done!

  • @tonywhite4476 • 17 days ago

    This is awesome!!!

  • @tenkhee6205 • 5 days ago +1

    Hi, what are your computer specifications?

  • @edgarmatthee2668 • 5 days ago

    Thanks man, I managed to set mine up. I am struggling to add additional API keys, please help.

  • @Revontur • 17 days ago +1

    Good video as always, but I think you missed out the creation of the .env.local file.

    • @HikaruAkitsuki • 16 days ago

      He assumed that you already know how to use Ollama and make the env tweaks.

    • @Luis-Traders • 4 days ago

      He skipped a lot of things.

  • @CalsProductions • 17 days ago +2

    Hey Mervin, you forgot to tell your viewers to change the .env file to use the Ollama local API.

    • @ahmadghazali4014 • 17 days ago +3

      Yes, I want to know how to set .env for Ollama.

    • @Luis-Traders • 4 days ago

      Exactly, that's why you should give the video a thumbs down, so that he does things properly the way they should be done, not just make the video to monetize.

    • @CalsProductions • 2 days ago

      @Luis-Traders 😃
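The two threads above ask how to point the fork at Ollama via the env file. A minimal sketch of a .env.local in the repo root, assuming the variable names from the fork's .env.example template (OLLAMA_API_BASE_URL and DEFAULT_NUM_CTX are assumptions from that template; the port is Ollama's default):

```
# .env.local — create it next to .env.example in the cloned repo root
# Base URL of the local Ollama server; no API key is needed for local inference
OLLAMA_API_BASE_URL=http://127.0.0.1:11434

# Optional: raise the context window Bolt requests from the model
DEFAULT_NUM_CTX=32768
```

Restarting the dev server after editing the file is usually required for the change to take effect.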

  • @MuratKanik • 3 days ago

    @3:16 When I select Ollama, nothing is there. What could be the issue? I believe it is related to the bolt-ai-dev warning: pull access denied for bolt...

  • @saiganesh1074 • 6 days ago +1

    Sir, can you explain clearly: after creating the model file, in which terminal do we need to give the command "ollama create -f modelfile qwen2.5-large:7b"? And after installing Docker, where do we give the command "docker-compose --profile development up"? Please clarify my doubt, sir...

    • @husenpatel9381 • 9 hours ago

      The docker compose command is run in the bolt.new-any-llm folder that you cloned from GitHub.
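Pulling the scattered commands in this thread together: a sketch of the order of operations, assuming Git, Docker, and Ollama are already installed. The repo URL is not given in the thread, so a placeholder is used.

```shell
# 1. Clone the fork and enter the folder (substitute the real bolt.new-any-llm URL)
git clone <repo-url> bolt.new-any-llm
cd bolt.new-any-llm

# 2. Build the custom model from the modelfile, run in this same folder
ollama create -f modelfile qwen2.5-large:7b

# 3. Start the development container, also from this folder
docker-compose --profile development up
```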

  • @JNET_Reloaded • 17 days ago

    Nice :D

  • @mrsebbig • 14 days ago +1

    Many thanks for the great video.
    One question: where is the LLM downloaded to? I want to free up space to try another LLM. How can I delete the 4 GB again?

    • @mahamadsuhail6544 • 12 days ago +1

      ollama rm

    • @mitchellrcohen • 12 days ago +1

      On Mac the files are hidden; you have to press Command+Shift+. I think. Or Command+Shift+R.
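On the disk-space question above: Ollama keeps pulled model weights in its own blob store (by default under ~/.ollama/models on macOS and Linux), and the CLI is the cleanest way to inspect and reclaim that space. A quick sketch:

```shell
# Show installed models with their names and sizes
ollama list

# Delete a model by name to reclaim its ~4 GB
ollama rm qwen2.5-large:7b
```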

  • @marinob7433 • 17 days ago

    It works perfectly, but a GPU is a must to get the speed. For me, when I asked, it corrected some files.

  • @shay5338 • 17 days ago

    you should have given him credit!

  • @ZeeQueIT • 15 days ago

    Well, why not use an OpenRouter API key to test the local Bolt? There you can find the big models free to use, and the bigger context lengths as well.

  • @sefyou4171 • 14 days ago

    Can you show how we can use other LLMs, not only qwen2.5-large:7b?

  • @viangelo4z595 • 17 days ago

    Thank you very much

  • @modoulaminceesay9211 • 17 days ago

    Thanks

  • @andrinSky • 12 days ago

    Hello, do you perhaps know how I can import an existing project into Bolt.new locally so that I can continue working on it?

  • @marombeiro_cortes1 • 10 days ago

    Ollama API Key:
    Not set (will still work if set in .env file)

  • @zabique • 10 days ago

    I have this problem: Linux with an RTX 4090, where I can run the qwen32b model at full speed (GPU) via Ollama in the terminal, but as soon as I run it via Bolt it runs on the CPU instead, at snail speed. VRAM usage goes to 22 GB but there is no GPU utilization.

    • @navaneethk5099 • 5 days ago +1

      To run 32B models effectively, you need at least 64 GB of RAM, though 128 GB is recommended for smooth performance. In your case, the issue happens because the model exceeds your RAM capacity, forcing the system to use virtual memory (swap) on your SSD, which is significantly slower than RAM. This delay occurs before the data reaches the GPU. While your GPU (RTX 4090) is powerful and its VRAM is utilized, the bottleneck lies in RAM and swap processing. So try upgrading your RAM 😀
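The sizing advice above follows from rough arithmetic: at 4-bit (Q4) quantization a model needs about half a byte per parameter for its weights alone, before the KV cache and OS overhead. The 0.5 bytes/parameter figure is a common rule of thumb, not from the video. A sketch:

```shell
# Weight memory at Q4 quantization: parameters (in billions) / 2 ≈ gigabytes
params_b=32
echo "$(( params_b / 2 )) GB"   # 32B model: ~16 GB of weights alone
params_b=7
echo "$(( params_b / 2 )) GB"   # 7B model: ~3 GB (integer division rounds down)
```

The 64-128 GB recommendation above adds headroom so the weights, the context cache, and the rest of the system all fit in RAM without swapping.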

  • @munduzikkachok1936 • 13 days ago +1

    Why the hell does it not ask for an API key when you install it, but when I do it, it does?

  • @Jai-qf8lw • 10 days ago

    @MervinPraison how did you get Cursor to suggest lines of code in the terminal?

  • @HikaruAkitsuki • 16 days ago

    How do you find out the max context length for each parameter size?

  • @ShyamnathSankar-b5v • 17 days ago

    I have used gemini-1.5-pro but it is also not working. Gemini has a context length of 2M, so there is some problem with the software itself.

  • @John-ek5bq • 17 days ago

    What is the best LLM for coding apps?

  • @MagicBusDave • 13 days ago

    Just can't get Ollama models to appear under Ollama. Two hours of diagnosing with Claude and still nothing; everything appears to be running.

  • @elliscaicedo9045 • 8 days ago

    Why don't any tutorials explain anything about calling the APIs to make the model work? They always skip that important part.

  • @naturelife418 • 16 days ago

    For some reason my instance defaults to Anthropic no matter if I select Ollama. The way I discovered that is that the Anthropic key was not set, and it complained about it in the error output even when Ollama models were selected.

  • @MIkeGazzaruso • 13 days ago

    Can this also generate a backend, or only a frontend? If only a frontend, it's a waste of time.

  • @mr.gk5 • 17 days ago

    Can I also use an OpenAI API key for this application?

  • @SuperstarSoccerLegends • 12 days ago

    How can I reach this "terminal"?

  • @JNET_Reloaded • 17 days ago

    Do we need a graphics card for this?

  • @John-ek5bq • 17 days ago

    Is bolt.new using Claude by default?

  • @ShaikSadiq-zs6yj • 15 days ago

    Where can I search for the modelfile?

  • @wobbelskanker • 14 days ago

    I've done everything several times, but I'm not getting the actual code and files.

    • @MervinPraison • 14 days ago

      Did you try increasing the context length?
      Also try various models.

    • @wobbelskanker • 14 days ago

      @MervinPraison Thanks for the reply. Do I need to change the modelfile or any of the commands if I change the model? Also, should I increase it more than you have set?

  • @dylanwarrener5857 • 14 days ago

    I think this is a good start, but it is still not that powerful, and for someone who already codes fairly quickly, this feels much slower at the moment. Give it a couple of years and I reckon this might be worth it.

  • @mikevanaerle9779 • 16 days ago

    How do you make the modelfile? What kind of file does it have to be? I have no clue how to make this in my CMD prompt (Windows computer).

    • @MervinPraison • 16 days ago

      Right-click and create a new file. Name it modelfile.

    • @mikevanaerle9779 • 16 days ago

      @MervinPraison Thanks. But what kind of file format do I need to make? Just a folder, txt, ...?

    • @MervinPraison • 16 days ago

      @mikevanaerle9779
      Create modelfile.txt and run the command below:
      ollama create -f modelfile.txt qwen2.5-large:7b
      Doc: mer.vin/2024/11/bolt-new-ollama/

    • @mikevanaerle9779 • 16 days ago

      @MervinPraison Thank you

    • @mikevanaerle9779 • 16 days ago

      @MervinPraison For me it does not show the preview or the code on the right.
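For anyone still stuck on what goes inside the modelfile: the video's exact file is not reproduced in this thread, but an Ollama Modelfile that derives a larger-context variant typically needs only a FROM line and a num_ctx parameter. The base model name and context size below are illustrative assumptions, not the video's values:

```
# modelfile.txt — build with: ollama create -f modelfile.txt qwen2.5-large:7b
FROM qwen2.5-coder:7b
PARAMETER num_ctx 32768
```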

  • @GusRJ70 • 17 days ago

    To run more than 7B, we will need more RAM, right? 64 GB or more?

    • @WebWizard977 • 17 days ago +2

      Yep

    • @matrix01mindset • 17 days ago

      25 GB RAM

    • @GamersPlus- • 12 days ago +2

      I'm running 2x 16 GB DDR4, an i5 11th-gen 114k at 2.70 GHz with 6 cores/12 threads and a 4-6 GB GPU; my laptop handles 7b, 8b, 11b & 16b. Any higher starts to slooow down lol.

  • @tomasbusse2410 • 17 days ago

    Can I install this from within the VS terminal?

    • @MervinPraison • 17 days ago +1

      Yes, you can use any terminal.

    • @tomasbusse2410 • 17 days ago +1

      Oh great, this really looks interesting; I will try to install it. Thanks.

  • @moonduckmaximus6404 • 15 days ago

    DOES THIS WORK ON WINDOWS?

  • @writetopardeep • 17 days ago

    What are we talking about here? An API?

    • @carstenli • 17 days ago

      This fork of bolt.new enables the use of any provider, including local (on-machine) inference provided by Ollama, as in this example.

  • @clemenceabel5494 • 16 days ago

    Hey, I saw your videos. They're great and informative but your thumbnails are not appealing enough. I think you should hire a Professional Thumbnail Artist for your videos to increase your view count cause every impression matters. I can improve your ctr from 2-3% to 15%. Please acknowledge and share your contact details to get your thumbnail.

  • @Luis-Traders • 4 days ago

    This is the kind of video and content that you should give a thumbs down, for the simple reason that the explanation was not clear. There are things you cannot manage to do, like the modelfile part (how did he enter the third line?), among other details. This is a video made only to monetize and earn money, not to really teach.

  • @ShaikSadiq-zs6yj • 14 days ago

    Could you please do a proper video explanation again? This is not a good explanation, sir.

    • @trilloclock3449 • 4 days ago

      Literally nobody is explaining it properly for people like us to understand. No one on YouTube. Smh.

  • @Giulio.t • 17 days ago +3

    I don't understand anything from the start... You say "In your terminal", but what terminal? Dude, you can't start a video tutorial by assuming certain things.

    • @jstthomas1111 • 17 days ago

      The Visual Studio Code terminal, or your preferred IDE's. You are cloning the GitHub repository.

    • @zipaJopa • 17 days ago +8

      You can't expect the tutorial to start with an explanation of how to turn your computer on.

    • @ShadowDoggie • 17 days ago +2

      @zipaJopa You must be the funniest person at home... This YouTuber didn't even share the git clone command as he claimed in his video.

    • @zipaJopa • 17 days ago

      @ShadowDoggie But it's bolt.new-any-llm?

    • @carstenli • 17 days ago +2

      Terminal / Shell / Console / Command Line all mean the same thing.