How To Use Meta Llama3 With Huggingface And Ollama

  • Published: 22 May 2024
  • Llama 3 is now available on Hugging Face, Kaggle, and through Ollama.
    code: colab.research.google.com/dri...
    -------------------------------------------------------------------------------------------------
    Support me by joining the channel membership so that I can keep uploading videos like this:
    / @krishnaik06
    -----------------------------------------------------------------------------------
    Fresh Langchain Playlist: • Fresh And Updated Lang...
    ►LLM Fine Tuning Playlist: • Steps By Step Tutorial...
    ►AWS Bedrock Playlist: • Generative AI In AWS-A...
    ►Llamindex Playlist: • Announcing LlamaIndex ...
    ►Google Gemini Playlist: • Google Is On Another L...
    ►Langchain Playlist: • Amazing Langchain Seri...
    ►Data Science Projects:
    • Now you Can Crack Any ...
    ►Learn In One Tutorials
    Statistics in 6 hours: • Complete Statistics Fo...
    Machine Learning In 6 Hours: • Complete Machine Learn...
    Deep Learning 5 hours: • Deep Learning Indepth ...
    ►Learn In a Week Playlist
    Statistics: • Live Day 1- Introducti...
    Machine Learning : • Announcing 7 Days Live...
    Deep Learning: • 5 Days Live Deep Learn...
    NLP : • Announcing NLP Live co...
    ---------------------------------------------------------------------------------------------------
    My Recording Gear
    Laptop: amzn.to/4886inY
    Office Desk: amzn.to/48nAWcO
    Camera: amzn.to/3vcEIHS
    Writing Pad: amzn.to/3OuXq41
    Monitor: amzn.to/3vcEIHS
    Audio Accessories: amzn.to/48nbgxD
    Audio Mic: amzn.to/48nbgxD
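As a companion to the Colab link above, here is a minimal, hedged sketch of the two ways the video runs Llama 3: through the Hugging Face `transformers` pipeline and through Ollama's local REST API. The model ID and endpoint are the publicly documented ones; the meta-llama repo on the Hub is gated, so a Hugging Face access token is required for the first path, and `ollama pull llama3` must have been run for the second.

```python
# Two ways to run Llama 3: Hugging Face transformers, or Ollama's
# local REST server (default port 11434).
import json


def ollama_payload(prompt: str, model: str = "llama3") -> dict:
    """Request body for Ollama's local /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def run_with_transformers(prompt: str):
    # Lazy import so the sketch is readable without a GPU environment;
    # needs a HF token with access to the gated meta-llama repo.
    from transformers import pipeline
    pipe = pipeline(
        "text-generation",
        model="meta-llama/Meta-Llama-3-8B-Instruct",
        device_map="auto",
    )
    return pipe([{"role": "user", "content": prompt}], max_new_tokens=128)


def run_with_ollama(prompt: str) -> str:
    # Assumes `ollama pull llama3` was run and the server is listening.
    from urllib.request import Request, urlopen
    req = Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(ollama_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The transformers path gives you full control over the model weights; the Ollama path trades that for a one-command local install.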

Comments • 57

  • @rumingliu9787 • a month ago

    Thanks, sir. Very helpful. Just one question: what's the benefit of Ollama compared with Hugging Face? I guess it's that Ollama is deployed locally, but that puts some basic requirements on your laptop's hardware.

  • @sanadasaradha8638 • a month ago +20

    Instead of showcasing every new model, it would be better to implement a single open-source LLM across all use cases, including fine-tuning. At the same time, it would be better to build an end-to-end project with an open-source LLM.

    • @THOSHI-cn6hg • a month ago

      Agreed

    • @devagarwal3250 • a month ago +6

      There are already plenty of videos showing off new models.
      It would be better to make a video on how to implement one.

    • @KumR • a month ago +1

      I am with you. New models will keep coming. The focus needs to be on an end-to-end project.

    • @Shubhampalzy • a day ago

      How do I fine-tune it?
      I need some help building a custom chatbot trained on a custom dataset using Llama 3.
      Please help.

  • @siddhanthbhattacharyya4206 • a month ago

    Krish, I wanted to know what the prerequisites are for following your Langchain series. How much knowledge do I need?

  • @KunalDixitEdukraft • a month ago

    Hi Krish,
    Firstly, thanks for your consistent efforts to keep us updated and learning the latest tech in the realm of Data Science.
    How can I sponsor you on GitHub and earn a badge?

  • @gan13166 • a month ago

    Do we really need Ollama to run Llama 3? When we are able to download/clone the entire model from HF, do we still need Ollama to run it? What is the next step after you download the model from HF? How do we use it in the Langchain code without Ollama? Is that possible?
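For readers with the same question: Ollama is a convenience, not a requirement. A hedged sketch of wiring a downloaded Hub model into LangChain directly, assuming the `langchain-huggingface` and `transformers` packages (class and package names are from their public docs; the gated meta-llama repo needs an access token):

```python
# Wrapping a locally downloaded Hub model for LangChain without Ollama.

DEFAULT_MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"


def make_llm(model_id: str = DEFAULT_MODEL_ID):
    # Lazy imports: the heavy dependencies are only needed when
    # actually building the LLM.
    from transformers import pipeline
    from langchain_huggingface import HuggingFacePipeline
    hf_pipe = pipeline(
        "text-generation",
        model=model_id,      # a local directory path or a Hub ID both work
        device_map="auto",
        max_new_tokens=256,
    )
    return HuggingFacePipeline(pipeline=hf_pipe)
```

Usage would be `make_llm().invoke("your prompt")`; pointing `model_id` at the local download directory skips any re-download.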

  • @nitinjain4519 • 24 days ago

    When using the Llama 3 model, it sometimes gives me an incomplete answer. What can I do to avoid incomplete responses when generating from the Serverless Inference API?
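A common cause of truncated answers is the API's default generation-length cap; passing a larger `max_new_tokens` is the usual first fix. A hedged sketch using `huggingface_hub`'s `InferenceClient` (a valid API token is required):

```python
# Raising max_new_tokens to avoid truncated completions from the
# serverless Inference API.

def generation_params(max_new_tokens: int = 512) -> dict:
    """Settings forwarded to text_generation(); the token cap is the
    main control over completion length."""
    return {"max_new_tokens": max_new_tokens, "do_sample": False}


def ask(prompt: str) -> str:
    # Lazy import so the sketch can be read without the package installed.
    from huggingface_hub import InferenceClient
    client = InferenceClient(model="meta-llama/Meta-Llama-3-8B-Instruct")
    return client.text_generation(prompt, **generation_params())
```

If answers still cut off, the model may be stopping at an end-of-turn token mid-list, in which case reformulating the prompt to ask for a bounded answer also helps.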

  • @janneskleinau6332 • a month ago

    Please make a video on how to fine-tune LLaMA! I would appreciate it :) Love your videos, btw.

  • @vysaivicky4724 • a month ago

    Sir, one doubt: how much knowledge of DSA is required in the data science field? Please clarify.

  • @r21061991 • a month ago

    Hey Krish, it would be very helpful if you could do a session on how to use an offline LLM on a custom dataset for Q&A.

  • @JorgeLopez-gw9xc • 26 days ago

    I have Ollama on my computer and am currently using it to run AI models through Python. I need to correct complex instructions that I can only run with the 70B model; the problem is that, due to its complexity, it takes a long time to execute (2 minutes). How can I lower the times? Currently the model runs on the CPU; how can I configure Ollama to use the GPU?

  • @herashak • a month ago

    When doing question answering, I got an error about logits and LlamaForCausalLM not being compatible; not sure how you got that to work as you showed.
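A likely explanation, offered as an assumption about this error: Llama is a causal language model, so loading it under an extractive `question-answering` pipeline head produces a logits-shape mismatch. Folding the context and question into a `text-generation` prompt avoids it. Sketch:

```python
# QA with a causal LM: use the text-generation task, not the
# extractive question-answering head.

def qa_prompt(context: str, question: str) -> str:
    """Fold extractive-QA style inputs into a generation prompt."""
    return f"Context: {context}\n\nQuestion: {question}\nAnswer:"


def answer(context: str, question: str):
    # Lazy import; needs a GPU environment and a HF token in practice.
    from transformers import pipeline
    gen = pipeline(
        "text-generation",
        model="meta-llama/Meta-Llama-3-8B-Instruct",
        device_map="auto",
    )
    return gen(qa_prompt(context, question), max_new_tokens=64)
```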

  • @viratsasikishorevarma3535 • a month ago +1

    Hi Krish sir, I need help. Please make a video on this basic topic: how and why to set up a virtual environment for Python. ❤

  • @TheIITianExplorer • a month ago +1

    Guys, how do I fine-tune this model?
    I mean, I want to train it in the Colab free version.
    Does it support the QLoRA configuration?
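QLoRA does work with Llama 3, though the free Colab tier is tight for an 8B model. A hedged configuration sketch using the `peft` and `bitsandbytes` packages (API names are from their public docs; the hyperparameters here are illustrative, not tuned):

```python
# QLoRA setup: 4-bit base model + low-rank adapters.

def lora_hyperparams() -> dict:
    """Illustrative LoRA settings; tune for your dataset."""
    return {"r": 16, "lora_alpha": 32, "lora_dropout": 0.05}


def build_qlora_model(model_id: str = "meta-llama/Meta-Llama-3-8B-Instruct"):
    # Lazy imports: torch/transformers/peft are only needed at build time.
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    bnb = BitsAndBytesConfig(
        load_in_4bit=True,              # 4-bit weights keep the 8B model in VRAM
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=bnb, device_map="auto"
    )
    lora = LoraConfig(
        task_type="CAUSAL_LM",
        target_modules=["q_proj", "v_proj"],  # attention projections only
        **lora_hyperparams(),
    )
    return get_peft_model(model, lora)
```

Only the small adapter matrices are trained, which is what makes this feasible on a single consumer GPU.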

  • @kavururajesh1760 • a month ago

    Hi Krish, can you please upload a video on Moirai, the time-series LLM model?

  • @girishkumar862 • a month ago +3

    Hi, there will be 10 billion models coming in the future, and so on...

  • @asadurrehman3591 • 14 days ago

    Sir, please tell me about this error:
    RuntimeError: "triu_tril_cuda_template" not implemented for 'BFloat16'
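A plausible cause, offered as a hedged sketch: this RuntimeError typically appears when the model is loaded in bfloat16 on a GPU/PyTorch combination that lacks bf16 kernels. Falling back to float16 (or upgrading PyTorch) usually resolves it:

```python
# Pick a dtype the hardware actually supports before loading the model.

def pick_dtype(bf16_supported: bool) -> str:
    """Safe dtype name for from_pretrained's torch_dtype argument."""
    return "bfloat16" if bf16_supported else "float16"


def load_model(model_id: str = "meta-llama/Meta-Llama-3-8B-Instruct"):
    # Lazy imports; requires a CUDA-enabled environment.
    import torch
    from transformers import AutoModelForCausalLM
    dtype = (torch.bfloat16 if torch.cuda.is_bf16_supported()
             else torch.float16)
    return AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=dtype, device_map="auto"
    )
```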

  • @vamsitharunkumarsunku4583 • 7 days ago

    How do I download the Llama 3 model locally from NVIDIA NIM? Kindly make a video on it, please. Thank you.

  • @claudiograssi5192 • a month ago

    To run it locally, which GPU do you use?

  • @cairo8905 • 22 days ago

    Hi,
    I have a voice model on Google Drive,
    but I don't know how to upload it to Hugging Face.
    Can you tell me how to upload it?
    Or I could give you the model link and you upload it, if you don't mind 😁

  • @mhemanthkmr • a month ago +1

    Hi Krish, I also tried Llama 3 in Ollama; the response is slow, but on your machine the response is fast. If you are using a GPU, what GPU are you using?

    • @shotbotop3790 • a month ago

      He has a Titan RTX (around 64 GB VRAM) 💀

  • @KumR • a month ago

    New models will keep mushrooming every day. I think the videos should now focus on more end-to-end projects using these models: not just sentiment analysis, language translation, or text summarization, but some real-life project, end to end.

  • @itzmeakash9695 • a month ago

    Hello sir, I have a doubt. Is there any platform for finding the latest research papers to read? Also, how can I stay updated on the latest developments in the fields of general AI and AI?

  • @happyhours.0214 • a month ago +2

    Sir, please make an LLM video on how to train LLM models on custom data.

  • @ChemFam. • a month ago

    Sir, how and from where will we get the API key?

  • @tejas4054 • a month ago

    ChatGPT does this job too, so why should we use Llama?

  • @0f9yxtizitdl • a month ago

    Liked your new look, Mr. Clean.

  • @OmSingh-ng3np • a month ago +1

    This can be fine-tuned in the same way, right?

    • @Shubhampalzy • a day ago

      How do I fine-tune it?
      I need some help building a custom chatbot trained on a custom dataset using Llama 3.
      Please help.

  • @spiritualworld842 • a month ago

    Sir, I'm totally stuck between the data field and the software field. Please suggest how I can overcome this depression 😢😪

    • @tejas4054 • a month ago

      The best way: don't watch YouTube, it's too overloaded. With so much out there, travel back in time: use books, read programming books, and use pen and paper. This information overload on YouTube is dangerous.

  • @rishiraj2548 • a month ago

    🙏🙂

  • @itxmeJunaid • a month ago

    😮

  • @Superteastain • 29 days ago

    This guy's good.

  • @podunkman2709 • 2 days ago

    Take a look at this demo:
    >>> How many liters of water per minute can a Dutch windmill pump out?
    That's an interesting question!
    The answer depends on the specific design and size of the windmill. However, I can give you some general
    information.
    Traditionally, Dutch windmills are designed to pump water from shallow sources, such as polders or wells, for
    irrigation purposes. The capacity of these windmills varies greatly, but a typical small to medium-sized windmill
    might be able to pump around 0.5 to 2 liters per minute (L/min).
    Some larger industrial-scale windpumps can pump much more, up to 10-20 L/min or even more, depending on the
    turbine design and the pressure head of the water.
    Really? What a sh****

  • @kshitijnishant4968 • a month ago

    My command prompt raises an error saying Ollama is not found. Any help, guys?

    • @krishnaik06 • a month ago

      You need to download and install it.

  • @tarunmohapatra5734 • a month ago

    Sir, please activate Neurolab.

  • @deepak4166 • a month ago

    WhatsApp Meta AI is awesome 🎉

  • @danielfischer4079 • 28 days ago

    Ollama is downloading really slowly for me, anyone else?

  • @tejas4054 • a month ago

    Why do you keep shaking your legs in the video, bro?

  • @mohsenghafari7652 • a month ago

    Thanks, Krish.
    Please answer my email ❤

  • @AnkitVerma-62990 • a month ago

    First comment 😅

  • @JourneyWithMystics • 5 days ago

    Bhaiya, how do I convert a Hindi video into Hindi text? Please 🙏 reply, much needed ❤