Super Lazy Coder
  • Videos: 393
  • Views: 203,594
Build Llama3.2 chat api with JAVA + @GroqInc #generativeai #chatbot #llama3 #projectidx #lowcode
Join this channel to get access to perks:
@superlazycoder1984
Click "Super Thanks" to support this channel
Introducing LangChain for Java using the langchain4j-open-ai package in a Spring project. In this video we use this package to integrate Llama 3.2, served via the Groq API, into a Spring REST controller. We also experiment with Llama 3.1 70B and compare the results, all of it in Java (no Python was used here).
Everything was done in Project IDX, which requires no installation and is completely free.
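The video builds the integration with langchain4j; as a rough sketch of what the underlying request looks like, here is a plain java.net.http version against Groq's OpenAI-compatible chat completions endpoint. The model name and the GROQ_API_KEY environment variable are assumptions for illustration, not taken from the video:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GroqChat {
    // Groq exposes an OpenAI-compatible chat completions endpoint.
    static final String ENDPOINT = "https://api.groq.com/openai/v1/chat/completions";

    // Build a minimal JSON body with a single user message.
    static String buildBody(String model, String userMessage) {
        return "{\"model\":\"" + model + "\","
             + "\"messages\":[{\"role\":\"user\",\"content\":\""
             + userMessage.replace("\"", "\\\"") + "\"}]}";
    }

    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("GROQ_API_KEY"); // assumed env var name
        if (apiKey == null) {
            System.out.println("GROQ_API_KEY not set; skipping the request");
            return;
        }
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(ENDPOINT))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        buildBody("llama-3.2-3b-preview", "Hello from Java!")))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // raw JSON containing the assistant reply
    }
}
```

langchain4j wraps this same endpoint behind a chat-model abstraction, so the Spring REST controller in the video only needs to call the chat model rather than hand-roll HTTP.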
---------------------------------------------
[Key Takeaways]
🦙 Llama 3.2 and Groq Integration: Gain an introduction to the Llama 3.2 model and learn how to integrate it with the Groq API in Java, expandin...
Views: 43

Videos

Project IDX + Cline (Claude Dev) + Llama3: This will blow your mind #free #nocode #lowcode #google
132 views · 1 day ago
Join this channel to get access to perks: @superlazycoder1984 Click "Super Thanks" to support this channel Project IDX is an AI-assisted workspace for full-stack, multiplatform app development in the cloud. With support for a broad range of frameworks, languages, and services, alongside integrations with your favorite Google products, IDX streamlines your development workflow so you can build a...
Fastest RAG pipeline with Llama3.2 & LlamaIndex for FREE #llama3 #lowcode #generativeai #ai #llm
1.1K views · 14 days ago
Join this channel to get access to perks: @superlazycoder1984 Click "Super Thanks" to support this channel Meta recently released Llama 3.2, which includes small and medium-sized vision LLMs (11B and 90B), and lightweight, text-only models (1B and 3B) that fit onto edge and mobile devices, including pre-trained and instruction-tuned versions. In this tutorial, we walk you through the steps of u...
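The tutorial itself uses LlamaIndex in Python; to illustrate just the retrieval step of a RAG pipeline (embed chunks, rank by cosine similarity against the query embedding, keep the top k), here is a toy Java sketch with hand-made vectors standing in for real embeddings — an assumption for illustration only:

```java
import java.util.Comparator;
import java.util.List;

public class ToyRetriever {
    // A document chunk paired with its (here hand-made) embedding vector.
    record Chunk(String text, double[] embedding) {}

    // Cosine similarity between two equal-length vectors.
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Rank chunks by similarity to the query embedding and keep the top k;
    // a real pipeline would get these vectors from an embedding model.
    static List<Chunk> topK(List<Chunk> chunks, double[] query, int k) {
        return chunks.stream()
                .sorted(Comparator.comparingDouble((Chunk c) -> -cosine(c.embedding(), query)))
                .limit(k)
                .toList();
    }

    public static void main(String[] args) {
        List<Chunk> chunks = List.of(
                new Chunk("Llama 3.2 ships 1B and 3B text models", new double[]{0.9, 0.1}),
                new Chunk("Groq serves models over an OpenAI-style API", new double[]{0.1, 0.9}));
        // A query embedding close to the first chunk's vector retrieves that chunk.
        for (Chunk c : topK(chunks, new double[]{1.0, 0.0}, 1)) {
            System.out.println(c.text());
        }
    }
}
```

The retrieved chunks would then be pasted into the LLM prompt as context, which is the part LlamaIndex automates in the video.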
Effortless Finetuning of Google Gemini Models: No-Code Walkthrough #finetuning #nocode #colab #genai
115 views · 21 days ago
Join this channel to get access to perks: @superlazycoder1984 In this tutorial, we walk you through fine-tuning models in Google AI Studio using a no-code approach. You'll learn how to select data sources, configure model hyperparameters, fine-tune your model, and access it via API. We also show how to integrate these tuned models into Google Colab for easy deployment. Perfect for beginners and...
Image story generator with Llama3.2 11B Vision #llama3 #imagetotext #content #generativeai #free #ai
839 views · 1 month ago
Join this channel to get access to perks: @superlazycoder1984 Meta recently released Llama 3.2, which includes small and medium-sized vision LLMs (11B and 90B), and lightweight, text-only models (1B and 3B) that fit onto edge and mobile devices, including pre-trained and instruction-tuned versions. In this tutorial, we walk you through the steps of using the powerful Llama3.2 Vision model to ge...
Try Llama3.2 (11B, 90B) for FREE. Create App with screenshots. #nocode #generativeai #llama3 #vision
1.6K views · 1 month ago
Join this channel to get access to perks: @superlazycoder1984 Meta recently released Llama 3.2, which includes small and medium-sized vision LLMs (11B and 90B), and lightweight, text-only models (1B and 3B) that fit onto edge and mobile devices, including pre-trained and instruction-tuned versions. In this video, We'll be trying the new Llama-3.2 Vision Models (11B, 90B) for free with Napkins.d...
Use Llama3.1 405B 100% free with SAMBANOVA World's Fastest AI Inference #ai #free #opensource #llama
232 views · 1 month ago
Join this channel to get access to perks: @superlazycoder1984 SAMBANOVA is an AI inference platform which provides free access to Llama3.1 405B model along with its other variations. Artificial Analysis has independently benchmarked SambaNova as achieving record speeds of 132 output tokens per second on their Llama 3.1 405B cloud API endpoint. In this video we will be looking at SambaNova and i...
Aider + Llama3/Gemini : Free AI Pair Programmer BETTER than Github's Copilot #nocode #copilot #ai
213 views · 1 month ago
Hello everyone, in this video I will show you a tutorial for Aider, an AI pair programmer that runs in your terminal. We will use Aider free of cost with Gemini and Llama 3 through @GroqInc. Aider lets you pair program with LLMs to edit code in your local git repository. Start a new project or work with an existing git repo. Aider works best with GPT-4o & Claude 3.5 Sonnet and can connect to...
New announcement !! @HuggingFace presents SQL console. #huggingface #sql #datasets #datascience #ai
48 views · 1 month ago
NEW SQL Console on Hugging Face Datasets Viewer 🦆🚀 🔸 Run SQL on any public dataset 🔸 Powered by DuckDB WASM running entirely in the browser 🔸 Share your SQL Queries via URL with others! 🔸 Download the dataset as a parquet file Thank you so much for watching. Please like, share and subscribe. 0:00 Introduction 0:22 SQL Console 2:20 Like, Share and Subscribe [Link's Used]: Hugging Face link - hug...
Generate story from images with Llava 1.5 and Llama 3.1 #groq #llama3 #generativeai #contentcreation
524 views · 1 month ago
LLaVA (Large Language-and-Vision Assistant) is a powerful vision model that combines the capabilities of Large Language Models (LLMs) with image analysis. This open-source model can answer visual questions, generate captions, and perform Optical Character Recognition (OCR), making it an ideal solution for applications that require image-based text generation. The llava-v1.5-7b-4096 model is now...
Personal Copilot with LlamaCoder, No installation, Llama3.1 405B #local #llama3 #free #generativeai
387 views · 2 months ago
Llama Coder is an open-source take on Claude Artifacts which can be used to generate small apps from one prompt. Powered by Llama 3 405B & Together.ai. In this video we will run this repo completely locally, without any installation and free of cost. Thank you so much for watching. Please like, share and subscribe. 0:00 Introduction 0:44 What is LlamaCoder 1:57 Tech Stack for LlamaCoder 3:09 Cloning...
THE BEST AI enabled programming IDE of 2024 REPLIT #coding #ide #generativeai #ai #ml #codinglife
224 views · 2 months ago
Filter UNSAFE prompts with Llama Guard 3 #contentmoderation #llama3 #generativeai #promptengineering
173 views · 2 months ago
Apply Llama 3.1 @meta with Typescript and @GroqInc in 5 mins #typescript #groq #llama3 #llm #ai
127 views · 2 months ago
Build RAG pipeline with Llama 3.1 @meta @HuggingFace @GroqInc @LlamaIndex #generativeai #llama3
960 views · 3 months ago
AI vs. Human: Rock Paper Scissors Showdown with Generative AI! ft. @HuggingFace , @OpenAI @Google
86 views · 3 months ago
Leetcode 202. Happy Number#leetcode #faang #interview #codinginterview #leetcodequestions
35 views · 3 months ago
No code finetuning with Gradient.AI in 15 mins #generativeai #finetuning #llm #machinelearning #ai
139 views · 3 months ago
Finetune NousHermes2 with GradientAI and @LlamaIndex in 5 min #finetuning #generativeai #llamaindex
90 views · 4 months ago
Multi-modal RAG with LlamaIndex and @Google Gemini - ft. Messi #llamaindex #gemini #generativeai
393 views · 4 months ago
Text to SQL RAG pipeline with LlamaIndex, Llama3 and Groq in 15 mins #groq #llama3 #llamaindex #llm
826 views · 4 months ago
Create RAG pipeline with LlamaIndex, Llama3 and Groq in 20 mins #groq #llama3 #llamaindex #llm
785 views · 4 months ago
Structured response from Llama3 with Groq #json #groq #llama #mixtral #mistral #llm #generativeai
440 views · 5 months ago
Run AI applications with Zero GPU on @HuggingFace #gpu #generativeai #huggingface #ai #gpumining
1.2K views · 5 months ago
@HuggingFace x LangChain: Best AI combo #generativeai #huggingface #langchain #largelanguagemodels
271 views · 5 months ago
Finetune custom models with Huggingface AutoTrain Spacerunner #huggingface #generativeai #lowcode
532 views · 5 months ago
Leetcode 380. Insert Delete GetRandom O(1) #leetcode #faang #interview #coding #leetcodesolutions
44 views · 5 months ago
Fastest finetuning of Phi3 with LlaMa-Factory in 15 mins #generativeai #llama #finetuning #microsoft
2.9K views · 5 months ago
Leetcode 1984. Minimum Difference between highest and lowest of k scores #leetcode #faang #interview
77 views · 6 months ago
Track autotrain finetuning in real time with WANDB #generativeai #huggingface #nocode #wandb.ai #ml
248 views · 6 months ago

Comments

  • @user-hl1nf8jo7i
    @user-hl1nf8jo7i 10 days ago

    Hi, very nice video. I think this will be a game changer for Flutter development. Do you have to pay in order to have access to the Claude API? Thanks!

    • @superlazycoder1984
      @superlazycoder1984 10 days ago

      @@user-hl1nf8jo7i Yes, for Claude you have to pay via Anthropic.

  • @gu9838
    @gu9838 14 days ago

    So upset that Altspace vanished; it was really cool!

  • @paulyflynn
    @paulyflynn 16 days ago

    Thanks for the overview.

  • @muzammildafedar1909
    @muzammildafedar1909 19 days ago

    Nice! Without using an API key, would we be able to run this using open source, basically Llama 3.2 1B? So we can save on resource costs.

    • @superlazycoder1984
      @superlazycoder1984 18 days ago

      I think so. With Ollama it should be possible. Good idea, I will try it too.

  • @pavankumarreddy9871
    @pavankumarreddy9871 19 days ago

    Does LangChain support Groq Llama 3.2 vision?

  • @harishraya67
    @harishraya67 23 days ago

    Dude, monetize your channel, because you deserve more subscribers... 😊

  • @NanoGi-lt5fc
    @NanoGi-lt5fc 25 days ago

    Hey di, we are not receiving notifications of your videos. Do you have a Telegram channel?

    • @superlazycoder1984
      @superlazycoder1984 25 days ago

      @@NanoGi-lt5fc Not sure. Did you click the bell icon?

    • @NanoGi-lt5fc
      @NanoGi-lt5fc 24 days ago

      @@superlazycoder1984 Yes di, I have clicked the bell icon. If you say so, I will double click it 😂

    • @superlazycoder1984
      @superlazycoder1984 24 days ago

      @@NanoGi-lt5fc Hehe, I understand. I will raise it with YouTube support. Thank you for letting me know.

  • @shubhamk840
    @shubhamk840 28 days ago

    You just explained what needs to be done, not why!

    • @superlazycoder1984
      @superlazycoder1984 27 days ago

      Not sure if you meant the logic, but there could be multiple ways to do this. I just chose the one which suited best at the time.

    • @shubhamk840
      @shubhamk840 26 days ago

      @@superlazycoder1984 It's okay, that is fine. We are thankful for the efforts you put in.

  • @THAKURPRAJWALSINGH-o7o
    @THAKURPRAJWALSINGH-o7o 28 days ago

    Excellent.

  • @bagguji1455
    @bagguji1455 1 month ago

    Can we use Llama 3.2 for image generation?

    • @superlazycoder1984
      @superlazycoder1984 1 month ago

      Not yet. Right now it can only be used for image analysis.

    • @bagguji1455
      @bagguji1455 1 month ago

      @@superlazycoder1984 thanks

  • @bagguji1455
    @bagguji1455 1 month ago

    Thanks brother, it means a lot!

  • @yogesh-ru7uu
    @yogesh-ru7uu 1 month ago

    Can you make a video on the theory of LLM evaluation techniques, and also on fine-tuning LLMs without AutoTrain?

    • @superlazycoder1984
      @superlazycoder1984 1 month ago

      @@yogesh-ru7uu That's a great idea. Will definitely try to make a video on it.

  • @NanoGi-lt5fc
    @NanoGi-lt5fc 1 month ago

    Awesome di 🎉

  • @UtkarshRishi-q3o
    @UtkarshRishi-q3o 1 month ago

    You could also use Gemini Flash; it is faster than any vision LLM.

  • @UtkarshRishi-q3o
    @UtkarshRishi-q3o 1 month ago

    Great

  • @UtkarshRishi-q3o
    @UtkarshRishi-q3o 1 month ago

    Wow. But can you make this a real-time vision model in Python, for Jarvis-like assistants?

    • @superlazycoder1984
      @superlazycoder1984 1 month ago

      I think if you use OpenCV to read video frames, then you can send the image frames to Llama 3.2 Vision and the LLM can do near-real-time analysis on them. It might still be a bit slow, but that's how I would do it.

    • @UtkarshRishi-q3o
      @UtkarshRishi-q3o 1 month ago

      @@superlazycoder1984 Yes please

  • @WaveSurfer
    @WaveSurfer 1 month ago

    Can you tell me how to use Chroma DB as a vector store in this?

    • @superlazycoder1984
      @superlazycoder1984 1 month ago

      Hi, I haven't tried it, but hopefully this can help you sort it out: docs.llamaindex.ai/en/stable/examples/vector_stores/chroma_metadata_filter/

  • @slc388
    @slc388 1 month ago

    Hi sister, please make a video on unsupervised LLM fine-tuning. I mean, if we have data in PDF, text, etc., and the data belongs to a specific domain, how can we fine-tune on it? This would count as unsupervised. Please make a video on this topic.

    • @superlazycoder1984
      @superlazycoder1984 1 month ago

      If I understand correctly, for unsupervised data it may be better to build a RAG pipeline and answer questions from the document data.

    • @slc388
      @slc388 1 month ago

      @@superlazycoder1984 No, your understanding is not correct, because when RAG fails we choose the fine-tune option. Whatever LLM we have is already trained on huge data, and we also have RAG (a knowledge base on custom data). When we ask a question, it first goes to the DB (Chroma, Pinecone, Weaviate, FAISS) and fetches the top k based on similarity search (cosine, Euclidean, dot product); then the responses go to the LLM, which analyses these answers together with its own knowledge and sends back an answer. If the response is not relevant to the question we are asking, RAG has failed, so we then fine-tune on custom data. There are two ways to fine-tune: supervised (labelled data) and unsupervised (unlabelled data). I hope you understand.

    • @superlazycoder1984
      @superlazycoder1984 1 month ago

      As of now I have only come across supervised fine-tuning. Unsupervised LLM training is done, but for fine-tuning I have only seen supervised. That is why I suggested RAG for your use case, so you can read data from any type of document or text.

  • @Emran-i8u
    @Emran-i8u 1 month ago

    Amazing

  • @NanoGi-lt5fc
    @NanoGi-lt5fc 1 month ago

    Thanks for uploading di!!!

  • @denkling
    @denkling 1 month ago

    Really great introduction. Thanks. I'm now interested in AI :)

  • @NanoGi-lt5fc
    @NanoGi-lt5fc 1 month ago

    Thanks for uploading, mam. Is Llama 3 available on Hugging Face?

    • @superlazycoder1984
      @superlazycoder1984 1 month ago

      The 8B model is available, but I couldn't find the 405B one: huggingface.co/docs/transformers/en/model_doc/llama3

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 month ago

    Who is the team behind it?

    • @superlazycoder1984
      @superlazycoder1984 1 month ago

      The company behind it is SambaNova Systems, founded in 2017: sambanova.ai/enterprise-ai-company

  • @mal-avcisi9783
    @mal-avcisi9783 1 month ago

    This is so expensive. I need a way to use an H100 24/7 for about 5 dollars per month.

    • @superlazycoder1984
      @superlazycoder1984 1 month ago

      Unfortunately I don't know either. The A100 is paid on the Google Colab Pro subscription. Worth checking if they have a discount.

  • @global.pradachan
    @global.pradachan 1 month ago

    This is the greatest video I've seen on Aider.

  • @regularmail8085
    @regularmail8085 1 month ago

    Make a video on Android Jetpack Compose also.

    • @superlazycoder1984
      @superlazycoder1984 1 month ago

      Sure, will try to explore that too. For now I made an introduction video on Jetpack with Wear OS: ruclips.net/video/wII1J9VrK2A/видео.html if you want to check that out.

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 month ago

    Is it a security risk?

    • @superlazycoder1984
      @superlazycoder1984 1 month ago

      @@user-wr4yl7tx3w As we will be using Aider locally, it's pretty safe in my opinion. If you don't trust the LLM from Groq, you can even run the models locally with Ollama.

  • @NanoGi-lt5fc
    @NanoGi-lt5fc 1 month ago

    Wow, that's cool didi, thanks for uploading!

  • @VenkatesanRamasamy-g4x
    @VenkatesanRamasamy-g4x 1 month ago

    Great video

  • @sebastianeyzaguirre8394
    @sebastianeyzaguirre8394 1 month ago

    Great video! Just wanted to ask if you have ever used a fine-tuned model in a Python file with the transformers library? I'm trying to figure it out but I'm running into config.json and some other file problems. Would be great to get some feedback!

    • @superlazycoder1984
      @superlazycoder1984 1 month ago

      I made a video on fine-tuning on Hugging Face with the Trainer library: ruclips.net/video/OCNraV2Toa0/видео.html. You can check it out.

  • @chomchom216
    @chomchom216 1 month ago

    Thank you very much for your didactic video. I guess some libraries are deprecated, since I get this error: ModuleNotFoundError: No module named 'milvus_lite'. Any idea how to tackle this error?

    • @superlazycoder1984
      @superlazycoder1984 1 month ago

      Hi Robert, seems like an OS issue to me. I didn't face this before, but did you try this code in a Linux environment? github.com/milvus-io/milvus/issues/34854

    • @chomchom216
      @chomchom216 1 month ago

      @@superlazycoder1984 Thank you so much for your quick response. I already fixed the problem by running pip install langchain_community.

  • @NanoGi-lt5fc
    @NanoGi-lt5fc 1 month ago

    Thanks for uploading di

  • @ShirleyHoward-h4z
    @ShirleyHoward-h4z 1 month ago

    1:31 Cannot understand a word this woman says

    • @superlazycoder1984
      @superlazycoder1984 1 month ago

      Apologies, it could be the accent. Please try using subtitles; that might make it clearer. Will also try improving the audio.

  • @NanoGi-lt5fc
    @NanoGi-lt5fc 2 months ago

    Hi di, is there any way I can fine-tune an LLM text generation model and make its API using free resources like Colab?

    • @superlazycoder1984
      @superlazycoder1984 2 months ago

      So you can fine-tune a model using the Transformers Trainer module or AutoTrain for free if your dataset is small, as it uses the CPU for free. After that, to use the model you can create a Colab notebook where you load it with the Hugging Face transformers library again. However, if you want to create a REST API for it and host it, you need a web hosting service, which might not be free. But you can still create apps with that model and host them on Hugging Face Spaces.

    • @NanoGi-lt5fc
      @NanoGi-lt5fc 2 months ago

      @@superlazycoder1984 I want to fine-tune GPT-2 on my dataset, which is about 102 MB, and then host it on Hugging Face. Is it possible to do that? AutoTrain doesn't support text generation.

    • @superlazycoder1984
      @superlazycoder1984 2 months ago

      Hmm, I don't think GPT-2.0 is open source? Are you sure that's the model?

    • @superlazycoder1984
      @superlazycoder1984 2 months ago

      And Hugging Face does support text generation models; you can check the drop-down for options.

    • @NanoGi-lt5fc
      @NanoGi-lt5fc 2 months ago

      @@superlazycoder1984 Yup di, GPT-2 is open source. You are right that AutoTrain supports text generation models, but it didn't support auto-training for the text generation task.

  • @NanoGi-lt5fc
    @NanoGi-lt5fc 2 months ago

    Thanks for making the video!

  • @JustArtsCreations
    @JustArtsCreations 2 months ago

    I really hate calling out clickbait, but the video thumbnail and title imply this can be run 100% locally on 8 GB of RAM, which simply is not true for the 405B parameter model. 8 GB of RAM can't even run the 12B parameter model locally, not even close. Edit: as per Meta, to run the 405B locally at its LOWEST quantization you need at least 149 GB of RAM. Minimum. On 8 GB of RAM you can run the 8B parameter model locally (and not well).

    • @superlazycoder1984
      @superlazycoder1984 2 months ago

      Agreed. I just mentioned the config for anyone curious, and have removed the 8 GB config. However, everything here is still done locally with no installation.

    • @JustArtsCreations
      @JustArtsCreations 2 months ago

      @@superlazycoder1984 Yeah, I know, I watched the video, but I clicked on it expecting something vastly different. Thus the clickbait.

    • @superlazycoder1984
      @superlazycoder1984 2 months ago

      Sorry about that experience. Unfortunately I cannot change the thumbnail now, but will be careful in future.

    • @JustArtsCreations
      @JustArtsCreations 2 months ago

      @@superlazycoder1984 You can change the title and thumbnail anytime you want. I do it all the time on my channel, which I upload content to daily. In fact, most channels try multiple title and thumbnail combinations to get the best results and best clickthrough rate for their viewers... YouTube itself allows you to test 3 right from the start. Just a lil YouTube protip for you there ;) it might be in your channel name, but no need to be lazy all the time.

  • @berkpoyraz1668
    @berkpoyraz1668 2 months ago

    Hi, can I download and use the final model with RAG locally? I mean, is it possible?

    • @superlazycoder1984
      @superlazycoder1984 2 months ago

      Hi, yes it's possible. You can download the model locally with Ollama and then use it with the RAG pipeline.

  • @VanshMaurya-b3b
    @VanshMaurya-b3b 2 months ago

    RLHF when?

    • @superlazycoder1984
      @superlazycoder1984 2 months ago

      Hey, sorry, didn't understand the full question. Did you mean how to integrate human feedback with LLaMA-Factory?

    • @VanshMaurya-b3b
      @VanshMaurya-b3b 2 months ago

      @@superlazycoder1984 Yeah, I was hoping for a feedback-learning video too. Love your content btw.

  • @PtYt24
    @PtYt24 2 months ago

    Just an observation: you use "Guys" a lot.

    • @superlazycoder1984
      @superlazycoder1984 2 months ago

      @@PtYt24 I agree, I guess it's just a habit. But will try to make it better.

  • @NanoGi-lt5fc
    @NanoGi-lt5fc 2 months ago

    Does it support the Spring framework?

    • @superlazycoder1984
      @superlazycoder1984 2 months ago

      For now it doesn't support Spring. But because it supports Java, I believe it would still be possible to code in Spring.

    • @NanoGi-lt5fc
      @NanoGi-lt5fc 2 months ago

      @@superlazycoder1984 Okay di, happy Rakhi!

    • @superlazycoder1984
      @superlazycoder1984 2 months ago

      Thank you so much

  • @MonaLove143
    @MonaLove143 2 months ago

    Hi sis, what course do you recommend to study the basics of LLM models?

    • @superlazycoder1984
      @superlazycoder1984 2 months ago

      I would recommend following this track: github.com/mlabonne/llm-course

  • @CuriousBeingVP
    @CuriousBeingVP 2 months ago

    Thank you, it helped

  • @steveymcneckbeard
    @steveymcneckbeard 2 months ago

    Brilliant, thank you

  • @ArtifactWordz
    @ArtifactWordz 2 months ago

    go back to making curry

  • @CuriousBeingVP
    @CuriousBeingVP 2 months ago

    Super cool. Do you have any reference for how I can implement this using TypeScript/JavaScript?

    • @superlazycoder1984
      @superlazycoder1984 2 months ago

      Yes we created a video on that as well. Please refer ruclips.net/video/7eRKObFFe7A/видео.htmlsi=u2LzUk68uqrk0nO-

    • @CuriousBeingVP
      @CuriousBeingVP 2 months ago

      @@superlazycoder1984 Thank you

  • @NanoGi-lt5fc
    @NanoGi-lt5fc 2 months ago

    Wow di 😯😯 thanks for uploading

  • @annwang5530
    @annwang5530 2 months ago

    Hi girl, can you take on a task to autotrain a CSV file of 1,400 rows that I have? How much?

    • @superlazycoder1984
      @superlazycoder1984 2 months ago

      Hey, sorry, I won't be able to do it for you, but if you face an error please post it here and I can suggest fixes.

  • @ammartaj
    @ammartaj 2 months ago

    Well explained on such a freezing chilly day (temp displayed in the bottom-left: 1 to 0 degrees Celsius :) thanks!

  • @keva3563
    @keva3563 2 months ago

    Very helpful, thanks!

  • @harsh6980
    @harsh6980 3 months ago

    Great video, I watched the whole thing.