Abonia Sojasingarayar
  • Videos: 30
  • Views: 28,499
CrewAI Ollama Agent - Build an AI Article Generator with CrewAI | Ollama | Streamlit - Complete Tutorial
Learn how to create an AI-driven article generator using CrewAI and Ollama in this step-by-step tutorial. This hands-on guide demonstrates how to integrate multiple AI agents for research and writing, all running locally with open-source models.
📌 What You'll Learn
0:00 Setting up CrewAI and Ollama
3:32 Configuring AI agents for research and content creation
8:11 Creating a Streamlit-based web application for generating articles
11:25 Demo - Generating Articles
15:33 Best practices for efficient Agent workflows
🔧 Tools Used
- CrewAI
- Ollama (with Mistral, openhermes model)
- Streamlit
📥 Resources
- Full source code: github.com/Abonia1/CrewAI-Article-Generator
- CrewAI Documentation: docs.crewai.com/int...
277 views
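The research-then-write flow the tutorial builds with CrewAI can be sketched framework-free. The functions below are hypothetical stand-ins for CrewAI's Agent/Task objects and for a local Ollama model, not the video's actual code:

```python
# Minimal, framework-free sketch of the research-then-write agent flow.
# `fake_llm` is a hypothetical placeholder for a call to a local Ollama model.

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call (e.g. via Ollama).
    return f"[model output for: {prompt[:40]}...]"

def researcher(topic: str) -> str:
    """Agent 1: gather key points on the topic."""
    return fake_llm(f"List the key facts about {topic}")

def writer(topic: str, notes: str) -> str:
    """Agent 2: turn the research notes into an article draft."""
    return fake_llm(f"Write an article on {topic} using these notes: {notes}")

def generate_article(topic: str) -> str:
    notes = researcher(topic)       # research task runs first
    return writer(topic, notes)     # writer consumes the researcher's output

print(generate_article("local LLM agents"))
```

In CrewAI itself these two functions become Agent objects with roles and goals, and a Crew runs their Tasks sequentially.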

Videos

Welcome and Join the Journey - AI, ML, Data Science, Generative AI, Computer Vision | Channel Trailer
107 views • 1 month ago
🌟 Welcome to our Channel 🎥 In this trailer, I'll give you a glimpse of what this channel is all about. Thank you for being here. I can't wait to share this journey with you! 🔔 Get our Newsletter and Featured Articles: abonia1.github.io/newsletter/ 🔗 Linkedin: www.linkedin.com/in/aboniasojasingarayar/ 🔗 Find me on Github: github.com/Abonia1 🔗 Medium Articles: medium.com/@abonia
Easily Run Hugging Face GGUF Models Locally with Ollama #LLM #HuggingFace #GGUFModels #Ollama #asitop
184 views • 1 month ago
In this video, we'll dive into the latest and most efficient method for running Hugging Face GGUF models with Ollama. We'll download the MistralLite model from the Hugging Face hub, run it locally, and monitor resource and memory usage using asitop. The tutorial covers: 1. What GGUF models are and their use cases. 2. Why use Ollama for local inference. 3. Step-by-step setup and execution. 4. Re...
Part 2 | MLOps On GitHub | Deploy and Automate ML Workflow | Using GitHub Actions and CML for CI & CD
315 views • 2 months ago
Comprehensive tutorial on using GitHub Actions and Continuous Machine Learning (CML) to automate machine learning workflows! In this video, we’ll walk through the complete process of setting up a CI/CD pipeline for a machine learning project. By the end, you’ll be able to create and deploy automated workflows, monitor model performance, and collaborate seamlessly with your team! ⭐️ Contents ⭐️ ...
Part 1 | MLOps On GitHub | Deploy and Automate ML Workflow | Using GitHub Actions and CML for CI & CD
386 views • 2 months ago
Comprehensive tutorial on using GitHub Actions and Continuous Machine Learning (CML) to automate machine learning workflows! In this video, we’ll walk through the complete process of setting up a CI/CD pipeline for a machine learning project focused on churn prediction. By the end, you’ll be able to create and deploy automated workflows, monitor model performance, and collaborate seamlessly wit...
Clustering using Embedding - KMeans - PCA - Visualization
164 views • 3 months ago
Clustering using Embedding - A Hands-on Guide to Web Scraping, Text Embedding, and KMeans Clustering with Python. This tutorial demonstrates clustering of random Wikipedia articles using Hugging Face embeddings and K-means clustering, and showcases the entire pipeline from data collection to visualization of clusters. ⭐️ Methodology and Contents ⭐️ 0:00 Introduction 04:27 Fetch random Wikipedia articles...
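The K-means step at the heart of this pipeline can be illustrated with a tiny pure-Python implementation on 2-D toy points standing in for text embeddings (the video itself uses Hugging Face embeddings):

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny K-means on 2-D points (stand-ins for text embeddings)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # Update step: move each non-empty centroid to its cluster mean.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(x for x, _ in cl) / len(cl),
                                sum(y for _, y in cl) / len(cl))
    return centroids, clusters

# Two well-separated toy groups, mimicking embeddings of two topics.
points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centroids, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```

In the real pipeline the 2-D tuples become high-dimensional embedding vectors, and PCA projects them back to 2-D for the visualization step.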
Running a Streamlit App from Google Colab - Serve an LLM app in Colab
598 views • 3 months ago
How to build and deploy a powerful sentiment analysis web app using Streamlit and DistilBERT, a state-of-the-art transformer model fine-tuned for sentiment classification. We'll walk through setting up the model, building the web interface, and deploying it with localtunnel in Google Colab. Whether you're analyzing customer reviews or social media posts, this tutorial will help you create a fas...
Table Extraction from PDF using Camelot - Tabula - PDFPlumber #PDFTableExtraction #Hands-On
387 views • 3 months ago
Python provides powerful libraries that allow smart table extraction from PDFs, offering flexibility, automation, and handling of various PDF formats. We will explore the following Python libraries that were specifically developed for easier table extraction: 1. Camelot 2. Tabula 3. Pdfplumber ⭐️ Contents ⭐️ 0:00 Introduction 03:22 Setup and Installation 04:41 Camelot 8:30 Tabula 10:38 PDFplumb...
📚 Book Review - Mastering NLP from Foundations to LLMs
220 views • 4 months ago
Book Review - Mastering NLP from Foundations to LLMs 🔔 Newsletter and Featured Articles: abonia1.github.io/newsletter/ 🔗 Linkedin: / aboniasojasingarayar 🔗 Find me on Github: github.com/Abonia1 🔗 Medium Articles: / abonia 🔗 Substack AI Magazine: aboniasojasingarayar.substack.com/
Running Ollama in Colab (Free Tier) - Step by Step Tutorial
2.7K views • 4 months ago
Ollama empowers you to leverage powerful large language models (LLMs) like Llama 2, Llama 3, Phi-3, etc. without needing a powerful local machine. Google Colab's free tier provides a cloud environment perfectly suited for running these resource-intensive models. This tutorial details setting up and running Ollama on the free version of Google Colab, allowing you to explore the capabilities of LLMs wi...
PandasAI and Ollama running locally
587 views • 5 months ago
PandasAI represents a major advancement in data analysis, effectively bridging the gap between traditional coding methods and intuitive natural language interactions. By automating repetitive tasks, generating accurate insights, and seamlessly transforming data, it enables users to extract maximum value from their data without needing extensive coding. ⭐️ Contents ⭐️ 00:00 Introduction to Panda...
SAM 2 Segment Anything - Image and Video Segmentation #computervision #objectsegmentation #sam #meta
589 views • 5 months ago
SAM 2 is an advanced model for comprehensive object segmentation in both images and videos. It features a unified, promptable model architecture that excels at processing complex visual data in real time and supports zero-shot generalization. ✨ Key Features ✨ ✅ Unified Model Architecture ✅ Real-Time Performance ✅ Zero-Shot Generalization ✅ Interactive Refinement ✅ Advanced Visual Handling 🌟 Content 🌟 00:00...
📚 Book Review - Transformers for Natural Language Processing and Computer Vision - 3rd Edition
313 views • 6 months ago
Book Review: Transformers for Natural Language Processing and Computer Vision - Third Edition 🔔 Newsletter and Featured Articles: abonia1.github.io/newsletter/ 🔗 Linkedin: www.linkedin.com/in/aboniasojasingarayar/ 🔗 Find me on Github : github.com/Abonia1 🔗 Medium Articles: medium.com/@abonia 🔗 Substack AI Magazine: aboniasojasingarayar.substack.com/
Fine-Tuning YOLOv10 for Object Detection on a Custom Dataset #yolo #finetuning
1.9K views • 6 months ago
YOLOv10 is a new generation in the YOLO series for real-time end-to-end object detection. It aims to improve both the performance and efficiency of YOLO models by eliminating the need for non-maximum suppression (NMS) and comprehensively optimizing the model architecture. In this tutorial, we will explore its architecture and how to fine-tune it to detect cancer cells for cancer diagnosis. ⭐️ C...
Building and Testing a Multi-Modal Retriever - Hands-On #llamaindex #CLIPembeddings #Image-TextIndex
160 views • 7 months ago
Building and Testing a Multi-Modal Retriever - Hands-On #llamaindex #CLIPembeddings #Image-TextIndex
Anylabeling - Image Annotation Tool - Object Detection and Instance Segmentation #ComputerVision #YOLO
568 views • 8 months ago
Anylabeling - Image Annotation Tool - Object Detection and Instance Segmentation #ComputerVision #YOLO
Top LLM and Deep Learning Inference Engines - Curated List
250 views • 8 months ago
Top LLM and Deep Learning Inference Engines - Curated List
Summarization with LangChain using LLM - Stuff - Map_reduce - Refine
999 views • 9 months ago
Summarization with LangChain using LLM - Stuff - Map_reduce - Refine
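The three chain types in this title differ only in how chunk summaries are combined. A framework-free sketch of that difference, with `summarize` as a hypothetical stand-in for one LLM call (not LangChain's actual API):

```python
# Framework-free sketch of the three LangChain summarization strategies.

def summarize(text: str) -> str:
    # Hypothetical stand-in for a single LLM summarization call.
    return f"<summary of {len(text)} chars>"

def stuff(docs):
    # Stuff: concatenate everything into one prompt; a single LLM call.
    return summarize(" ".join(docs))

def map_reduce(docs):
    # Map: summarize each chunk independently; Reduce: summarize the summaries.
    partials = [summarize(d) for d in docs]
    return summarize(" ".join(partials))

def refine(docs):
    # Refine: fold each new chunk into the running summary, one call per chunk.
    summary = summarize(docs[0])
    for d in docs[1:]:
        summary = summarize(summary + " " + d)
    return summary

docs = ["chunk one " * 50, "chunk two " * 50, "chunk three " * 50]
print(stuff(docs), map_reduce(docs), refine(docs), sep="\n")
```

Stuff is cheapest but limited by the context window; map_reduce parallelizes over chunks; refine preserves cross-chunk context at the cost of sequential calls.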
Deploying a Retrieval-Augmented Generation (RAG) in AWS Lambda
3.1K views • 10 months ago
Deploying a Retrieval-Augmented Generation (RAG) in AWS Lambda
Build and Deploy LLM Application in AWS Lambda - BedRock - LangChain
9K views • 10 months ago
Build and Deploy LLM Application in AWS Lambda - BedRock - LangChain
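The request/response shape of such a Lambda function can be sketched as below; the event fields and the `invoke_model` helper are illustrative assumptions, with the real version calling Bedrock through boto3:

```python
import json

# Hypothetical shape of a Lambda handler fronting an LLM: field names and
# the invoke_model helper are assumptions for illustration, not the video's code.

def invoke_model(prompt: str) -> str:
    # Stand-in for bedrock_runtime.invoke_model(...) via boto3.
    return f"echo: {prompt}"

def lambda_handler(event, context):
    prompt = json.loads(event["body"])["prompt"]
    answer = invoke_model(prompt)
    return {"statusCode": 200, "body": json.dumps({"answer": answer})}

resp = lambda_handler({"body": json.dumps({"prompt": "hi"})}, None)
print(resp["statusCode"])  # 200
```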
Run Ollama with Langchain Locally - Local LLM
2.1K views • 11 months ago
Run Ollama with Langchain Locally - Local LLM
LLMLingua - Prompt Compression for LLM Use Cases 🔥
362 views • 11 months ago
LLMLingua - Prompt Compression for LLM Use Cases 🔥
What is RAG (Retrieval-Augmented Generation)?
262 views • 1 year ago
What is RAG (Retrieval-Augmented Generation)?
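The retrieve-then-generate loop behind RAG, as a toy sketch: retrieval here is naive word-overlap scoring and `template_answer` is a hypothetical stand-in for the LLM; a real system would use vector embeddings and a model call.

```python
# Toy illustration of the retrieve-then-generate loop behind RAG.

CORPUS = [
    "RAG combines retrieval with generation.",
    "Ollama runs large language models locally.",
    "Streamlit builds data web apps in Python.",
]

def retrieve(query: str, k: int = 1):
    # Naive retrieval: rank documents by word overlap with the query.
    q = set(query.lower().split())
    scored = sorted(CORPUS, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def template_answer(query: str, context: list) -> str:
    # Stand-in for the generation step: the model answers grounded in context.
    return f"Answer to {query!r} using context: {context[0]}"

query = "what is retrieval augmented generation"
print(template_answer(query, retrieve(query)))
```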
BERTScore Explained in 5 minutes
2K views • 1 year ago
BERTScore Explained in 5 minutes
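The idea behind BERTScore is greedy cosine-similarity matching between candidate and reference token embeddings. A toy re-implementation with hand-made 2-D vectors (an illustration of the algorithm only, not the bert-score library's API):

```python
import math

# Toy re-implementation of BERTScore's greedy matching, using hand-made
# token vectors instead of real BERT embeddings.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def bertscore(cand, ref):
    """cand/ref: lists of token embedding vectors."""
    # Recall: each reference token greedily matches its best candidate token.
    recall = sum(max(cosine(r, c) for c in cand) for r in ref) / len(ref)
    # Precision: each candidate token matches its best reference token.
    precision = sum(max(cosine(c, r) for r in ref) for c in cand) / len(cand)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

cand = [(1.0, 0.0), (0.8, 0.6)]   # two candidate "token embeddings"
ref = [(1.0, 0.0), (0.0, 1.0)]    # two reference "token embeddings"
p, r, f1 = bertscore(cand, ref)
print(round(f1, 3))  # 0.847
```

The real metric additionally applies optional IDF weighting and baseline rescaling on top of this matching scheme.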
Must read LLM and AI Research Papers of 2023 🔥
208 views • 1 year ago
Must read LLM and AI Research Papers of 2023 🔥

Comments

  • @aakash321
    @aakash321 23 hours ago

    Thank you for the reference and the candidate example picture at 2:38, very well explained!

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 19 hours ago

      Glad it helped. It's the official architecture from the BERTScore paper.

  • @Giorgio_Caniglia
    @Giorgio_Caniglia 4 days ago

    Doesn't work, Google gives me an error

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 3 days ago

      @@Giorgio_Caniglia Can you post the error screen here please? Cheers

    • @Giorgio_Caniglia
      @Giorgio_Caniglia 3 days ago

      @AboniaSojasingarayar So kind, thank you! Don't worry, I solved it by opening the Colab in Chrome. Thank you!

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 3 days ago

      Great 👍

  • @chaithanyavamshi2898
    @chaithanyavamshi2898 13 days ago

    Great video! Can you please share the GitHub link for the code?

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 12 days ago

      Thanks for your kind words and for letting me know. Here is the link: github.com/Abonia1/CrewAI-Article-Generator/tree/main

  • @artificialnudge6610
    @artificialnudge6610 13 days ago

    Timestamps 00:03 - Build a local AI article generator with CrewAI and Ollama. 02:16 - Setup and configure AI article generation agents with necessary libraries. 04:25 - Creating AI agents for research and writing tasks. 06:32 - Build an AI article generator using CrewAI and Streamlit. 08:37 - Building an AI-powered article generator using CrewAI and Ollama. 10:37 - Building a web app to generate articles using CrewAI and Streamlit. 12:46 - Article generation with download feature through AI agents. 14:47 - Building a user-friendly AI article generator using CrewAI and Streamlit.

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 12 days ago

      Wow :) Thank you so much for this detailed timeline. Thanks for your support.

  • @HelloIamLauraa
    @HelloIamLauraa 21 days ago

    Thank you, I really liked it, very well explained

  • @zameerahmed1775
    @zameerahmed1775 1 month ago

    ur a bit fast .......... pls slow down ur pace.

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 1 month ago

      Hi Zameer Ahmed, ah, sorry about that. Sure, I will keep this in mind. Thanks for your kind feedback, and wishing you a happy new year in advance 🎉🙂

  • @leelasankar1
    @leelasankar1 1 month ago

    Thank you very much, such a great video

  • @יהונתןבוגנים-מ6כ
    @יהונתןבוגנים-מ6כ 1 month ago

    Question about the Lambda function: if at some point I decide to change the RAG mechanism and change my code, all after having already pushed a Docker image once and created a Lambda function, what steps should I take? Does every change in my code require me to redeploy? Retag an image, upload it to ECR, push it, create a new Lambda function? I would appreciate help.

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 1 month ago

      Hello, yes, every code change requires rebuilding the Docker image, pushing it to ECR, and then explicitly updating the Lambda function configuration to use the new image URI. This ensures you're deploying a complete, tested version each time. Use version tags on your ECR images (e.g. v1.0) for better tracking.

  • @namankumarmuktha4507
    @namankumarmuktha4507 1 month ago

    Life Saver..

  • @sanreetkaurmann4796
    @sanreetkaurmann4796 1 month ago

    Oh my god, may god bless you forever and ever, you literally have no idea how this video helped... Thank you so so very muchhhhh!!!✨✨✨✨✨✨ Hope you have an amazinggg dayyy aheadd✨✨😇😇❤❤

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 1 month ago

      Hi Sanreet, Happy to help and I'm glad it helped. Thank you so much for your kind words 🙂

  • @apocart8426
    @apocart8426 1 month ago

    Hi, thank you very much for this insightful video. I was also wondering if we could run image generation models such as Flux on Google Colab as well.

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 1 month ago

      Hi, yes, absolutely we can. Sample notebook: colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/Flux/Run_Flux_on_an_8GB_machine.ipynb Hope this helps.

    • @apocart8426
      @apocart8426 1 month ago

      @@AboniaSojasingarayar You're amazing, thank you

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 1 month ago

      Happy to help.

    • @apocart8426
      @apocart8426 1 month ago

      @@AboniaSojasingarayar Hey, so I tried this out; one of the issues I'm having is that it exhausts all of Colab's free resources. So if you know any good model that we can use on Colab, and how to use it, then please do share.

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 1 month ago

      You may try using a quantized model and a GPU runtime. To install such a model, I recently published a tutorial on how to pull a GGUF quantized model into Ollama: ruclips.net/video/8MjS0aOV8tE/видео.htmlsi=J5fCzL6lw_zRGJ2p - Additionally, try using subprocess and threading for better performance. Sample code: def run_ollama(): subprocess.Popen(["ollama", "serve"]) ollama_thread = threading.Thread(target=run_ollama) ollama_thread.start() Hope this helps.

  • @zerofive3699
    @zerofive3699 2 months ago

    👍

  • @marvinacklin792
    @marvinacklin792 2 months ago

    What language is this?

  • @Bumbblyfestyle
    @Bumbblyfestyle 3 months ago

    Very useful

  • @johannes7856
    @johannes7856 3 months ago

    Nice Tutorial, thanks. 😊

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 3 months ago

      Thank you so much! 😊 I’m glad you found the tutorial helpful!

    • @johannes7856
      @johannes7856 3 months ago

      @@AboniaSojasingarayar Do you know if there is a tool that can convert the annotated JSON from the Anylabeling tool to the YOLO format?

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 3 months ago

      @johannes7856 Hi Johannes, you may try the following library: github.com/rooneysh/Labelme2YOLO If not, you can convert any labeling JSON to COCO JSON and then convert that to YOLO using the above library. Hope this helps.

  • @DevSingh-v2h
    @DevSingh-v2h 3 months ago

    Can you please share the Colab notebook?

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 3 months ago

      Sure, here it is: gist.github.com/Abonia1/fc442374e1c20c86db8effbf95d93eb6

  • @khlifimohamedrayen1303
    @khlifimohamedrayen1303 3 months ago

    Thank you very much for this tutorial! I was having many problems running the Ollama server on Colab without colabxterm... You're such a life saver!

  • @Bumbblyfestyle
    @Bumbblyfestyle 3 months ago

    Good info 😊

  • @mohamadadhikasuryahaidar7652
    @mohamadadhikasuryahaidar7652 3 months ago

    thanks for the tutorial

  • @anandrajgt3602
    @anandrajgt3602 3 months ago

    Please post a video regarding GitHub Actions

  • @Nabeel27
    @Nabeel27 3 months ago

    I get the error: Runtime.ImportModuleError: Unable to import module 'lambda_function': Error importing numpy: you should not try to import numpy from its source directory; please exit the numpy source tree, and relaunch your python interpreter from there. I followed all steps as in your video.

    • @Nabeel27
      @Nabeel27 3 months ago

      Looks like I had to set up the Lambda as arm64 and the layer (created with Docker on a Mac) also as arm64. Next, it also requires Bedrock setup and an access request to use a Llama model. Llama 2 is no longer available; you have to request Llama 3 8B or something else.

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 3 months ago

      Hello Nabeel, Are you still facing the above issue?

    • @Nabeel27
      @Nabeel27 3 months ago

      @@AboniaSojasingarayar Thank you so much for following up! the error I am getting now is this: "errorMessage": "Error raised by bedrock service: An error occurred (AccessDeniedException) when calling the InvokeModel operation: User: arn:aws:sts::701934491353:assumed-role/test_demo-role-sfu6wu6d/test_demo is not authorized to perform: bedrock:InvokeModel on resource: arn:aws:bedrock:us-east-1::foundation-model/meta.llama3-8b-instruct-v1:0 because no identity-based policy allows the bedrock:InvokeModel action",

    • @Nabeel27
      @Nabeel27 3 months ago

      @@AboniaSojasingarayar I was able to solve it. I got the permission to use llama3 and also had to update role permissions to use Bedrock.

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 3 months ago

      @@Nabeel27 Great 🎉

  • @Bumbblyfestyle
    @Bumbblyfestyle 3 months ago

  • @zerofive3699
    @zerofive3699 3 months ago

    Awesome, Abo, keep up the good work

  • @enia123
    @enia123 4 months ago

    Thank you! I was studying something related, but my computer's performance was very poor due to lack of money. I had a problem with Ollama not working in Colab, but it was resolved! Thank you. I would like to test a model created in Colab. Is there a way to temporarily run it as a web service?

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 4 months ago

      Most welcome, and glad to hear that it finally worked. 1. We can use the Flask API and the ColabCode package to serve your model via an endpoint on a temporary ngrok URL: github.com/abhishekkrthakur/colabcode 2. Another way is using flask and flask-ngrok: pypi.org/project/flask-ngrok/ pypi.org/project/Flask-API/ Sample code for reference: from flask import Flask from flask_ngrok import run_with_ngrok app = Flask(__name__) run_with_ngrok(app) @app.route("/") def home(): return "Hello World" app.run() If needed, I'll try to do a tutorial on this topic in the future. Hope this helps :)

    • @enia123
      @enia123 4 months ago

      @@AboniaSojasingarayar thank you Have a nice day~

  • @tapiaomars
    @tapiaomars 4 months ago

    Hi, is it possible to integrate DynamoDB to store and retrieve the context of the last user prompts in a Lambda function?

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 4 months ago

      Hello, yes: DynamoDB, S3, or in-memory storage, depending on requirements. Each piece of context is associated with a user ID, ensuring that contexts are isolated per user with a conversation ID. Hope this helps.

    • @tapiaomars
      @tapiaomars 4 months ago

      @@AboniaSojasingarayar Thanks, I'll try it and let you know how it goes.

  • @ziaullah2115
    @ziaullah2115 4 months ago

    Please create a video on breast cancer detection with the YOLOv10 model

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 3 months ago

      Absolutely, I’ll work on getting it ready shortly. If there are specific areas you want me to concentrate on, just let me know! Also, do you have any custom dataset you'd like to use for this tutorial? Thanks

  • @iroudayaradjcalingarayar317
    @iroudayaradjcalingarayar317 4 months ago

    Super

  • @VenkatesanVenkat-fd4hg
    @VenkatesanVenkat-fd4hg 4 months ago

    Great discussion....

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 4 months ago

      Thank you Venkatesan. I'm glad you enjoyed the discussion.

  • @mayshowgunmore5269
    @mayshowgunmore5269 4 months ago

    Hi, I'm trying to run these processes, but in this video at 12:36, how do you create and execute the file named ".env"? It always shows an error; I can't figure it out. Thanks!

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 4 months ago

      Hello, you can use local VS Code or any IDE to create the .env file: New file -> name it .env and add your API key as follows: ROBOFLOW_API_KEY=your_api_key Once done, drag and drop it into Colab. Hope this helps.

  • @VenkatesanVenkat-fd4hg
    @VenkatesanVenkat-fd4hg 5 months ago

    Great share, insightful share as always...Are u using obs studio for recording....by Senior Data Scientist....

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 5 months ago

      Glad it helped. Not really! Just using the built-in recording and iMovie to edit it.

  • @alvaroaraujo7945
    @alvaroaraujo7945 5 months ago

    Hey, Abonia, thanks for the amazing content. I just had one issue though: on executing the 'map_reduce_outputs' function, I had ConnectionRefusedError: [Errno 61]. Hope someone knows what it is.

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 5 months ago

      @@alvaroaraujo7945 Hello, thanks for your kind words. It may be related to your ollama serve. Are you sure Ollama is running?

  • @machinelearningzone.6230
    @machinelearningzone.6230 5 months ago

    Nice explanation and walkthrough. Could you provide the link to the code repo for this exercise?

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 5 months ago

      Glad it helped. As mentioned in the description, you can find the code and explanation in this article walkthrough. medium.com/@abonia/deploying-a-rag-application-in-aws-lambda-using-docker-and-ecr-08e246a7c515

  • @zerofive3699
    @zerofive3699 5 months ago

    It is very helpful mam , it is useful on impliying

  • @zerofive3699
    @zerofive3699 5 months ago

    Nice video mam

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 5 months ago

    Can we simply rely on open source only without using Amazon? What if it is just prototyping?

  • @World-um5vo
    @World-um5vo 5 months ago

    Hi, thank you for the video. If we want to fine-tune the model and evaluate it on videos, how do we do that?

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 5 months ago

      You're most welcome. Here I have introduced the basic usage of SAM 2 models. If you want to evaluate your fine-tuned model, you may try the mean IoU score for a set of predictions and targets, or DICE, precision, recall, and mAP.

  • @Basant5911
    @Basant5911 6 months ago

    Streaming doesn't work this way; I wrote the code from scratch without LangChain.

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 6 months ago

      @@Basant5911 Can you share your code base and the error or issue you are currently facing, please?

  • @DenisRothman
    @DenisRothman 6 months ago

    ❤Thank you for this fantastic educational video on my book!!! 🎉

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 6 months ago

      @@DenisRothman Thank you for your kind words. I'm grateful for the opportunity to review the book and share my thoughts. Your work is well-deserved and truly one of the most insightful books I've read.

  • @MohamedMohamed-xf7wh
    @MohamedMohamed-xf7wh 6 months ago

    You used a webpage as a data source for the RAG app. What if I add a PDF file instead of the webpage as a data source, how can I deploy it in AWS Lambda?

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 6 months ago

      To build RAG with PDFs in the AWS ecosystem, you need to follow steps that involve uploading the PDF to an S3 bucket, extracting text from the PDF, and then integrating this data with your RAG application.

    • @MohamedMohamed-xf7wh
      @MohamedMohamed-xf7wh 6 months ago

      @@AboniaSojasingarayar Can I locally extract text from the PDF and build the vector DB locally using VS Code, and then build the Docker image and push it to AWS ECR like you did in the video?

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 6 months ago

      @@MohamedMohamed-xf7wh Yes, you can locally extract text from PDF files, build a vector database, and then prepare your application for deployment on AWS Lambda by building a Docker image and pushing it to ECR. But which vector DB are you using? Is it accessible via an API?

    • @MohamedMohamed-xf7wh
      @MohamedMohamed-xf7wh 6 months ago

      @@AboniaSojasingarayar FAISS... what is the problem with the vector DB?

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 6 months ago

      @@MohamedMohamed-xf7wh Great!

  • @htayaung3812
    @htayaung3812 6 months ago

    Really Nice! Keep going. You deserve more subscribers.

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 6 months ago

      @@htayaung3812 Thank you so much for your support! I'm working to bring more tutorials.

  • @raulpradodantas9386
    @raulpradodantas9386 6 months ago

    Saved my life creating Lambda layers... I have been trying for days. Thanks!

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 6 months ago

      @@raulpradodantas9386 Glad to hear that! You're most welcome.

  • @SidSid-kp4ij
    @SidSid-kp4ij 7 months ago

    Hi, I'm trying to run my trained model with an interface to a webcam but getting an error. Can you share any insight on it?

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 6 months ago

      @@SidSid-kp4ij Hello Sid, Sure can you post your error message here please?

  • @gk4457
    @gk4457 7 months ago

    All the best

  • @RajuSubramaniam-ho6kd
    @RajuSubramaniam-ho6kd 8 months ago

    Thanks for the video. Very useful for me, as I am new to AWS Lambda and Bedrock. Can you please upload the Lambda function source code? Thanks again!

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 8 months ago

      Glad it helped. Sure, you can find the code and the complete article on this topic in the description. In any case, here is the link to the code: medium.com/@abonia/build-and-deploy-llm-application-in-aws-cca46c662749

  • @jannatbellouchi3908
    @jannatbellouchi3908 8 months ago

    Which version of BERT is used in BERTScore?

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 8 months ago

      As we are using lang="en", it uses roberta-large. We can also customize this via the model_type param of the BERTScorer class. For the default models for other languages, see: github.com/Tiiiger/bert_score/blob/master/bert_score/utils.py

  • @jagadeeshprasad5252
    @jagadeeshprasad5252 8 months ago

    Hey, great content. Please continue to do more videos and real-time projects. Thanks.

  • @zerofive3699
    @zerofive3699 8 months ago

    Awesome mam , very easy to understand

  • @NJ-hn8yu
    @NJ-hn8yu 8 months ago

    Hi Abonia, thanks for sharing. I am facing this error; can you please tell me how to resolve it: "errorMessage": "Unable to import module 'lambda_function': No module named 'langchain_community'"

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 8 months ago

      Hello, you are most welcome. You must prepare your ZIP file with all the necessary packages. You can refer to the instructions starting at 09:04.

  • @humayounkhan7946
    @humayounkhan7946 8 months ago

    Hi Abonia, thanks for the thorough guide, but I'm a bit confused by the lambda_layer.zip file. Why did you have to create it through Docker? Is there an easier way to provide the dependencies in a zip file without going through Docker? Thanks in advance!

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 8 months ago

      Hi Humayoun Khan, Yes we can but Docker facilitates the inclusion of the runtime interface client for Python, making the image compatible with AWS Lambda. Also it ensures a consistent and reproducible environment for Lambda function's dependencies. This is crucial for avoiding discrepancies between development, testing, and production environments. Hope this helps.

  • @evellynnicolemachadorosa2666
    @evellynnicolemachadorosa2666 8 months ago

    Hello! Thanks for the video. I am from Brazil. What would you recommend for large documents, averaging 150 pages? I tried map-reduce, but the inference time was 40 minutes. Are there any tips for these very long documents?

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 8 months ago

      Thanks for your kind words, and glad this helped. Implement a strategy that combines semantic chunking with K-means clustering to address the model's contextual limitations. By employing efficient clustering techniques, we can extract key passages effectively, thereby reducing the overhead associated with processing large volumes of text. This approach not only significantly lowers costs by minimizing the number of tokens processed but also mitigates the recency and primacy effects inherent in LLMs, ensuring balanced consideration of all text segments.

    • @VirtualMachine-d8x
      @VirtualMachine-d8x 4 months ago

      @@AboniaSojasingarayar The video was great and very useful. Can you make a small video on this clustering method using embeddings?

    • @AboniaSojasingarayar
      @AboniaSojasingarayar 4 months ago

      @@VirtualMachine-d8x Sure, will do. Happy to hear from you again, and thanks for the feedback.