Is Falcon LLM the OpenAI Alternative? An Experimental Setup with LangChain

  • Published: 7 Jun 2023
  • 👉🏻 Kick-start your freelance career in data: www.datalumina.io/data-freela...
    The Technology Innovation Institute in Abu Dhabi has launched Falcon, a new, advanced line of language models, available under the Apache 2.0 license. The standout model, Falcon-40B, is the first open-source model to compete with existing closed-source models. This launch is great news for language model enthusiasts, industry experts, and businesses, as it opens up many new use cases. In this video, we compare the new Falcon-7B model against OpenAI's text-davinci-003 model to see whether open-source models can compete with paid ones.
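    For reference, the side-by-side setup in the video boils down to something like the sketch below, using the 2023-era LangChain wrappers (the exact code is in the GitHub repo linked under 🔗 Links; the prompt and parameters here are illustrative assumptions):

    ```python
    from langchain import HuggingFaceHub, LLMChain, OpenAI, PromptTemplate

    # Assumes HUGGINGFACEHUB_API_TOKEN and OPENAI_API_KEY are set in the environment.
    template = "Question: {question}\n\nAnswer:"
    prompt = PromptTemplate(template=template, input_variables=["question"])

    # Falcon-7B-Instruct served via the hosted Hugging Face Inference API.
    falcon = HuggingFaceHub(
        repo_id="tiiuae/falcon-7b-instruct",
        model_kwargs={"temperature": 0.1, "max_new_tokens": 200},
    )
    # OpenAI's paid text-davinci-003 model for comparison.
    davinci = OpenAI(model_name="text-davinci-003")

    question = "Which libraries can I use to work with large language models?"
    for llm in (falcon, davinci):
        print(LLMChain(prompt=prompt, llm=llm).run(question))
    ```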
    🔗 Links
    huggingface.co/blog/falcon
    github.com/daveebbelaar/langc...
    huggingface.co/tiiuae/falcon-...
    Introduction to LangChain
    • Build Your Own Auto-GP...
    Copy my VS Code Setup
    • How to Set up VS Code ...
    👋🏻 About Me
    Hey there, my name is @daveebbelaar and I work as a freelance data scientist and run a company called Datalumina. You've stumbled upon my YouTube channel, where I give away all my secrets when it comes to working with data. I'm not here to sell you any data course - everything you need is right here on YouTube. Making videos is my passion, and I've been doing it for 18 years.
    While I don't sell any data courses, I do offer a coaching program for data professionals looking to start their own freelance business. If that sounds like you, head over to www.datalumina.io/ to learn more about working with me and kick-starting your freelance career.
  • Science

Comments • 44

  • @daveebbelaar
    @daveebbelaar  A year ago +3

    👋🏻I'm launching a free community for those serious about learning Data & AI soon, and you can be the first to get updates on this by subscribing here: www.datalumina.io/newsletter

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w A year ago +10

    Can we try fine-tuning Falcon in a future video?

  • @oshodikolapo2159
    @oshodikolapo2159 A year ago +1

    Just what I was searching for. Thanks for this. Bravo!

  • @ingluissantana
    @ingluissantana A year ago

    As always GREAT video!!!! Thanks!!!!

  • @user-jt1zw1ux8z
    @user-jt1zw1ux8z 9 months ago

    Thanks man! Finally some code that worked for me 👍

  • @sree_9699
    @sree_9699 A year ago

    Interesting! I was exploring the same thing just an hour ago on HF and ran into this video as I opened YouTube. Good content.

  • @PeterDrewSEO
    @PeterDrewSEO A year ago

    Great video mate, thank you!

  • @VaibhavPatil-rx7pc
    @VaibhavPatil-rx7pc 10 months ago

    Excellent detailed information

  • @marcova10
    @marcova10 A year ago

    Thanks, Dave. After some trials, it seems this version of Falcon works for short questions. I am finding that in some cases the LLM spits out several repeated sentences, so the output may need some tweaking to clean it up. A great alternative for certain uses.

  • @Jake_McAllister
    @Jake_McAllister 10 months ago

    Hey Dave, love the video! How did you create your website? It looks amazing, bro 👌

  • @datanash8200
    @datanash8200 A year ago +1

    Perfect timing, need to implement some LLM for a work project 🙌

  • @aditunoe
    @aditunoe A year ago +1

    In the special_tokens_map.json file of the HF repo, there are some special tokens defined that differ a little from what OpenAI and others use. Integrating those into a prompt template of the chains seemed to improve the results for me (I also wrote an example in the HF comments). Three interesting ones in particular: >>QUESTION<<, >>SUMMARY<<, >>ANSWER<<
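    For illustration, a sketch of how those tokens could be woven into a LangChain prompt template (the token names come from Falcon's special_tokens_map.json; whether this formatting helps is the commenter's observation, not an official recipe):

    ```python
    from langchain import PromptTemplate

    # Falcon's tokenizer defines structural tokens such as >>QUESTION<< and >>ANSWER<<;
    # wrapping the prompt in them may nudge the model toward Q&A-style completions.
    template = ">>QUESTION<<{question}\n>>ANSWER<<"
    prompt = PromptTemplate(template=template, input_variables=["question"])
    print(prompt.format(question="What license is Falcon released under?"))
    ```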

  • @user-lf2tj6bz3m
    @user-lf2tj6bz3m 11 months ago

    Thanks for your video... do you have an example of how to implement this with Node.js?

  • @xXWillyxWonkaXx
    @xXWillyxWonkaXx A year ago +1

    Hey man, love your videos. Two questions:
    Q1. At 11:50, are you talking about embeddings?
    Q2. From your experience/observation of the LLMs on Hugging Face, can you take a model like MosaicML MPT-7B, throw QLoRA into the mix, and train it to be like GPT-4 or even slightly better in terms of understanding/alignment? Could using tree of thought mitigate or solve a small percentage of that?

    • @daveebbelaar
      @daveebbelaar  A year ago

      A1 - No, I don't use embeddings in this example. Just plain text sent to the APIs.
      A2 - Not sure about that.

  • @esakkisundar
    @esakkisundar 7 months ago

    How do you run the Falcon model locally? Does providing a key run the model on the Hugging Face servers?

  • @KatarzynaHewelt
    @KatarzynaHewelt A year ago +1

    Thanks Dave for another great video! Do you know if I can perhaps download Falcon locally and then use it privately - without the HF API?

    • @daveebbelaar
      @daveebbelaar  A year ago

      Thanks Katarzyna! I am not sure about that.

  • @Esehe
    @Esehe 10 months ago

    @17:30 interesting how my OpenAI output/summary is different from yours:
    " This article explains how to use Flowwise AI, an open source visual UI Builder, to quickly build
    large language models apps and conversational AI. It covers setting up Flowwise, connecting it to
    data, and building a conversational AI, as well as how to embed the agent in a Python file and run
    queries. It also shows how to use the agent to ask questions and get accurate results."
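    One common way to produce this kind of article summary with LangChain is a summarize chain. The sketch below is an assumption, not necessarily the exact setup from the video; sampling temperature alone can also explain differing outputs:

    ```python
    from langchain import OpenAI
    from langchain.chains.summarize import load_summarize_chain
    from langchain.docstore.document import Document
    from langchain.text_splitter import RecursiveCharacterTextSplitter

    article_text = "..."  # the article to summarize

    # Split the article into overlapping chunks (sizes are illustrative)
    # and summarize them with a map_reduce chain.
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
    docs = [Document(page_content=c) for c in splitter.split_text(article_text)]
    chain = load_summarize_chain(OpenAI(model_name="text-davinci-003"), chain_type="map_reduce")
    print(chain.run(docs))
    ```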

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w A year ago

    Do you have a video on pre-training an LLM?

  • @bentouss3445
    @bentouss3445 A year ago +2

    Really great video on this hot topic of open LLMs vs closed ones...
    It would be really interesting to see how to self-host an open LLM so as not to go through any external inference API.

    • @daveebbelaar
      @daveebbelaar  A year ago

      Thanks! Yes, that is very interesting indeed!

  • @felixbrand7971
    @felixbrand7971 10 months ago

    I'm sure this is a basic question, but where is the inference running here? Is it local, or is it on Hugging Face's resources?

    • @vuktodorovic4768
      @vuktodorovic4768 9 months ago

      That is what I wanted to ask. I loaded this model into the Google Colab free tier and it took 15 GB of RAM and 14 GB of GPU memory; I can't imagine what hardware you would need to run something like this locally. Also, I can't imagine that Hugging Face would give you their resources just like that. His setup seems very strange.

  • @luis96xd
    @luis96xd 11 months ago

    Great video! I have a question: what are the requirements to run Falcon-7B Instruct locally? Can I use a CPU?

    • @fullcrum2089
      @fullcrum2089 11 months ago +2

      15 GB of GPU memory

    • @luis96xd
      @luis96xd 11 months ago

      @@fullcrum2089 Thank you so much! That's a lot 😱

  • @shakeebanwar4403
    @shakeebanwar4403 7 months ago

    Can I run this 7B model without a GPU? My system RAM is 32 GB.
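    For those asking about running Falcon locally rather than through the hosted Inference API: a minimal sketch with Hugging Face transformers, assuming roughly 15 GB of GPU memory for the 7B model in bfloat16. CPU-only inference is possible with enough RAM, but it is very slow.

    ```python
    import torch
    from transformers import AutoTokenizer, pipeline

    model_id = "tiiuae/falcon-7b-instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    # Falcon shipped custom modeling code in 2023, hence trust_remote_code=True.
    pipe = pipeline(
        "text-generation",
        model=model_id,
        tokenizer=tokenizer,
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,
        device_map="auto",  # place the model on GPU(s) when available
    )
    print(pipe("What is LangChain?", max_new_tokens=100)[0]["generated_text"])
    ```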

  • @GyroO7
    @GyroO7 11 months ago

    I feel like using a chunk size of 1000 with an overlap of 200 would improve the results.
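    In LangChain, that chunking would look something like the sketch below (the splitter class is an assumption; the comment doesn't say which one the video uses):

    ```python
    from langchain.text_splitter import RecursiveCharacterTextSplitter

    long_document_text = "..."  # the document to split before sending chunks to the LLM
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
    chunks = splitter.split_text(long_document_text)
    ```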

  • @ko-Daegu
    @ko-Daegu A year ago

    How are you running a .py file as a Jupyter Notebook on the side like that, sending each block to the interactive window? This setup looks neat.

    • @daveebbelaar
      @daveebbelaar  A year ago

      Check out this video: ruclips.net/video/zulGMYg0v6U/видео.html

    • @ko-Daegu
      @ko-Daegu A year ago

      @@daveebbelaar Merci

  • @Eloii_Xia
    @Eloii_Xia A year ago

    Imagine combining it with Obsidian, Notion, or other similar software.

  • @fdarko1
    @fdarko1 11 months ago

    I am new to data science and want to know more about it to become a pro. Please mentor me.

    • @daveebbelaar
      @daveebbelaar  11 months ago

      Subscribe and check out the other videos on my channel ;)

  • @mayorc
    @mayorc A year ago

    But don't you get an amount of free tokens with OpenAI that recharges every month? So unless you go over that amount, you shouldn't get charged.

  • @pragyanbhatt6200
    @pragyanbhatt6200 A year ago +1

    Nice tutorial Dave, but isn't it unfair to compare two models with different parameter counts? Falcon-7B has 7 billion parameters, whereas text-davinci-003 has almost 175 billion.

    • @daveebbelaar
      @daveebbelaar  A year ago +1

      It's definitely unfair, but that's exactly why it's interesting to see how a much smaller, free-to-use model performs.

  • @deliciouspops
    @deliciouspops A year ago +1

    I like how degraded our society is.

  • @noelwos1071
    @noelwos1071 9 months ago

    UNFAIR ADVANTAGE. What do you think: as a European citizen, would you have grounds to sue Europe for hindering the progress offered by artificial intelligence and thereby causing enormous damage by leaving Europe lagging behind the rest of the world? Isn't the EU a responsible institution?
