Lucidate
  • Videos: 188
  • Views: 443,710
When not to use GenAI! Navigating the AI Roadmap: Beyond the Hype.
🔍 When not to use GenAI: The ultimate guide to navigating the AI hype! 🚀
Is GenAI just hype, or a game-changer? In this eye-opening 13-minute video, discover when to use GenAI and when other AI tools are the right choice for your business. Don't waste resources on misapplied tech - learn to match the right tool to each job!
🕒 Timestamps:
0:00 Introduction
1:15 The Gartner Hype Cycle and AI
2:30 12 Common AI Use Cases
5:45 6 Key AI Technologies
8:30 Matching Tech to Use Cases
11:00 When to Use (and Not Use) GenAI
12:30 Conclusion
With credit to, and based on: pub.towardsai.net/do-not-use-llm-or-generative-ai-for-these-use-cases-a819ae2d9779
Dive into:
The truth behind the GenAI hype
12 real-world AI ap...
Views: 490

Videos

AI Email Revolution: Reclaim 28% of Your Workday in 3 Steps
377 views • 1 month ago
Discover how AI is revolutionizing email management and reclaiming 28% of your workday. In this video, we explore a groundbreaking AI system that classifies, prioritizes, and drafts responses to your emails. Learn how LangChain and CrewAI frameworks are making this accessible to everyone, not just coders. We break down the components of this system - agents, tools, tasks, and graphs - and show ...
From CrowdStrike to Black Swans: Mastering Financial Crises with AI
181 views • 2 months ago
🚨 Financial Crisis Management Revolutionized by AI 🚨 In today's volatile financial markets, being prepared for the unexpected is crucial. Discover how cutting-edge AI technology is transforming crisis response in banking and financial institutions. 🔑 Key Topics: - AI in Finance: The Future of Risk Management - Rapid Information Access: 10X Faster Crisis Response - Collaborative AI Tools for Fin...
Advanced NLP for LLMs: LoRA & Hugging Face Pipelines for Next-Level BERT Fine-Tuning
450 views • 3 months ago
Welcome to our deep dive into the revolutionary world of fine-tuning large language models with Hugging Face Pipelines and Trainer APIs, presented by Richard Walker from Lucidate. This detailed session explores cutting-edge techniques like LoRA (Low Rank Adaptation) to enhance NLP performance using BERT and DistilBERT models. What you'll gain from this video: - A comprehensive guide to utilizin...
Mastering LLM Fine-Tuning: Boost Performance with Hugging Face & LoRA
584 views • 3 months ago
Link to GitHub repo: github.com/mrspiggot/Lucidate-FineTune Join us in this comprehensive tutorial as we explore the world of fine-tuning Large Language Models (LLMs) for sentiment analysis using the powerful Hugging Face ecosystem. Discover how to build cutting-edge LLM applications in 2024 and beyond! We begin with an introduction to LLMs and their diverse applications, highlighting the impor...
Unleashing the Power of Machine Learning: How Big Tech Solves Real-World Problems
261 views • 3 months ago
How Apple, Amazon, Facebook and Google use Machine Learning in 2024 See how 4 technology companies use 10 different machine learning algorithms in 2024. And how you can apply these algorithms to your own business Discover how tech giants like Amazon, Google, Facebook, and Apple are revolutionizing their businesses with machine learning. In this video, we explore the innovative ways these compan...
10 Essential Machine Learning algorithms for 2024 in 17 minutes - A Visualisation & Intuition
597 views • 3 months ago
A visualisation and intuition for the 10 most important Machine Learning Algorithms in less than 17 minutes. Links to related playlists: Machine Learning ruclips.net/p/PLaJCKi8Nk1hwklH8zGMpAbATwfZ4b2pgD Ensembles ruclips.net/p/PLaJCKi8Nk1hxRtzY-8M2r3nDnRcyXU99Z Machine Learning Shorts ruclips.net/p/PLaJCKi8Nk1hzrIQ17UknBGl5NlHc15c6F Chapters: Support Vector Machine (SVM) 1:02 Linear Regress...
Mastering LoRA: Efficient Fine Tuning for Large Language Models - LLMs | PEFT Guide
469 views • 4 months ago
LoRA - Low Rank Adaptation and PEFT - Parameter Efficient Fine Tuning. Are you looking to master the art of fine tuning Large Language Models like GPT-3, BERT, and T5? Look no further! In this comprehensive video, we dive deep into the world of Parameter Efficient Fine Tuning (PEFT) methods, with a special focus on the game-changing technique called Low Rank Adaptation (LoRA). Discover why fine...
From Lehman Brothers to US Debt Ceiling: Leveraging AI and Fine-Tuned LLMs for Risk Management
383 views • 4 months ago
We explore how Fine-Tuning Large Language Models (LLMs) can revolutionize risk management in the Capital Markets industry, particularly during crisis events like the Lehman Brothers default, Euro Sovereign Debt Crisis, Brexit, and US Debt Ceiling debates. Discover how Fine-Tuning LLMs to access Risk System APIs can generate custom risk reports in real-time using natural language queries. Learn ...
Building Local & Hosted AI Applications with Langchain and Hugging Face Models: A Streamlit Tutorial
519 views • 4 months ago
Link to GitHub repository: github.com/mrspiggot/Lucidate-FineTune Leveraging Hugging Face Models with Langchain Accessing Hugging Face Models In this video, we'll explore how to tap into Hugging Face's vast array of language models using the Langchain library. Langchain provides a high-level interface for integrating these models into your workflows. Prompt Engineering Crafting effective prompt...
Introduction to AI App development & Fine-Tuning. How to build AI apps with LLMs & LangChain in 2024
734 views • 4 months ago
Welcome to our comprehensive tutorial series on "Fine-Tuning Large Language Models (LLMs)"! If you're new to LLMs or looking to deepen your expertise, this video is the perfect starting point. Dive into the fundamentals with our "Introduction to LLMs" and discover essential techniques for optimizing LLM performance. Andrew Ng Video 'Opportunities in AI:: ruclips.net/video/5p248yoa3oE/видео.html...
Benchmarking AI: Finding the Best Code Generation Model using CodeBleu
1K views • 5 months ago
Discover the future of AI code development in this comprehensive look at code generation models! Richard Walker from Lucidate delves into the exciting world of Large Language Models (LLMs) like GPT-4 and how they're shaping our coding landscape. From examining coding communities' contributions to exploring advanced fine-tuning on platforms like HuggingFace and Ollama, this video is your ultimat...
Text Summarisation Showdown: Evaluating the Top Large Language Models (LLMs)
426 views • 5 months ago
Dive into the world of AI with Richard Walker, founder of Lucidate, as we embark on a quest to discern the most effective Large Language Model for text summarization. This in-depth video is tailored for AI enthusiasts, data-driven professionals, and decision-makers looking to leverage the power of artificial intelligence for summarizing complex information, especially within the financial secto...
Revolutionize Document Creation with Generative AI: Using LangChain ReAct Agents and Tools
671 views • 6 months ago
Learn how Generative AI is transforming document creation in this Lucidate Alchemy tutorial [8:30]. Discover how to use AI-powered tools to turn unstructured content into polished documents, enhance your writing with the latest web information, and ensure accuracy with automated fact-checking. In this video, we dive into the technology behind Lucidate Alchemy, including: 0:00 - Introduction and...
AI Document Writing Made Easy: Create, Enhance, & Verify in Minutes
522 views • 6 months ago
Discover the unparalleled power of Generative AI in document creation with Lucidate Alchemy. In this in-depth tutorial, we unveil how AI document creation can revolutionize the way professionals like you manage and enhance business documents. 📈 Enhance Your Papers with the Latest Information: Lucidate Alchemy is not just an AI-powered writing tool; it's your partner in achieving comprehensive, ...
AI Document Creation Revolution in 2024: How to Automate & Mobilize with AI - New Strategies!
578 views • 6 months ago
AI-Powered Alchemy: Transforming Financial Data into Strategic Gold
573 views • 7 months ago
The fundamentals of LLMs and Prompt Engineering in 3 easy steps!
1.5K views • 8 months ago
AI's Game-Changing Role in Derivatives Trading: Expert Insights Revealed
492 views • 9 months ago
Revolutionize Equity Analysis: How AI and LLMs are Changing the Game in Finance
768 views • 10 months ago
From Pints to Insights: Unveiling Semantic Search Power with Word Embeddings and Vector Databases
322 views • 10 months ago
From raw Excel spreadsheet to client-ready powerpoint using a fine-tuned LLM. Derivatives & LDI
887 views • 10 months ago
Witness AI Magic: Risk Insights in Seconds
576 views • 11 months ago
See How A.I. Can Streamline Your Equity Investment Analysis Process
549 views • 1 year ago
How AI Unlocks Hidden Insights in Research Reports
1.3K views • 1 year ago
What if AI Could Out-trade Human Experts?
1.5K views • 1 year ago
I Built an AI Financial Advisor in 10 Minutes using LangChain with Chain of Thought & ReAct
4.3K views • 1 year ago
Build your own Finance AGI!
3.7K views • 1 year ago
Mastering AI FinBot Development: Tutorial Guide to LangChain, Prompt Engineering, & Tree of Thoughts
2.8K views • 1 year ago
Revolutionizing FinTech: Build Your Own Robo-Adviser with LangChain
3.9K views • 1 year ago

Comments

  • @musterschnitt
    @musterschnitt 3 days ago

    Really fantastic introduction for the very curious and interested, but at this starting point more or less clueless, layman. Thank you A LOT!!!!

    • @lucidateAI
      @lucidateAI 2 days ago

      Glad it was helpful!

    • @lucidateAI
      @lucidateAI 2 days ago

      @@musterschnitt Did you have a chance to take a look at any of the other videos in this series? Neural Network Primer ruclips.net/p/PLaJCKi8Nk1hzqalT_PL35I9oUTotJGq7a.

    • @musterschnitt
      @musterschnitt 2 days ago

      @@lucidateAI Thanks for your reply, definitely my "homework" every evening this coming week! Just recently touched upon the entire subject of what a modern transformer is. I'm a partner at a small media agency, so the topic is obviously more than important to us. And, honestly, fascinating to me, always had a lay interest in language philosophy and neurosciences - and somehow all of this seems to be coming together here. 👍

    • @lucidateAI
      @lucidateAI 2 days ago

      A background in neurosciences is certainly helpful! At the risk of being that hated teacher who sets more homework, here is a Transformers playlist - ruclips.net/p/PLaJCKi8Nk1hwaMUYxJMiM3jTB2o58A6WY&si=1dg2RW8Yy9skruVb. And for 'just the facts' - intro to transformers in sixty seconds playlist - ruclips.net/p/PLaJCKi8Nk1hxM3F0E2f2rr5j6wM8JzZZs&si=rcnwmbBabF9XM25a. Also 'neural networks in sixty seconds' playlist - ruclips.net/p/PLaJCKi8Nk1hxNSMM8FSWCWsScstfutKGn&si=UPVG7iDFMd8vrNUs. If you get a chance to watch any of them, then I hope you find them useful. Let me know!

    • @musterschnitt
      @musterschnitt 2 days ago

      @@lucidateAI Neuroscience: it's really just lay knowledge, i.e. lots of books about it. But, yeah, some allegories obviously in the structural design and set-up of digital "neurons", in relation to biological ones. No risk at all, Sir! Here's one who constantly bites off more than he can chew. :) I've marked all those playlists already and will dive in! Hope my non-existent math knowledge won't bring it to a halt too soon. But then again, why not ask a GPT to give me a fresh-up on matrix notation and processing, for example? 😂 Really curious to understand why GPT processing seems to be a "black box", even for the developers, from a certain point. Has it to do with the sheer complexity and size of functions & parameters? Would seem like another allegory to the biological brain, where a single neuron with dendrites, axon and synapses and electro-chemical signaling is a lot. But to grasp that potential being multiplied by ~16 billion in the cortex alone is impossible. Thanks again for so generously sharing your knowledge in such an accessible way!

  • @pascalschm1111
    @pascalschm1111 6 days ago

    I have one question left: in the context of inference, only sentence completion was discussed. How does it differ between the different use cases (especially Q&A)? Why/how does ChatGPT not just create a new sentence concatenated to the prompt?

    • @lucidateAI
      @lucidateAI 6 days ago

      Chat models contain the entire context of the conversation - or at least as much as can fit in the model's context window. So it adds the new generated text to the existing conversation chain. Thus the prior conversations and exchanges will influence the future response generated. This video ruclips.net/video/BCabX69KbCA/видео.html explores this phenomenon in more detail if you are keen to dive deeper.
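
      To make that concrete, here is a minimal, library-free sketch (not how ChatGPT is actually implemented) of a chat loop that appends each exchange to the running conversation and trims the oldest turns once a rough context-window budget is exceeded. The llm_complete stand-in, the 4-characters-per-token estimate and the 4096-token budget are all illustrative assumptions.

```python
# Minimal sketch of how a chat loop can carry prior context forward.
# `llm_complete` is a hypothetical stand-in for a real model call; the
# 4-characters-per-token estimate and the 4096-token budget are illustrative.

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to an actual language model."""
    return f"[model reply to {len(prompt)} chars of context]"

def rough_token_count(text: str) -> int:
    return len(text) // 4  # crude heuristic, not a real tokenizer

def chat(history: list[str], user_message: str, max_tokens: int = 4096) -> str:
    history.append(f"User: {user_message}")
    # Drop the oldest turns once the conversation exceeds the context budget.
    while rough_token_count("\n".join(history)) > max_tokens and len(history) > 1:
        history.pop(0)
    reply = llm_complete("\n".join(history))   # the whole conversation is the prompt
    history.append(f"Assistant: {reply}")      # the reply becomes context for the next turn
    return reply

conversation: list[str] = []
print(chat(conversation, "What is an option?"))
print(chat(conversation, "And what does 'delta' mean for it?"))  # sees the prior turn
```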

  • @DenghePrece
    @DenghePrece 12 days ago

    Great video as always! 👍 Need some advice: 🙏 I have a set of words 🤷‍♂️. (behave today finger ski upon boy assault summer exhaust beauty stereo over). I don't know what they are. What should I do with them? 🤷‍♀️

    • @lucidateAI
      @lucidateAI 11 days ago

      Looks like a crypto wallet seed phrase. If it is then if I were you I'd keep them a little more secret than you have to date. If not that then perhaps you have stumbled across the menu at your local hipster café - 'I'll have the "finger ski" with a side of "exhaust beauty," please!' Just be careful not to order the "assault summer" - I hear it's a bit too spicy for most people. And whatever you do, don't say them three times in front of a mirror, or you might summon the ghost of a confused lexicographer

  • @matrixpredator
    @matrixpredator 16 days ago

    I am sure OpenAI integrated this idea :)

    • @lucidateAI
      @lucidateAI 16 days ago

      Agreed. While OpenAI have not definitively stated that Strawberry uses these techniques (at least not in the papers I’ve seen thus far), they provide some pretty strong hints that they are using these techniques, or at least analogous ones. It should be acknowledged that they are more sophisticated than those presented in this video; but variations on forward chaining and backtracking seem to be at the heart of their reasoning.

  • @karthikkjs9859
    @karthikkjs9859 20 days ago

    What are N, d1 and d2?

    • @lucidateAI
      @lucidateAI 20 days ago

      They are all defined at 0:20 in the video. “N” is the cumulative distribution function of the Normal (Gaussian) distribution from statistics. d1 is a formula with inputs of the option strike, the underlying price, the volatility, the interest rate, the dividend yield and time to maturity of the option. d2 is also a formula with inputs of d1 (determined as above) along with the volatility and time to maturity. Once you have values for d1 and d2 you can use them in the formula to calculate the ideal call and put valuation (also shown at 0:20 in the video). You use the normal distribution to operate on d1 and d2, along with the strike price and price of the underlying, to come up with the call and put price. As an exercise you can use a spreadsheet or write some Python code to implement all of these formulae to come up with option prices for a range of strikes, underlying prices, volatilities, times to expiration etc. If you are new to Black-Scholes and options pricing I’d highly recommend doing this for the insights you will gain into time decay, sensitivity to vol etc.
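
      As the reply suggests, this is straightforward to implement in Python. A minimal sketch of the standard Black-Scholes formulae follows, where N is the standard normal cumulative distribution function and the example inputs (spot, strike, rate, volatility) are purely illustrative.

```python
# Minimal Black-Scholes sketch: N is the standard normal CDF, d1 and d2 are
# the usual intermediate terms; the example inputs below are purely illustrative.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf  # standard normal cumulative distribution function

def black_scholes(S, K, T, r, sigma, q=0.0):
    """Return (call, put) prices for spot S, strike K, time T (years),
    rate r, volatility sigma and dividend yield q."""
    d1 = (log(S / K) + (r - q + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    call = S * exp(-q * T) * N(d1) - K * exp(-r * T) * N(d2)
    put = K * exp(-r * T) * N(-d2) - S * exp(-q * T) * N(-d1)
    return call, put

# Illustrative example: at-the-money option, 1 year to expiry, 20% vol, 5% rate.
call, put = black_scholes(S=100, K=100, T=1.0, r=0.05, sigma=0.20)
print(f"call = {call:.2f}, put = {put:.2f}")
```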

  • @atulbhardwaj90
    @atulbhardwaj90 1 month ago

    You are leagues apart when it comes to explaining complex concepts! Thanks and please never stop :)

    • @lucidateAI
      @lucidateAI 1 month ago

      Thank you for your kind words and compliments. Greatly appreciated!

  • @may4081
    @may4081 1 month ago

    This is an underrated video. Matching the solution to the right use case and perhaps the right combinative strategy towards a use case are the most interesting/promising approaches IMO.

    • @lucidateAI
      @lucidateAI 1 month ago

      Thanks @may4081. Glad you found the video informative and the content compelling. Greatly appreciated!

  • @Mari_Selalu_Berbuat_Kebaikan
    @Mari_Selalu_Berbuat_Kebaikan 1 month ago

    Let's always do a lot of good ❤ Nam myoho renge kyo

    • @lucidateAI
      @lucidateAI 1 month ago

      @@Mari_Selalu_Berbuat_Kebaikan Agreed

  • @HuwAllen
    @HuwAllen 1 month ago

    This could be so valuable. I start the day going through my email inbox, and because it's such a cumbersome, disorganised process, it can take up to 3 hours to complete, well, practically complete to use a construction term. The depressing thought is that I have to action a proportion of them in the afternoon and like Groundhog day, start the whole process again the next day.

    • @lucidateAI
      @lucidateAI 1 month ago

      Glad you found it useful! Appreciate the comment. GenAI can add huge value here. Right now you have to ‘roll your own’ but my assumption is that this type of technology will soon be baked into all the popular email apps.

  • @ahishverma181
    @ahishverma181 1 month ago

    Hi, I had built a similar product for a client. But what should I charge them?

    • @lucidateAI
      @lucidateAI 1 month ago

      What is it worth to them? What is their next best alternative to what you have built? If you know the answers to these questions then pricing for any product is straightforward. Without answers to these questions you are just guessing.

  • @Ony_mods
    @Ony_mods 1 month ago

    It would be better without animations from the '90s, to be honest.

  • @The...0_0...
    @The...0_0... 1 month ago

    This was great thanks 🎉

    • @lucidateAI
      @lucidateAI 1 month ago

      Glad you found it useful.

  • @ocin3055
    @ocin3055 1 month ago

    Thanks, that was super helpful!

    • @lucidateAI
      @lucidateAI 1 month ago

      Glad to hear you found it useful!

  • @zengxiliang
    @zengxiliang 2 months ago

    Wow this is stunning ❤

    • @lucidateAI
      @lucidateAI 2 months ago

      Glad you found it useful! Do you have additional use-cases of your own for this text-to-code (in this case text-to-SQL) approach?

    • @zengxiliang
      @zengxiliang 2 months ago

      @@lucidateAI Absolutely, I am focused on building AI agents that automate BI dashboard creation; currently working in the private equity fund investments space.

  • @NithinDinesh-l3h
    @NithinDinesh-l3h 2 months ago

    What an awesome video. Probably the best video on the internet for positional encodings. Loved every bit of it.

    • @lucidateAI
      @lucidateAI 2 months ago

      Glad you enjoyed it!

  • @whodat8528
    @whodat8528 2 months ago

    What’s good

  • @stevenicfred
    @stevenicfred 3 months ago

    The Lucidate series are excellent - so rare to find the combination of Richard's deep understanding harnessed with his excellent communication skills. Highly recommended to follow.

    • @lucidateAI
      @lucidateAI 3 months ago

      That is extremely kind of you to say so. I’m glad you are enjoying the materials and finding them useful. Any suggestions for material that Lucidate hasn’t covered yet?

  • @anthonyzeal6263
    @anthonyzeal6263 3 months ago

    Most cop out and assume there's enough content for step one for people to look into. Thanks for being comprehensive.

    • @lucidateAI
      @lucidateAI 3 months ago

      You are welcome. Appreciated. Glad you found it informative and comprehensive.

  • @hoangnam6275
    @hoangnam6275 3 months ago

    Great source of knowledge. Used to register to ur channel with tier 2 membership since first/second quarter of 2023 due to ur comprehensive knowledge for my case. But gonna re-register in the next few months due to job requirement. Great works🎉, thanks for ur contribution

    • @lucidateAI
      @lucidateAI 3 months ago

      Thanks. I’m glad you find the content on the channel useful. Best wishes in utilizing AI productively in your job.

  • @thegoldenvoid
    @thegoldenvoid 3 months ago

    Really well explained! Thanks!

    • @lucidateAI
      @lucidateAI 3 months ago

      Glad you enjoyed it!

  • @gmax876
    @gmax876 3 months ago

    😐

  • @CharlesOkwuagwu
    @CharlesOkwuagwu 3 months ago

    I left a comment on the provided repo. I've been unable to extend it to classes beyond NEGATIVE and POSITIVE. It seems the 'distilbert-base-uncased-finetuned-sst-2-english' model is designed for just these two classes.

    • @lucidateAI
      @lucidateAI 3 months ago

      You are correct. The classification head on this model is for binary classification. At 14:00 or so in this video one of the suggestions for extending this app is to look at a classification head for multiple classes. This discussion thread on HF has some links that you might find useful: discuss.huggingface.co/t/multilabel-classification-using-llms/79671. If you are happy to share your results and experiences I’d love to hear how you get on.
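
      For anyone following the same path, one possible starting point (assuming the Hugging Face transformers library, with a made-up three-class label set) is to load a base checkpoint with a classification head sized for the number of classes; the new head is randomly initialised, so it still needs fine-tuning.

```python
# Sketch of swapping the binary SST-2 head for a multi-class one with
# Hugging Face transformers. The three labels are made-up examples; the new
# head is randomly initialised, so the model still needs fine-tuning.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["negative", "neutral", "positive"]        # hypothetical label set
model_name = "distilbert-base-uncased"              # base model, not the SST-2 fine-tune

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=len(labels),
    id2label={i: l for i, l in enumerate(labels)},
    label2id={l: i for i, l in enumerate(labels)},
)

inputs = tokenizer("Service was fine, nothing special.", return_tensors="pt")
logits = model(**inputs).logits                     # shape: (1, num_labels)
print(logits.shape)
```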

    • @CharlesOkwuagwu
      @CharlesOkwuagwu 3 months ago

      @@lucidateAI Seen the discussion. Clearly text classification is the domain of encoder-based models, or BERT-like models. I'll keep searching. But I'm studying the Hugging Face tutorials from scratch. I'll revisit this once I have a good grounding in the Hugging Face pipelines and their intended use from the ground up.

    • @lucidateAI
      @lucidateAI 3 months ago

      Makes sense. I think the HF YT tutorials are excellent. Time well spent.

  • @CharlesOkwuagwu
    @CharlesOkwuagwu 3 months ago

    Hi, please can you inform us on the effects of imbalanced data on fine-tuning?

    • @lucidateAI
      @lucidateAI 3 months ago

      Hi @CharlesOkwuagwu. Clearly each model and training set will have its own idiosyncrasies, so it is naturally impossible to say for certain, but you would expect to see bias in the results at inference time, where the model performs well on the majority data that it has seen, but poorly for the minority classes. It will also generalise poorly to real-world unbiased examples, as the model has been trained on biased data that does not reflect the distribution in the real world. Also the performance metrics will likely be skewed - check out this video ruclips.net/video/a2oZwdwo0M0/видео.html on performance metrics, which explains how performance metrics like accuracy can be misleading. In imbalanced datasets, a model can achieve high accuracy by simply predicting the majority class most of the time. This does not mean the model is performing well on all classes. More informative metrics such as precision, recall, F1-score, and the area under the receiver operating characteristic curve (ROC-AUC) should be used to evaluate the model's performance more effectively.
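
      To make the point about misleading accuracy concrete, here is a small scikit-learn sketch with synthetic labels (purely illustrative): a classifier that always predicts the majority class scores 95% accuracy, while the per-class report shows zero recall on the minority class.

```python
# Illustration of misleading accuracy on imbalanced data, using scikit-learn.
# The labels are synthetic: 95 "majority" examples and 5 "minority" ones, and
# the "model" simply predicts the majority class every time.
from sklearn.metrics import accuracy_score, classification_report

y_true = [0] * 95 + [1] * 5        # heavily imbalanced ground truth
y_pred = [0] * 100                 # degenerate classifier: always predicts class 0

print("accuracy:", accuracy_score(y_true, y_pred))          # 0.95 - looks great
print(classification_report(y_true, y_pred, zero_division=0))
# Recall and F1 for class 1 are 0.0: the model never finds the minority class.
```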

    • @CharlesOkwuagwu
      @CharlesOkwuagwu 3 months ago

      @@lucidateAI Thanks for the response. I have a curated dataset of customer chats, each labeled by a human, a total of 130 classes. The number in each class goes from 5 to over 6,000. Total records over 30,000. I'm trying to see if maybe I should use an LLM to synthesize chat samples where we have fewer real samples, to get a better balance. Your thoughts🤔💭

    • @lucidateAI
      @lucidateAI 3 months ago

      I guess the first question to ask is - “Are the statistics of the dataset representative of the real world?” (And I’m certain that you have already posed that question!!). Clearly if they are then there is little value in generating synthetic data. If not, then I’d first ask if there is a way to get a more representative dataset before generating synthetic data. While generating synthetic data is a common approach, and with the right controls reasonably safe, I’m leery of it. The problem is that AI models will find patterns, whether there is a pattern there or not. If there are biases in the synthetic dataset production that introduce artificial artefacts into the synthetic dataset, then the LLM (or frankly any other AI/ML system) will almost always “discover” them. This can massively contaminate performance during inference. In capital markets many firms generate synthetic prices, and unless you are very careful, models trained on synthetic prices perform poorly on real-world data. Then consider LLMs themselves, trained on vast corpora of data sourced from the public Internet. At first it is a reasonable assumption that the idioms of language they were learning were genuine human language. As more and more content becomes LLM-generated there is clearly a chance that all the LLMs learn is “LLM-ese”. So a long-winded answer (but you did ask!) is to use synthetic data as a last resort and be very careful with its construction. Far better to try and source a representative data set if you can. Good luck!

  • @BABEENGINEER
    @BABEENGINEER 3 months ago

    How do you generate these graphics/animations in your videos? They're too good!

    • @lucidateAI
      @lucidateAI 3 months ago

      Thanks @BABEENGINEER. I use Manim (MAthematics ANIMation): docs.manim.community/en/stable/tutorials/quickstart.html. This video ruclips.net/video/WoittT72pgA/видео.htmlsi=euGXRqlxBjGuaa1e at 8:51 shows how I’ve fine-tuned an LLM, CodeLlama in this case, to help write the classes to produce the animation a little faster (and in some cases much better!) than I can craft by hand.
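
      For anyone curious what a Manim scene looks like, here is a minimal quickstart-style example (generic Manim Community boilerplate, not the actual code behind the Lucidate animations):

```python
# Minimal Manim Community scene - generic quickstart-style example, not the
# actual code used for the Lucidate animations. Render with:
#   manim -pql scene.py SquareToCircle
from manim import Scene, Square, Circle, Create, Transform, BLUE

class SquareToCircle(Scene):
    def construct(self):
        square = Square()                       # start with a square
        circle = Circle().set_fill(BLUE, opacity=0.5)
        self.play(Create(square))               # draw the square
        self.play(Transform(square, circle))    # morph it into the circle
        self.wait()
```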

    • @lucidateAI
      @lucidateAI 3 months ago

      Did you get a chance to check out manim?

  • @BABEENGINEER
    @BABEENGINEER 3 months ago

    Really good explanation and the background music kept my focus flowing 👏👏👏

    • @lucidateAI
      @lucidateAI 3 months ago

      Glad you found it insightful!

  • @nickwang4777
    @nickwang4777 3 months ago

    Does this project have GitHub repo?

    • @lucidateAI
      @lucidateAI 3 months ago

      Hi @nickwang4777. No it does not.

  • @volkerengels5298
    @volkerengels5298 3 months ago

    Without seeing the video yet -> *create real problems for the world*. Rob Miles isn't an idiot. Seen it now: "More CO2 is good for plants" is similarly half-true.

    • @lucidateAI
      @lucidateAI 3 months ago

      I’m probably guilty here of a misleading title. In this video the “problems” that are being solved are problems that pertain only to the Big Tech companies themselves. A more representative title might be “Unleashing the Power of Machine Learning: How Big Tech Solves _their own_ Real-World Problems.” But this is perhaps a bit too long. Or have I misunderstood the point you are making?

    • @volkerengels5298
      @volkerengels5298 3 months ago

      @@lucidateAI In short - my dog wants out :) "their own" is more fitting. I also think it makes sense to make people realize that AI, ML can actually work, make money and solve problems. On the really grim side: Have you seen Rob Miles' new video? It's worth it... We cannot live in a world where 5 companies hold all the power and money - with no control whatsoever. We cannot implement AI in THIS world in peace. imo Thanks

    • @lucidateAI
      @lucidateAI 3 months ago

      Thanks. I’ve not yet seen Rob Miles’ new video. I’ll check it out.

  • @adamelkhanoufi6126
    @adamelkhanoufi6126 3 months ago

    Is there any chance you can share the source code to your streamlit app. I've been looking to create my own LLM benchmarking tool on streamlit as well and when I saw you pull out your benchmarking app I got super excited. But unfortunately no link in description :(

    • @lucidateAI
      @lucidateAI 3 months ago

      github.com/mrspiggot/LuciSummarizationApplication With thanks and apologies. I've just updated the description. Enjoy the repo!

    • @adamelkhanoufi6126
      @adamelkhanoufi6126 3 months ago

      @@lucidateAI No, thank you for the rapid response. You sir just earned another subscriber👍

    • @lucidateAI
      @lucidateAI 3 months ago

      Thanks! I hope you enjoy the other videos on the channel as much as this one.

    • @lucidateAI
      @lucidateAI 3 months ago

      How have you got on with the code in the repo? Have you been able to use it as a platform to add your own functionality?

    • @Barc0d3
      @Barc0d3 1 month ago

      @@lucidateAI❤

  • @hoangnam6275
    @hoangnam6275 3 months ago

    Nice to have u back

    • @lucidateAI
      @lucidateAI 3 months ago

      Did you miss me?

    • @hoangnam6275
      @hoangnam6275 3 months ago

      @@lucidateAI Your videos provide comprehensive knowledge in this field with technical details, and your viewers must be people who are very passionate about AI.

    • @lucidateAI
      @lucidateAI 3 months ago

      Thanks!

  • @pmatos0071
    @pmatos0071 4 months ago

    Great video, thank you for the share

    • @lucidateAI
      @lucidateAI 4 months ago

      Glad you enjoyed it

  • @Eltaurus
    @Eltaurus 4 months ago

    10:15 - This is not true, though. Euclidean distance does not only depend on the lengths of the vectors added, but also on the angles between the added encoding vectors and the original embedding vectors, which won't be the same if words are swapped. That can easily be checked with a direct computation. In the first case the distance between the vectors corresponding to the words "swaps" and "are" is equal to
    √[(-35.65-19.66)² + (59.47+61.65)² + (35.25-34.55)² + (-21.78-88.36)² + (33.44-50.35)²] = 173.627
    while in the second case it equals
    √[(-36.65-20.66)² + (60.47+62.65)² + (35.25-34.55)² + (-21.78-88.36)² + (33.44-50.35)²] = 175.671
    So with one-hot positional encoding the distances just as well depend on the positions of words in a sentence. The reason for not using one-hot encodings for positions is actually a completely different one.
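
    The specific numbers above come from the embeddings shown in the video, but the check itself is easy to reproduce with NumPy on any embedding and positional-encoding vectors (the vectors below are made up): the Euclidean distance between (embedding + position) vectors generally changes when two words swap positions, because the angles between the summed vectors change too.

```python
# Reproducing the kind of check described above with NumPy. The embedding and
# positional-encoding vectors here are made up; the point is only that the
# Euclidean distance between (embedding + position) vectors changes when two
# words swap positions.
import numpy as np

emb_swaps = np.array([1.0, 0.2, -0.5])   # hypothetical embedding for "swaps"
emb_are   = np.array([0.1, 0.9,  0.3])   # hypothetical embedding for "are"
pos = {1: np.array([0.0, 0.0, 1.0]),      # hypothetical encodings for positions 1 and 2
       2: np.array([0.0, 1.0, 0.0])}

def dist(a, b):
    return float(np.linalg.norm(a - b))

original = dist(emb_swaps + pos[1], emb_are + pos[2])   # "swaps" first, "are" second
swapped  = dist(emb_swaps + pos[2], emb_are + pos[1])   # positions exchanged
print(original, swapped)   # generally not equal
```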

  • @ricardofernandez2286
    @ricardofernandez2286 4 months ago

    Hi, this is the first time I have watched one of your videos, and I've found your explanations mind-opening. In this video you mention other videos that are recommended in order to better understand some complex concepts. I searched your channel for a sort of "series" but I could not find one that glues all these videos together. As a newbie, however eager to learn on the topic, I was unable to determine that myself. Would you be so kind as to mention which videos, and in which order, we should watch them in order to get a comprehensive understanding of the topic, from the most basic concepts to the current state of development? It will be much appreciated!! Best regards! Ricardo

    • @lucidateAI
      @lucidateAI 4 months ago

      Thanks @ricardofernandez2286 for your kind words. I'm glad you enjoyed the video. This particular video is part of this larger playlist -> ruclips.net/p/PLaJCKi8Nk1hwaMUYxJMiM3jTB2o58A6WY

    • @lucidateAI
      @lucidateAI 4 months ago

      You can find a list of all the Lucidate playlists here -> www.youtube.com/@lucidateAI/playlists

    • @lucidateAI
      @lucidateAI 4 months ago

      Take a look at these as well ruclips.net/p/PLaJCKi8Nk1hzqalT_PL35I9oUTotJGq7a&si=cDgVTll8TiWNK4RV and

    • @ricardofernandez2286
      @ricardofernandez2286 4 months ago

      @@lucidateAI You deserve!! And thank you very much for your comprehensive and fast response. I will certainly look at the playlists you recommended! Best regards!!

    • @lucidateAI
      @lucidateAI 4 months ago

      I can't wait to hear what you think!

  • @jayhu6075
    @jayhu6075 4 months ago

    What a great explanation about this topic.

    • @lucidateAI
      @lucidateAI 4 months ago

      You are welcome! Glad you enjoyed it!

  • @zengxiliang
    @zengxiliang 4 months ago

    Exciting to see the potential of specialized and enhanced LLMs!

    • @JoshuaCunningham-vg7xg
      @JoshuaCunningham-vg7xg 4 months ago

      Agreed!

    • @lucidateAI
      @lucidateAI 4 months ago

      Me too and I think that the potential is going to increase exponentially. Appreciate the comment as well as your membership and subscription. Richard.

    • @lucidateAI
      @lucidateAI 4 months ago

      Glad you agree with @zengxiliang. I agree too (naturally...). Are there any areas of focus you are interested in? Mine is predominantly capital markets - which is probably evident. But in my consulting business I see interest from a wide variety of industry sectors outside of finance, and I'm always curious and excited to see where people are utilising generative and agentic AI. Appreciate the comment and the support of the channel. Richard

    • @zengxiliang
      @zengxiliang 4 months ago

      @@lucidateAI Thanks Richard! I work at a pension fund, we are actively exploring applications of LLMs now, your content is very inspiring and helpful!

    • @lucidateAI
      @lucidateAI 4 months ago

      Glad you are finding the material useful.

  • @joshuacunningham7912
    @joshuacunningham7912 4 months ago

    So good! Thank you for educating in a way that’s easy to understand. 👏

    • @lucidateAI
      @lucidateAI 4 months ago

      You are welcome. Delighted you found the content useful.

  • @Blooper1980
    @Blooper1980 4 months ago

    CANT WAIT!!!!!!!

    • @lucidateAI
      @lucidateAI 4 months ago

      Glad you found it useful. Videos 2 and 3 are already complete and should be on general release next week. (Currently they are available to Lucidate members at the VP, MD or CEO levels.) I'm just finishing off the LoRA video as I type. That should be out the week after next. Appreciate the support and I hope you found the content insightful.

  • @AbdennacerAyeb
    @AbdennacerAyeb 4 months ago

    You are a gemstone. Thank you for sharing knowledge.

    • @lucidateAI
      @lucidateAI 4 months ago

      Thanks @AbdennacerAyeb! Greatly appreciated. I'm glad you enjoyed the video!

  • @jon4
    @jon4 4 months ago

    Another great video. Really looking forward to this series

    • @lucidateAI
      @lucidateAI 4 months ago

      You are welcome. Really glad you found it useful.

  • @encapsulatio
    @encapsulatio 4 months ago

    Which LLM, of all you have tested up to now (in general, not only the ones you talked about in this video), is currently the best at breaking down university-level subjects using pedagogical tools? If I ask the model to read 2-3 books on pedagogical tools, can it properly learn how to use these tools and actually apply them to explain the subjects more clearly?

    • @lucidateAI
      @lucidateAI 4 months ago

      This video is focused on which models perform the best at generating source code (that is to say Java, C++, python etc.). On the other hand the subject of this video -> Text Summarisation Showdown: Evaluating the Top Large Language Models (LLMs) ruclips.net/video/8r9h4KBLNao/видео.html is on text generation/translation/summarization etc. Perhaps the other video is more what you are looking for? In either event the key takeaway is that by all means rely on public, published benchmarks. But if you want to evaluate models on your specific use-case (and if I correctly understand your question, I think you do) then it might be worth considering setting up your own tests and your own benchmarks for your own specific evaluation. Clearly there is a trade off here. Setting up custom benchmarks and tests isn’t free. But if you understand how to build AI models, then it isn’t that complex either.

    • @encapsulatio
      @encapsulatio 4 months ago

      @@lucidateAI I reformulated a bit my inquiry since it was not clear enough. Can you read it again please?

    • @lucidateAI
      @lucidateAI 4 months ago

      Thanks for the clarification. The challenge with reading 2 or 3 books will be the size of the LLMs context window (the amount of tokens that can be input at once). Solutions to this involve using vector databases - example here -> ruclips.net/video/jP9swextW2o/видео.html This involves writing Python code and development frameworks like LangChain. You may be an expert at this, in which case I'd recommend some of the latest Llama models and GPT-4. Alternatively you can use Gemini and Claude 3 and feed in sections of the books at a time (up to the token limit of the LLM). These models tend to perform the best when it comes to breaking down complex, university-level subjects. They seem to have a strong grasp of pedagogical principles and can structure explanations in a clear, easy-to-follow manner. That said, I haven't specifically tested having the models read books on pedagogical tools and then applying those techniques. It's an interesting idea though! Given the understanding these advanced models already seem to have, I suspect that focused training on pedagogical methods could further enhance their explanatory abilities. My recommendation would be to experiment with a few different models, providing them with sample content from the books and seeing how well they internalize and apply the techniques. You could evaluate the outputs to determine which model best suits your needs.
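
      A crude sketch of the 'feed in sections at a time' idea follows (the ask_llm call is a hypothetical stand-in for a real model, and the character-based chunk size is only a rough proxy for a token budget): summarise each chunk, then summarise the summaries.

```python
# Crude illustration of working around a context-window limit by feeding a long
# text in sections. `ask_llm` is a hypothetical model call; the chunk size is a
# rough character budget, not a real token count.
def ask_llm(prompt: str) -> str:
    return f"[summary of {len(prompt)} characters]"   # placeholder

def chunk(text: str, size: int = 8000) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarise_book(book_text: str) -> str:
    partial = [ask_llm(f"Summarise the key pedagogical techniques in:\n{c}")
               for c in chunk(book_text)]              # one pass per section
    return ask_llm("Combine these notes into one set of techniques:\n"
                   + "\n".join(partial))               # second pass over the summaries

print(summarise_book("..." * 10000))
```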

  • @sandstormfeline3664
    @sandstormfeline3664 5 months ago

    I was looking for a video to help get my head around tree of thought with a working example, and I found it. great work thanks :)

    • @lucidateAI
      @lucidateAI 5 months ago

      You are very welcome. I’m glad you found it insightful. ruclips.net/p/PLaJCKi8Nk1hyvGVZub2Ar7Az57_nKemzX&si=JwiUaQ-UojUXoOwA here are some other video explainers on other Prompt Engineering techniques that I hope you find equally informative.

  • @joshuacunningham7912
    @joshuacunningham7912 5 months ago

    This is one of the most underrated AI RUclips channels by far. Thanks Richard for another phenomenal video.

    • @lucidateAI
      @lucidateAI 5 months ago

      Appreciate that! Thanks! Glad you found this video and other content on the channel insightful.

  • @paaabl0.
    @paaabl0. 6 months ago

    Well, you didn't explain a thing about AutoGPT here :/

    • @lucidateAI
      @lucidateAI 6 months ago

      Sorry @paaabl0, but thanks for leaving a comment. Let me try, if I may, from another angle. The inputs and outputs to LLMs are natural language - human text. (Yes, literally they are vectors of subword tokens, but I hope you will forgive the abstraction.) If you type text into an LLM, you get text out. AutoGPT works by using this feature of LLMs and putting an LLM into a loop. As the inputs and outputs are both natural language, you can use clever prompts to control and direct this loop. While there are many prompting techniques you can use, 'Plan & Execute' as well as 'ReAct' (REasoning & ACTion) are popular choices here. They work by first instructing the LLM to go through a sequence of steps - such as: 1 Question, 2 Thought, 3 Action, 4 Action Input, 5 Observation (repeat the previous steps until) 6 Thought == 'I now know the answer to the original question', 7 Divulge answer. See an example of this type of prompt here:

      Answer the following questions as best you can. You have access to the following tools:
      {tools}
      Use the following format:
      Question: the input question you must answer
      Thought: you should always think about what to do
      Action: the action to take, should be one of [{tool_names}]
      Action Input: the input to the action
      Observation: the result of the action
      ... (this Thought/Action/Action Input/Observation can repeat N times)
      Thought: I now know the final answer
      Final Answer: the final answer to the original input question
      Begin!
      Question: {input}
      Thought:{agent_scratchpad}

      This is authored by Harrison Chase, founder of LangChain, and you can access it at the LangChain hub under 'hwchase17/react'. This is the heart of AutoGPT (and other similar attempts at AGI). By using the 'input is language / output is also language / prompt the LLM into a loop where early stages are about thinking and planning, middle stages are about reasoning and action, and final stages are about conclusion and output' pattern, you achieve the type of behaviour associated with tools/projects like AutoGPT. Perhaps this different explanation helped a little, perhaps not. Clearly there are a good many great YT sites on AI and I hope one of them is able to answer your questions around AutoGPT better than I'm able. With thanks for taking the time to comment on the video.
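
      A stripped-down sketch of that 'LLM in a loop' idea in plain Python follows - not LangChain's or AutoGPT's actual implementation. The call_llm function and the single search tool are hypothetical placeholders; a real agent would swap in a genuine model call and real tools.

```python
# Bare-bones ReAct-style loop: the model's output is parsed for an Action, the
# tool result is appended as an Observation, and the loop repeats until the
# model emits a Final Answer. `call_llm` and the `search` tool are placeholders.

REACT_PROMPT = (
    "Answer the question. You may use the tool 'search'.\n"
    "Use the format Thought / Action / Action Input / Observation, repeating as\n"
    "needed, then finish with 'Final Answer: ...'.\n\n"
)

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; here it immediately 'knows' the answer."""
    return "Thought: I now know the final answer\nFinal Answer: 42"

TOOLS = {"search": lambda query: f"[search results for: {query}]"}

def react(question: str, max_steps: int = 5) -> str:
    scratchpad = f"Question: {question}\n"
    for _ in range(max_steps):
        response = call_llm(REACT_PROMPT + scratchpad)   # Thought / Action / ...
        scratchpad += response + "\n"
        if "Final Answer:" in response:
            return response.split("Final Answer:")[-1].strip()
        if "Action:" in response and "Action Input:" in response:
            tool = response.split("Action:")[1].split("\n")[0].strip()
            arg = response.split("Action Input:")[1].split("\n")[0].strip()
            result = TOOLS.get(tool, lambda q: "unknown tool")(arg)
            scratchpad += f"Observation: {result}\n"      # feed the result back in
    return "No final answer within the step budget."

print(react("What is six times seven?"))
```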

  • @SameerGilani-zy6sf
    @SameerGilani-zy6sf 6 months ago

    I am not able to install langchain.experimental.plan_and_execute. Can you plz help me

  • @joshuacunningham7912
    @joshuacunningham7912 6 months ago

    Dear @LucidateAI, Pay no attention to @avidlearner8117. They obviously lack a fundamental understanding of business and public social interaction. I am very appreciative of your content and always look forward to it.

  • @avidlearner8117
    @avidlearner8117 6 months ago

    OK, so you went from analysis to pushing your product on every new video? SMH...

    • @lucidateAI
      @lucidateAI 6 months ago

      Don't break your neck!

    • @avidlearner8117
      @avidlearner8117 6 months ago

      @@lucidateAI Oh, I hit a nerve. Get it?

    • @lucidateAI
      @lucidateAI 6 months ago

      Then I'd stop shaking if I were you!

    • @avidlearner8117
      @avidlearner8117 6 months ago

      @@lucidateAI You thought I was talking about my neck! Ah well.

    • @lucidateAI
      @lucidateAI 6 months ago

      And a beautiful neck it is, I'm sure! @@avidlearner8117