Stable Discussion
  • 36 videos
  • 55,876 views
Fine Tuning is Still a Waste of Time
Today we delve into the limitations of fine tuning AI models and explore new studies highlighting its effects on model accuracy and hallucinations. We also discuss innovative new RAG capabilities, including context caching, and their potential to transform AI development. Lastly, we touch on the influence of hype in machine learning and its various impacts on the field.
00:00 Introduction to Fine Tuning
00:43 New Studies on Fine Tuning
04:46 Mapping the LLM's Brain
07:43 Revolutionary RAG Capabilities
14:22 The Hype in Machine Learning
18:28 Conclusion and Final Thoughts
Show Links:
Previous video on Fine Tuning - ruclips.net/video/jd7h2vm7SFw/видео.html
Fine Tuning Hallucinations paper - arxiv.or...
Views: 195

Videos

Enabling Frontend Engineers to Build AI Features - APIs and IPAs Toronto Talk
261 views · 1 month ago
Today I gave a talk where I share insights on how to enable frontend engineers to build new AI features. I discuss my journey from backend development to a customer-centric approach and highlight the importance of integrating AI into frontend processes. I explore the challenges and opportunities in the AI space, emphasizing the need for customer-centric innovation and collaboration. Key tools ...
Create Prototypes that Match your Design Definition Fast Using Claude and Claude Artifacts
537 views · 2 months ago
Ever wanted to just create an MVP or an idea yourself without needing the technical knowledge required? This seems like it might be just around the corner with Claude Artifacts, an amazing AI tool that can create real working code using a ChatGPT style generation. Today we dive into an example of how to get started creating a set of design files that can help you to quickly and easily get start...
Doesn't it feel like Figma's AI is missing something?
678 views · 2 months ago
In today's episode, we compare Figma's latest AI features, announced at Config 2024, with Claude's new Sonnet 3.5 model. We explore how Figma's enhancements assist designers in automating tasks like naming layers, performing searches, and generating designs, versus Claude's innovative approach for non-specialists to quickly prototype interactive POCs and demos. We also evaluate the impact of th...
PDF Parsing has changed in GPT-4o - 1000 Subscriber Highlight
3.7K views · 3 months ago
Today we're here to celebrate reaching 1000 subs on Stable Discussion! Thanks so much for following and tuning in for every new video! We're excited that OpenAI released their latest model to help us celebrate and today we talk about how this latest model has completely changed how we think about PDF Parsing. While we know that correctly parsing information out of PDFs is critical for creating ...
AI Still Can't Code Alone
615 views · 4 months ago
Today we dive into what makes an AI incapable of coding entire solutions without developer guidance, and why AIs have become helpful assistants rather than junior engineers on software teams. Problem domains have a lot to do with it, and we discuss where these problems occur and why. Links: Devin announcement: www.cognition-labs.com/introducing-devin Devin debunk: ruclips.net/video/tNmgmwEtoWE/в...
The AI Note Taking Powerhouse - Obsidian
10K views · 5 months ago
Today we dive into how to enhance our note taking experience using modern LLMs like ChatGPT, Gemini, and Claude. Obsidian is uniquely positioned to enable you to enhance your notes with very little overhead and at no cost (outside what AI tools you pay for). Check out how to get things setup and get started empowering yourself with a unique note taking setup! Show Links: Obsidian: obsidian.md/ ...
Staying Creative in 2024 - Stable Discussion Podcast - Episode 9
104 views · 6 months ago
The podcast returns to discuss the significant advancements in AI image generation and video creation technologies, focusing on MidJourney 6 for its photorealism and stylistic capabilities, DALLE 3's improvements alongside ChatGPT, and the emergence of Stable Diffusion 3. They highlight the rapid maturation of image generators, mentioning developments in real-time generation and the potential a...
Google Gemini and The Possibilities of Massive Context Windows
300 views · 6 months ago
Today we're discussing what the changes to context size will mean for retrieval augmented generation and how Google Gemini is completely changing how we architect AI applications using generative models. Google Gemini - gemini.google.com/ Gemini 1.5 Announcement - blog.google/technology/ai/google-gemini-next-generation-model-february-2024 Simon Willison showing Gemini Video - simonwillison.net/20...
Opportunities to Refine RAG: Evaluating Your Options
235 views · 7 months ago
Today we dive into how RAG (Retrieval Augmented Generation) can be improved and optimized. There are a number of patterns evolving around these systems, and it's more important than ever to understand the fundamentals of how all of these tools work. Stay tuned for a longer deep dive into the important aspects of these tools. Stable Discussion Article on RAG: blog.stablediscussion.com/p/conversati...
Avoiding HypeGPT: Navigating The AI LLM Hype
1.1K views · 7 months ago
Stable Canvas: A New AI Art Interface for Exploration
147 views · 7 months ago
ChatGPT Shows How Most AI Tech Has Been Unused for Years!
1.8K views · 8 months ago
Speaking the Right Language with ChatGPT and Other AI Models
204 views · 8 months ago
Why Your GPT Prompts Always Fail and How Measurement Can Help
422 views · 8 months ago
Will AI Agents Take Jobs from Programmers?
566 views · 9 months ago
Claude 2.1 and GPT-4 Turbo Miss The Mark on Large Contexts
2K views · 9 months ago
Returning to AI - Stable Discussion Podcast - Episode 8
105 views · 9 months ago
Fine Tuning ChatGPT is a Waste of Your Time
22K views · 9 months ago
Should Devs Worry About OpenAI?
1.2K views · 9 months ago
Will Junior Devs Survive AI and ChatGPT?
1.1K views · 9 months ago
Learning AI as a JavaScript Developer
289 views · 9 months ago
The OpenAI API is better than ChatGPT
5K views · 9 months ago
Don't just use ChatGPT with your PDFs
1.1K views · 9 months ago
What is AI: Beyond ChatGPT
50 views · 1 year ago
Meeting AI Innovators - Stable Discussion Podcast - Episode 7
55 views · 1 year ago
Humanism Meets AI - Stable Discussion Podcast - Episode 6
36 views · 1 year ago
AI Research for the Rest of Us - Stable Discussion Podcast - Episode 5
48 views · 1 year ago
Take Your AI To Work Day - Stable Discussion Podcast - Episode 4
86 views · 1 year ago

Comments

  • @blackblather
    @blackblather 3 days ago

    Good point

  • @Fs3i
    @Fs3i 6 days ago

    I’d say the solution is ordering in the answer (names and places first), or two independent queries. Independent queries have a disadvantage, though: you pay twice for the input tokens (with the current on-demand pricing), which is less than desirable. The advantage of independent queries is lower answer latency. And with cheaper models, you can often afford the double input tokens, depending on the use case.

    • @StableDiscussion
      @StableDiscussion 5 days ago

      Yeah, there are some aspects that need to be balanced. In the source video we connect this issue to Anthropic's new feature that caches context for a 90% cost reduction when requesting against a known context. That really opens this problem up to making more small requests, rather than trying to tune one prompt that returns a lot of different data points.
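The 90% figure in this reply makes the trade-off easy to estimate. A minimal back-of-the-envelope sketch, using purely illustrative per-token prices (not any provider's actual rates):

```python
# Back-of-the-envelope cost model for context caching.
# PRICE, CONTEXT, and QUESTION are illustrative assumptions.

def request_cost(context_tokens, question_tokens, price_per_token,
                 cached=False, cache_discount=0.90):
    """Input-token cost of one request; a cached context is discounted."""
    context_price = price_per_token * ((1 - cache_discount) if cached else 1)
    return context_tokens * context_price + question_tokens * price_per_token

PRICE = 3e-6          # assumed $ per input token
CONTEXT = 100_000     # large shared context
QUESTION = 200        # tokens per individual question

# Ten small questions against a cached context vs. ten uncached requests.
cached_total = sum(request_cost(CONTEXT, QUESTION, PRICE, cached=True)
                   for _ in range(10))
uncached_total = sum(request_cost(CONTEST if False else CONTEXT, QUESTION, PRICE)
                     for _ in range(10))

print(f"cached:   ${cached_total:.3f}")
print(f"uncached: ${uncached_total:.3f}")
```

Under these assumed numbers the cached approach costs roughly a tenth as much, which is why many small cached requests can beat one large tuned prompt.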

  • @techracoon7180
    @techracoon7180 10 days ago

    Cool but fine tuning is a necessary tool if you want to lock domain specific information that doesn't change frequently into the model while freeing up the context window for more dynamic content. An example: I want to make an AI model that generates quests in a game. For this I need to finetune the model to have the basics of the game universe and such and free up the context window to include the information that is coming from the game world, such as population of each territory, which faction controls which places, the user's location and progress, etc.

    • @StableDiscussion
      @StableDiscussion 8 days ago

      Thanks for the comment, but I'm unconvinced it's a good idea for locking in a domain unless you have a very specific way you want it to answer. Say, in your example, you want it to structure quests in a specific way with enum values or other formatting that needs to be adhered to. That could be a good use of fine tuning, but you might see a drop in overall quest creativity. I'd find a RAG-like approach that only pulls in context about the world at quest generation time to be a better and more scalable approach. You are in control of the factors and can adjust and change how you add context as you tune the game you're creating. This pushed me to summarize and put out another post on this topic, which leverages your example in some of my thinking: ruclips.net/video/ZI0ujkLhlCY/видео.html

    • @techracoon7180
      @techracoon7180 8 days ago

      @@StableDiscussion Thank you for your reply. As I understand your explanation, with fine-tuning I would be unable to lock the extra domain-specific data into the model; I would only be able to teach it subtle formatting and similar things. I agree with that after some investigation, and given the fact that I don't have labeled data for it. A RAG approach would fit this use case much better indeed. Thank you for the clarification; good videos overall.
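The approach this thread converges on, pulling only the relevant world state into the prompt at quest-generation time instead of fine-tuning the lore into the model, can be sketched roughly as below. All of the world data, names, and the keyword-overlap retrieval are hypothetical stand-ins (a real system would likely use embeddings):

```python
import re

# Hypothetical live world state; in a game this would come from the engine.
WORLD_FACTS = [
    "The Iron Pact controls the northern mines.",
    "Riverhold's population has fallen to 1,200 after the flood.",
    "Bandits raid the eastern trade road at night.",
    "The Mage Guild offers bounties in Riverhold.",
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, facts: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval, standing in for vector search."""
    q = tokens(query)
    return sorted(facts, key=lambda f: -len(q & tokens(f)))[:k]

def build_prompt(player_state: dict) -> str:
    query = f"{player_state['location']} {player_state['faction']}"
    context = "\n".join(retrieve(query, WORLD_FACTS))
    return (
        "You are a quest generator.\n"
        f"Relevant world state:\n{context}\n"
        f"Player: level {player_state['level']} at {player_state['location']}.\n"
        "Generate one quest consistent with this state."
    )

prompt = build_prompt({"location": "Riverhold", "faction": "Mage Guild", "level": 7})
print(prompt)
```

Because the world state lives outside the model, changing a faction's territory is a data update, not a retraining run.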

  • @llvkv
    @llvkv 28 days ago

    Do you use a native AI note app like Mem or Saner?

    • @StableDiscussion
      @StableDiscussion 8 days ago

      I've never tried either of those. I generally like seeing inside the box and having a bit more insight into how the tools that operate on my notes work. That said, Mem looks pretty cool!

  • @larsfaye292
    @larsfaye292 1 month ago

    You guys are producing some of the best conversations out there around AI and web development! And your speaking style is simply top notch.

  • @petportal_ai
    @petportal_ai 1 month ago

    Great content, and we appreciate the Pet Portal AI shout-out in the presentation! You've been incredible to work with and have really helped bring our vision to life! Love the way you break down and simplify concepts AND processes (i.e., Sanity 😊)

  • @scottvickrey2743
    @scottvickrey2743 1 month ago

    Thanks for your in-depth explanation. It has an effect.

  • @droidtafadzwa5545
    @droidtafadzwa5545 1 month ago

    You got yourself a new subscriber

  • @chrisbreault81
    @chrisbreault81 1 month ago

    Without any prior coding experience, I ambitiously tried to build a solution outside of an AI framework. The main issue was the inconsistent quality of my images, which made it difficult to extract text from PDFs and scanned images accurately. This often led to mistakes. Fortunately, I received some tips and tricks from other YouTube videos using some pretty powerful Python libs. If you'd like, feel free to send me a few pages you're having trouble with, and I'll run them through my text script. I'll let you know what's working and recommend some other YouTubers that might have helpful vids for you. FYI, this is some of the first code I wrote, so... yeah, it's a mess, but I'm pretty confident I might be able to help (for free, obviously).

  • @leah.internet
    @leah.internet 2 months ago

    Spot on. I'm a product designer who for the last year has switched to mostly dev. I've always done a little bit of html/css but now with Claude, I'm actually able to bypass the great divide and push front-end.... and backend code. And it's not absolutely terrible either, crazy!

  • @m.x.
    @m.x. 2 months ago

    Good luck iterating over the initial design/code/test :)

    • @StableDiscussion
      @StableDiscussion 2 months ago

      Iterations are actually better than I’m used to seeing in AI coding and I think that’s partly due to the speed of feedback. There’s still a lot of the caveats that I mentioned in my AI coding video but it is amazing for isolated interface design which makes up a significant chunk of product work on web. Coding Video for reference: ruclips.net/video/BxKXSlc759Y/видео.htmlsi=fnqeV0mzZ9emt9SL

  • @EvansOasis
    @EvansOasis 2 months ago

    Very well done explaining the uses of this tool! I believe future versions of it will change how we go about everything, and a big step will be the ability to visualize and interact with these connections. I wanted to let you (and whoever sees this) know that I've collaborated with Brian (creator of Smart Connections) and released the official plugin companion: "Smart Connections Visualizer". For this first version, it's an Obsidian graph view that shows relevant connections to your current note. I'll still be adding much, much more to it, including the ability to customize how you see things like no other! Give my channel a peek if you want to find out more. If it's not too much, could you pin this comment to let people know about this tool to enhance their experience with SC?

  • @Saintel
    @Saintel 3 months ago

    If you use ChatGPT does that not just make your Obsidian notes no longer private?

    • @StableDiscussion
      @StableDiscussion 2 months ago

      Sure, there’s some risk there. If you’re leveraging the API and this risk is meaningful to you, you can request to use it with a zero retention policy: community.openai.com/t/does-gpt-api-keep-data-acquired-from-client-request-private/315844/5

    • @koska3
      @koska3 2 months ago

      I think it wouldn't be too much of a problem, unless you're a criminal

    • @Saintel
      @Saintel 2 months ago

      @@koska3 Wanting privacy does not make you a criminal.

    • @Saintel
      @Saintel 2 months ago

      @@StableDiscussion Thanks :)

  • @ramakrishnaprasadvemana7833
    @ramakrishnaprasadvemana7833 3 months ago

    This is good. A working tutorial will help viewers understand the nuances more deeply and start applying the concepts learnt. And some of them will come back and enrich everyone with their experience.

  • @yanrongliao846
    @yanrongliao846 3 months ago

    Hello, I don't know the principle of the pipeline. I wonder how I can build a dedicated law GPT. I just uploaded some PDFs, each about 10 MB, but I couldn't get GPT to answer my question even though the PDFs are so precise.

  • @AdnanAli
    @AdnanAli 3 months ago

    Congratulations. There are tools like Langsmith now that can be used to show the chain used by the GPT.

    • @StableDiscussion
      @StableDiscussion 3 months ago

      Thanks! And thank you for watching! I like Langsmith, but if you're on an old version of Langchain I think it won't be compatible with the current method usage, as the API has changed a lot, especially with old workarounds from 7 months ago. I couldn't even get the old app building, so it was unlikely I'd be able to add tooling on top. But thanks for the recommendation!

  • @DevulNahar
    @DevulNahar 3 months ago

    This is pretty cool. Can you give a tutorial of the PDF ingestion pipeline?

    • @StableDiscussion
      @StableDiscussion 3 months ago

      Thanks for watching! Hoping to dive into more detailed work at a future milestone. Will update here when that comes!

  • @MichealScott24
    @MichealScott24 3 months ago

  • @petportal_ai
    @petportal_ai 3 months ago

    Congratulations on the 1k subscriber milestone! Well deserved with such great content. Turning my notifications on!

  • @TechAtScale
    @TechAtScale 4 months ago

    Have you taken a look at Amazon Q with dev mode in an IDE yet?

  • @MichealScott24
    @MichealScott24 4 months ago

    🫡❤

  • @marketfarm
    @marketfarm 4 months ago

    "hallucinates a guess". I like that. 😆

  • @Maisonier
    @Maisonier 4 months ago

    Is there any way to use the GPU for this?

  • @101RealTalker
    @101RealTalker 4 months ago

    My vault is over 3 million words across 2k+ files, all geared towards one project. I was excited to use the Co-Pilot plugin because it advertises "vault mode", but was quickly disappointed when its default reference amount was only 3 notes/files at a time (lol), and it can only stretch to 10, with a warning that it will prolly screw up the responses. I want to communicate with my vault as a whole for perfect macro context, but it seems my use case is still not possible with current AI? SmartConnections doesn't seem to be any better, or am I mistaken?

    • @StableDiscussion
      @StableDiscussion 4 months ago

      Sweet! That's a good size! Smart Connections is similar but breaks files into blocks. You'll pull several related references from within the context of some files, which performs better but may take things out of context. This problem is actually generally not so much about what AI is capable of today; it's that general (solve-all) solutions often don't perfectly fit the problem space or some specific domain. It's early days, and everyone is still figuring it out. Some new models can handle a lot of context, but there's no infinite search over your notes yet, though one solution could come along that makes you not care much about that.

  • @CitizenWarwick
    @CitizenWarwick 4 months ago

    We had a well crafted GPT4 prompt with many tests covering our desired outputs. We took gpt35 and fine tuned it and now it's performing the same. Worked well for our use case!

    • @YanMaosmart
      @YanMaosmart 3 months ago

      Can you share how many examples you used to fine-tune? I used around 200 examples, but the fine-tuned model still doesn't work quite well.

    • @CitizenWarwick
      @CitizenWarwick 3 months ago

      @@YanMaosmart Around 600, though I guess success depends on the expected output; we output JSON and our prompt is conversational.

  • @NLPprompter
    @NLPprompter 5 months ago

    Dude, try Obsidian Copilot. It can use Ollama, and with that you can use Dolphin Mistral, which is an uncensored AI with no guardrails; this way we can be creative with texts in any way we like.

    • @StableDiscussion
      @StableDiscussion 5 months ago

      Thanks for the suggestion! I like the look of that, but I don't see a lot of activity on the GitHub. Still, it seems nice to be able to bring in other models, especially local models. Dolphin has been pretty fun to play with too.

  • @NDnf84
    @NDnf84 5 months ago

    Almost none of this is AI.

    • @StableDiscussion
      @StableDiscussion 5 months ago

      Right?! We make our own stuff here with very little generation of content. Thanks for noticing ❤️

  • @AnimusOG
    @AnimusOG 5 months ago

    Well done, my man. Keep it up. Your content is valuable because your explanations are excellent!

  • @yongyu2032
    @yongyu2032 5 months ago

    Hi! Good video! I'm wondering: since you're using some of the OpenAI services, you're essentially not running the model locally, right? Is there a local embedding model and LLM available for this, such as Llama 2 or Mistral? Thanks! Awesome video.

    • @StableDiscussion
      @StableDiscussion 5 months ago

      Thanks! Glad you enjoyed it! Local embeddings are definitely available. Local models currently don't seem supported, but you can proxy requests in these plugins, and that could make something like that work too.

  • @marketfarm
    @marketfarm 5 months ago

    Yes! I’m ready to upload my decades of original notes and content into Obsidian and instruct my LLM to cull through them to create new material based on my own source material. Come on, pick my brain.

  • @needsmoreghosts
    @needsmoreghosts 5 months ago

    Aye, very insightful, and something I also picked up from using Stable Diffusion a lot. One thing that's helped a lot with prompting ChatGPT is the '-' dash separator. It's a good way to make sure words are not linked together as tokens, as 'space dash space' is often just one separate token in and of itself. If there are two seemingly odd words that I want to separate in a prompt, it seems to give a decent indication. I would say with JSON formatting this can actually be pretty decent, as it's a clean, well-understood set of characters with clear meanings; [1,2,3] or whatever value is probably easily interpretable.

  • @needsmoreghosts
    @needsmoreghosts 5 months ago

    Oh this is just what I was looking for! A really great breakdown of this, thank you bud. Didn't even realise that something like Obsidian existed... Here I am doing api calls through vscode like a pleb, definitely time to move!

    • @StableDiscussion
      @StableDiscussion 5 months ago

      Glad to hear! Check back in when you've had a chance to try it out!

  • @rfilms9310
    @rfilms9310 5 months ago

    "you to curate data to feed AI"

  • @korbendallasmultipass1524
    @korbendallasmultipass1524 5 months ago

    I would say you are actually looking for embeddings. You can set up a database with embeddings based on your specific data, which will be checked for similarities. The matches would then be used to create the context for the completions API. Fine tuning is more about modifying the way it answers. That was my understanding.
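The pipeline this comment describes (embed your documents, find the closest matches to the query, and stuff them into the completion prompt) can be sketched in a few lines. This toy version uses bag-of-words counts and cosine similarity as a stand-in for real embedding vectors; the documents and query are hypothetical:

```python
import math
import re
from collections import Counter

# Toy "embedding": word-count vector. A real system would call an
# embedding model and store vectors in a vector database.
def embed(text: str) -> Counter:
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(count * b[word] for word, count in a.items())
    na = math.sqrt(sum(c * c for c in a.values()))
    nb = math.sqrt(sum(c * c for c in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping takes 5 to 7 business days within the EU.",
    "Our office is closed on public holidays.",
]
index = [(doc, embed(doc)) for doc in docs]

def top_matches(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    return [doc for doc, vec in sorted(index, key=lambda p: -cosine(q, p[1]))][:k]

best_doc = top_matches("how do I return an item for a refund?")[0]
context = f"Answer using this context:\n{best_doc}"
print(context)
```

The resulting `context` string is what would be prepended to the completions request, which is exactly the split the comment draws: retrieval supplies the facts, while fine tuning only shapes the answering style.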

  • @MrAhsan99
    @MrAhsan99 5 months ago

    thanks for the insight

  • @kingturtle6742
    @kingturtle6742 6 months ago

    Can the content for training be collected from ChatGPT-4? For example, after chatting with ChatGPT-4, can the desired content be filtered and integrated into ChatGPT-3.5 for fine-tuning? Is this approach feasible and effective? Are there any considerations to keep in mind?

    • @dawoodnaderi
      @dawoodnaderi 4 months ago

      All you need for fine-tuning is samples of a "very" desirable outcome/response. That's it. It doesn't matter where you get it from.

  • @Charles-Darwin
    @Charles-Darwin 6 months ago

    This struck me too; it seems not many quite noticed the gravity of the massive context. Not sure if you saw it, but there's a paper on arXiv titled "World Model on Million-Length Video And Language With RingAttention", published a day or two before Sora and Google's Gemini 1.5 were announced. They show results with near-perfect retrieval.

  • @chrismann1916
    @chrismann1916 6 months ago

    Question, as it relates to needle-in-the-haystack performance: the research you quote is very recent work, but unfortunately this damn space moves so freaking fast. (Or maybe this is a statement rather than a question.) From what I understand from the Google blog post, Gemini uses a new Mixture-of-Experts (MoE) architecture, which apparently delivers much improved processing/understanding of long context, and they are quoting crazy needle-in-the-haystack results. At the same time, GPT-4 has a hard time processing moderately complex prompts (well within the token range in which it performs well on the needle test), so as a builder with road rash I have a raised eyebrow! What am I missing?! :-)

    • @StableDiscussion
      @StableDiscussion 5 months ago

      You’re quite right, this space moves fast! There are a lot of things we’re finding now that we have access to Gemini. Specifically, the way it handles controllability and context is very interesting. I think the way these AIs are measured is going to continue to evolve. The needle-in-a-haystack is a great test, but we’re going to have to see what cost that optimization carries.

  • @christinawhisler
    @christinawhisler 6 months ago

    Is it a waste of time for novelists too?

  • @MaxA-wd3qo
    @MaxA-wd3qo 6 months ago

    Why, why such a tiny number of subscribers? This is a very much needed approach to problems: to say "wait a minute... here are the stones on the road."

  • @chrismann1916
    @chrismann1916 6 months ago

    Brother, one of the best breakdowns of this topic I've found yet.

  • @DigitalLibrarian
    @DigitalLibrarian 6 months ago

    The kung fu panda movies are pretty good.

  • @anonymeforliberty4387
    @anonymeforliberty4387 6 months ago

    I didn't get your point. You showed us a diagram with two system prompts, one before and one after the user prompts, to control them. But in your code example, we didn't see that in action.

  • @tecnopadre
    @tecnopadre 6 months ago

    Sorry, but why then is it a waste of time? It wasn't clearly explained or finally mentioned, as far as I've listened.

  • @Hex0dus
    @Hex0dus 7 months ago

    I really like the video and your style, so count me in as a subscriber. But showcasing Deno inside of a Jupyter notebook and trying it out for myself was a big disappointment, as Deno can't track the values between cells, and therefore there is no autocomplete or IntelliSense at all. I see in your video that it's the same in your installation.

    • @StableDiscussion
      @StableDiscussion 7 months ago

      Thanks for the kind words! It’s definitely a mixed bag. The ability to have a consistent running environment in which to execute scripts makes iteration on AI solutions much more efficient. I found that oftentimes I would be re-executing costly generations because I needed to return to a desired state to test some aspect of my solution; notebooks are better at this task. But the integration still leaves much to be desired, and I totally think the lack of type consistency, autocomplete, and other core dev features makes it difficult to appreciate or enjoy.

  • @Bboreal88
    @Bboreal88 7 months ago

    Hi, new sub here. Do you have any video teaching how to build a small language model, or how to use RAG to get a very simple conversation project demo going?

    • @StableDiscussion
      @StableDiscussion 7 months ago

      Thanks for the subscription! We probably won’t be doing a video creating a small language model, but we may look at adding RAG videos in the future. If you’re looking for something simple, you may refer to the code I referenced in the video. That’s a good starter for RAG concepts, and you can look to optimize and break it down later.

  • @markburton5318
    @markburton5318 7 months ago

    Very timely. I was just experimenting earlier with generating RFP responses based on past winning bids and corporate knowledge bases. The RAG is returning irrelevant docs even when there are very relevant docs. On other use cases, RAG seemed to work fairly well. I think it has to do with obscure concepts in the RFP questions throwing off the vector distances. The buyers are, after all, trying to separate the good from the best. I will try some of these options as quick experiments, but I’m going to have to dive deep into the vector distances, etc., properly diagnose, and build solid evaluation criteria, unit tests, and monitoring KPIs for this stage of the pipeline. And try fine tuning for this use case.

    • @StableDiscussion
      @StableDiscussion 7 months ago

      Sounds like it! Glad you found value in the overview! I’d imagine RFP generation wholesale is a very tricky space due to hallucinations. RAG will definitely help you in some cases, but I’d imagine it would have a hard time keeping track of a firm’s competencies in relation to winning-bid competencies. I’d look to reframe towards overall strategic direction rather than diving into the details.

  • @user-bd8jb7ln5g
    @user-bd8jb7ln5g 7 months ago

    Tinkering builds skills. Most people search for knowledge, but it's skills that they want, and skills will never be learned by watching an endless stream of social media, tutorials, and lectures. It's doing vs. knowing. Skills = applied knowledge = the ability to do/create something.

  • @user-bd8jb7ln5g
    @user-bd8jb7ln5g 7 months ago

    AI hype is big business. After watching a lot of AI dedicated channels for almost a year, I have learned to mainly view them with distrust. They hype everything AI to death, never honestly discussing limitations or problems, nor the solutions to the problems. Sources of good information are not mentioned, moreover they rarely have insight into what works and doesn't. Just superficial regurgitation of press announcements. This isn't true of everyone, but many.

  • @bigbadallybaby
    @bigbadallybaby 7 months ago

    Using ChatGPT-4, it's an odd mix of mind-blowing capability, nuance, depth, and subtlety in its answers, but then often just really dumb predictive text where it has zero understanding.

    • @NDnf84
      @NDnf84 5 months ago

      It's not capable of truly 'understanding' anything.