Pydantic AI + DeepSeek V3 - The BEST AI Agent Combo

  • Published: 21 Jan 2025

Comments • 92

  • @ColeMedin
    @ColeMedin  16 days ago +8

    Think you have what it takes to build an amazing AI agent? I'm currently hosting an AI Agent Hackathon competition for you to prove it and win some cash prizes! Register now for the oTTomator AI Agent Hackathon with a $6,000 prize pool! It's absolutely free to participate and it's your chance to showcase your AI mastery to the world:
    studio.ottomator.ai/hackathon/register

  • @davidjourno6929
    @davidjourno6929 15 days ago +6

    PydanticAI is such a breeze! Great content

    • @ColeMedin
      @ColeMedin  14 days ago

      It sure is! Thanks David!

  • @danielgarnierfernandez8277
    @danielgarnierfernandez8277 8 days ago +1

    Great to see such a detailed walkthrough on building AI agents. Have you considered exploring frameworks like KaibanJS for managing multi-agent workflows? Its Kanban-style visualization could complement the setup you're building, especially for organizing agent tasks across repositories.

    • @ColeMedin
      @ColeMedin  7 days ago

      Thank you! I haven't tried KaibanJS before, I'll have to check it out!

  • @copilotai
    @copilotai 16 days ago +6

    Hi Cole, how do you make Cursor/Windsurf learn the Pydantic AI framework? Where do you provide that info?

    • @ColeMedin
      @ColeMedin  14 days ago +1

      Good question! Typically I follow the framework documentation and code that part myself, then use the AI coding assistant to build the tools, because Cursor/Windsurf doesn't know these frameworks very well.
      However, you can try to teach it the framework (or at least the small part you are working with) by pasting documentation into the prompt and then following up with what you want to build.

    • @philroxxx
      @philroxxx 13 days ago +1

      @@ColeMedin why not add the docs to the IDE?

    • @ColeMedin
      @ColeMedin  13 days ago

      You definitely can! And I've tried that to an extent before. But typically the LLM starts to get confused when the conversation history gets really long as it analyzes all of these files + the LLM doesn't always look at the right docs.

    • @philroxxx
      @philroxxx 12 days ago

      @@ColeMedin Sounds as if we need a VS Code extension that does that and gives the output to Cursor/Windsurf/Continue:
      an agent system within IDEs to get the docs right.

  • @TheMrAi_com
    @TheMrAi_com 10 days ago +1

    It is one of the best builds out there! Amazing!

    • @ColeMedin
      @ColeMedin  8 days ago

      Wow, thanks! That means a lot!

  • @kenneththompson4450
    @kenneththompson4450 16 days ago +1

    Can’t wait for the next video!

  • @andrewandreas5795
    @andrewandreas5795 1 day ago +1

    Nice video, thank you. How does Pydantic compare to swarm? Have you tried Smolagents or Phidata?

    • @ColeMedin
      @ColeMedin  21 hours ago

      Thank you, you bet! Pydantic gives you a lot more control than other frameworks like Swarm, CrewAI, Phidata, etc. That's why I prefer it in general since I don't ever hit a wall of wanting to customize something I can't. I haven't tried Smolagents yet but I want to soon!

  • @kaifeinberg6056
    @kaifeinberg6056 14 days ago +1

    This looks sick Cole. Any chance you could include a dockerfile in the next one for us?

    • @ColeMedin
      @ColeMedin  13 days ago

      Thank you! I already have the next video uploaded - so a bit late haha. But I do plan on doing this for my agents in the near future! I already took the "Sample Python Agent" you can find in the repo and created a Dockerfile for that!

  • @BilBini
    @BilBini 6 days ago +1

    How did you make that presentation? I want to use it for my university project 😭😭

    • @ColeMedin
      @ColeMedin  5 days ago

      I use a platform called Prezi!

  • @DesignMitho
    @DesignMitho 16 days ago +3

    I'm looking to build AI agents to be used by attorneys. Would that be possible with Pydantic AI? Is it production-grade?

    • @ColeMedin
      @ColeMedin  16 days ago +3

      Good question! I'm thoroughly impressed with Pydantic AI recently and I'd be comfortable using it for any production-grade agent. So yes!

    • @patruff
      @patruff 16 days ago

      What would you want the agents to do? I'm guessing summarize documents. My guess is you don't want an agent per se but a workflow. Like Cole mentioned you probably want a low/no code tool for your workflow.

    • @patruff
      @patruff 16 days ago +1

      Pydantic, I think, would be too advanced for what you want, but I'm just guessing.

    • @DesignMitho
      @DesignMitho 16 days ago +2

      @@patruff I'm trying to build an agentic RAG that lawyers can use to summarise, compare, draft documents, etc. I've used Bolt to develop something but was never able to make it run.

    • @patruff
      @patruff 16 days ago +3

      @DesignMitho Yeah, like Cole mentioned, you could use n8n, hook up some tools, and maybe store relevant variables from the text in a database. It's more of a workflow where you put your documents in (Drive is already hooked up) and you want the metadata and structured output out. So I'd say Pydantic would be more if you really want to dig in and customize things, but for now try to use n8n and copy a workflow similar to what you want.

  • @AsdrubalRVelasquezLagrave
    @AsdrubalRVelasquezLagrave 6 days ago +1

    This is a great video!! Thanks

  • @PyJu80
    @PyJu80 6 days ago +1

    Please excuse my naivety, but would it be possible to use the Pydantic AI agent (from your Crawl4AI video) as a tool for an n8n RAG agent? For instance, you have a RAG (n8n) agent that creates code, but it calls the Pydantic AI Crawl4AI tool to assist in creating Pydantic AI code for custom agents. I might have got your video wrong, but how it would work is: open a chat in n8n, ask the RAG agent to build me a custom agent, and the RAG agent then uses the Pydantic AI agent to assist with the code generation.
    I think I've confused myself. But essentially it would be a coding agent that has access to my Crawl4AI agent tool and gets the documentation from the Crawl4AI agent. Then again, I think that is what you have built, as the pages the Crawl4AI agent crawls do go to Supabase and are stored in the knowledge base to answer questions about them. The only thing is that when I ask the RAG agent (in the Studio) to build me a custom Pydantic agent, it can't; it can only answer questions based on the crawled data.
    If that confuses you I'm sorry, but why can we not get the agent to code custom Pydantic AI agents using the documentation from the Pydantic AI agent? Or maybe have the Pydantic AI agent as a 'model' in bolt.diy. I'm lost, so if you are it's cool. 😅😅
    PS: On reflection, if the crawled pages are going to my Supabase anyway, surely my n8n RAG would have access to them? Then customise the prompt for the agent to always reference those pages before generating the code.

    • @ColeMedin
      @ColeMedin  5 days ago

      Love where your head is at with this! Going from just being a documentation expert to actually being able to create the agents with Pydantic AI is certainly the next evolution for this agent.
      Your "PS" at the end speaks to what I was going to say mostly. You are correct - you can use Python with Crawl4AI to create the knowledge and store it in Supabase, and then create an n8n agent that leverages that knowledge with RAG. And include in the system prompt to always perform RAG to reference the documentation before creating any code like you said.
      Also, the agent currently is prompted very much to be a documentation expert and not a coder, which is why you aren't able to get it to generate agents at this point. But that all comes down to changing up the system prompt mostly!

    • @PyJu80
      @PyJu80 5 days ago

      @@ColeMedin I will never do this again (C&P code), but this is sort of working. It called OpenRouter (verified in the activity) and DeepSeek created me a Pydantic agent using the Pydantic docs I crawled. I did go to the source code for the GitHub expert (from your Studio), but it took me to the n8n version (check the link in your Studio for the source).
      I did find the proper Pydantic one though, and now I'm going to try to get a Pydantic AI RAG agent that goes to the Pydantic AI docs (Supabase), then uses a repo of my choice to generate Pydantic AI code with DeepSeek. I am going to test it with the freqtrade repo and see if it can create me a trading RAG Pydantic agent.
      Sorry......
      from __future__ import annotations as _annotations
      from dataclasses import dataclass
      from dotenv import load_dotenv
      import logfire
      import asyncio
      import httpx
      import os
      import git  # unused here; cloning is done via os.system below
      from pathlib import Path
      from typing import List, Dict
      from pydantic_ai import Agent, ModelRetry, RunContext
      from pydantic_ai.models.openai import OpenAIModel
      from openai import AsyncOpenAI
      from supabase import Client

      # Load environment variables
      load_dotenv()

      # Constants for configuration (loaded from .env)
      OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
      SUPABASE_URL = os.getenv("SUPABASE_URL")
      SUPABASE_SERVICE_KEY = os.getenv("SUPABASE_SERVICE_KEY")
      LLM_MODEL = os.getenv("LLM_MODEL", "gpt-4o-mini")
      OPEN_ROUTER_API_KEY = os.getenv("OPEN_ROUTER_API_KEY")
      OPENROUTER_BASE_URL = os.getenv("OPENROUTER_BASE_URL", "https://openrouter.ai/api/v1")
      DEEPSEEKER_MODEL = os.getenv("DEEPSEEKER_MODEL", "deepseek/deepseek-chat")
      GITHUB_REPO_URL = os.getenv("GITHUB_REPO_URL")  # GitHub repository URL
      LOCAL_REPO_PATH = Path(os.getenv("LOCAL_REPO_PATH", "./repo"))

      # Configure OpenAI model
      model = OpenAIModel(LLM_MODEL)

      # Set up Logfire for logging
      logfire.configure(send_to_logfire='if-token-present')

      # Initialize Supabase client
      supabase = Client(SUPABASE_URL, SUPABASE_SERVICE_KEY)

      @dataclass
      class PydanticAIDeps:
          """Dependencies for the Pydantic AI agent."""
          supabase: Client
          openai_client: AsyncOpenAI

      # System prompt for the Pydantic AI expert
      system_prompt = """
      You are an expert in Pydantic AI, a Python framework for building AI agents.
      You have access to all documentation, examples, and an API reference to help users build Pydantic AI agents.
      Your primary functions are:
      1. Retrieve relevant documentation using RAG for given user queries.
      2. Use OpenRouter (DeepSeeker) to generate Python code based on user requirements and the retrieved documentation.
      Always prioritize accuracy and clarity.
      """

      # Instantiate the Pydantic AI expert agent
      pydantic_ai_expert = Agent(
          model,
          system_prompt=system_prompt,
          deps_type=PydanticAIDeps,
          retries=2
      )

      # Utility: Get embedding for RAG
      async def get_embedding(text: str, openai_client: AsyncOpenAI) -> List[float]:
          """Get embedding vector from OpenAI."""
          try:
              response = await openai_client.embeddings.create(
                  model="text-embedding-3-small",
                  input=text
              )
              return response.data[0].embedding
          except Exception as e:
              print(f"Error getting embedding: {e}")
              return [0] * 1536

      # Tool: Retrieve relevant documentation
      @pydantic_ai_expert.tool
      async def retrieve_relevant_documentation(ctx: RunContext[PydanticAIDeps], user_query: str) -> str:
          """Retrieve relevant documentation chunks using RAG."""
          try:
              query_embedding = await get_embedding(user_query, ctx.deps.openai_client)
              result = ctx.deps.supabase.rpc(
                  'match_site_pages',
                  {
                      'query_embedding': query_embedding,
                      'match_count': 5,
                      'filter': {'source': 'pydantic_ai_docs'}
                  }
              ).execute()
              if not result.data:
                  return "No relevant documentation found."
              formatted_chunks = []
              for doc in result.data:
                  chunk_text = f"# {doc['title']}\n{doc['content']}"
                  formatted_chunks.append(chunk_text)
              return "\n\n---\n\n".join(formatted_chunks)
          except Exception as e:
              print(f"Error retrieving documentation: {e}")
              return f"Error retrieving documentation: {str(e)}"

      # Tool: List documentation pages
      @pydantic_ai_expert.tool
      async def list_documentation_pages(ctx: RunContext[PydanticAIDeps]) -> List[str]:
          """Retrieve a list of all available Pydantic AI documentation pages."""
          try:
              result = ctx.deps.supabase.from_('site_pages') \
                  .select('url') \
                  .eq('metadata->>source', 'pydantic_ai_docs') \
                  .execute()
              if not result.data:
                  return []
              urls = sorted(set(doc['url'] for doc in result.data))
              return urls
          except Exception as e:
              print(f"Error retrieving documentation pages: {e}")
              return []

      # Tool: Get content of a specific documentation page
      @pydantic_ai_expert.tool
      async def get_page_content(ctx: RunContext[PydanticAIDeps], url: str) -> str:
          """Retrieve the full content of a specific documentation page."""
          try:
              result = ctx.deps.supabase.from_('site_pages') \
                  .select('title, content, chunk_number') \
                  .eq('url', url) \
                  .eq('metadata->>source', 'pydantic_ai_docs') \
                  .order('chunk_number') \
                  .execute()
              if not result.data:
                  return f"No content found for URL: {url}"
              page_title = result.data[0]['title'].split(' - ')[0]
              formatted_content = [f"# {page_title}\n"]
              for chunk in result.data:
                  formatted_content.append(chunk['content'])
              return "\n\n".join(formatted_content)
          except Exception as e:
              print(f"Error retrieving page content: {e}")
              return f"Error retrieving page content: {str(e)}"

      # Tool: Generate code using DeepSeeker via OpenRouter
      @pydantic_ai_expert.tool
      async def generate_code(ctx: RunContext[PydanticAIDeps], user_requirements: str, documentation_context: str) -> Dict[str, str]:
          """Generate code using DeepSeeker based on requirements and documentation."""
          try:
              headers = {
                  "Authorization": f"Bearer {OPEN_ROUTER_API_KEY}",
                  "HTTP-Referer": "https://github.com/your-repo",  # Replace with your actual referer
                  "X-Title": "Pydantic AI Agent"
              }
              deepseeker_client = AsyncOpenAI(
                  api_key=OPEN_ROUTER_API_KEY,
                  base_url=OPENROUTER_BASE_URL,
                  default_headers=headers
              )
              code_prompt = f"""
      Based on the following Pydantic AI documentation and requirements, generate Python code:
      Documentation Context:
      {documentation_context}
      Requirements:
      {user_requirements}
      Provide:
      1. Complete implementation code
      2. Brief explanation of the implementation
      """
              response = await deepseeker_client.chat.completions.create(
                  model=DEEPSEEKER_MODEL,
                  messages=[{"role": "user", "content": code_prompt}]
              )
              await deepseeker_client.close()
              content = response.choices[0].message.content
              return {
                  "code": content,
                  "explanation": content.split("Explanation:", 1)[-1] if "Explanation" in content else "No explanation provided."
              }
          except Exception as e:
              print(f"Error generating code: {e}")
              return {"error": str(e)}

      # Tool: Retrieve GitHub repository content
      @pydantic_ai_expert.tool
      async def retrieve_github_repo(ctx: RunContext[PydanticAIDeps], github_url: str) -> str:
          """Retrieve and clone the content of a GitHub repository."""
          try:
              repo_name = github_url.split('/')[-1]
              repo_dir = LOCAL_REPO_PATH / repo_name
              if not repo_dir.exists():
                  # Clone the GitHub repository if not already cloned
                  LOCAL_REPO_PATH.mkdir(parents=True, exist_ok=True)
                  os.system(f"git clone {github_url} {repo_dir}")

              # Read and return content from the repository
              repo_files = list(repo_dir.glob("**/*.py"))  # Adjust file pattern if needed
              repo_content = ""
              for file in repo_files:
                  with open(file, "r") as f:
                      repo_content += f.read() + "\n\n"

              return repo_content
          except Exception as e:
              print(f"Error retrieving GitHub repository: {e}")
              return f"Error retrieving repository: {str(e)}"

    • @PyJu80
      @PyJu80 5 days ago +1

      Hopefully, if I use your Pydantic GitHub expert folder in Windsurf, plus the code above, I should get the final RAG agent. Maybe I'll submit it in the hackathon, but to me that's not fair as I've used your templates to achieve it. I owe it to you. But the Pydantic RAG definitely uses the docs in Supabase, and DeepSeek definitely generates the code based on those docs.

    • @ColeMedin
      @ColeMedin  5 days ago +1

      That would be sweet if it all works out - good luck! And please feel free to submit it for the Hackathon, my videos are meant to be a guide for people to compete more easily so don't feel bad about that!

  • @prabhic
    @prabhic 16 days ago +1

    Very useful thank you once again 👍

  • @AOA_social
    @AOA_social 8 days ago +1

    Hi Cole. Great work on the content. Query: if you have started an app and AI agent with LangChain, would you swap over to Pydantic AI? Taking into account you are a beginner and using no-code tools (Cursor / Replit / Claude). Cheers mate.

    • @ColeMedin
      @ColeMedin  7 days ago

      Thank you very much! I like Pydantic AI a lot more than LangChain - it's simpler and has less abstraction, depending on what parts of the framework you are using. But if you've already started with LangChain I would just stick with that! It's still a fantastic framework.

    • @AOA_social
      @AOA_social 7 days ago +1

      @@ColeMedin Thanks Cole, keep it up.

    • @ColeMedin
      @ColeMedin  5 days ago

      Of course!

  • @EdilsonLima
    @EdilsonLima 15 days ago +1

    Great video! I'm implementing some agents using smolagents - perhaps you could make another video exploring this alternative approach? What do you think?

    • @ColeMedin
      @ColeMedin  14 days ago

      Yeah I am planning on making a video on smolagents!

  • @patoescl
    @patoescl 14 days ago +1

    Hi, I like your content, but why did you choose Supabase instead of Qdrant, Weaviate, Pinecone, or Milvus? I would like to understand your criteria for choosing one over the others.
    Or even Redis as a Vector Database. Thanks in advance for your help and good content

    • @ColeMedin
      @ColeMedin  14 days ago

      Thank you and good question! All of these platforms are pretty similar in capabilities from my experience, so I chose Supabase with PGVector for RAG because that way I have RAG and my SQL database all in one platform. I also really like having my knowledgebase be within a SQL table because that way I can query my knowledge base using both classic vector retrieval and SQL queries based on the structure of my knowledge.
      Redis is a good option as well, it's just that Redis is usually more for temporary storage (caching, in-memory database) and I want my knowledgebase to be very permanent.
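
      A rough sketch of what that hybrid querying can look like with supabase-py (the match_site_pages RPC and site_pages table names follow the code shared later in this thread, and query_embedding is assumed to come from your embedding model):

      from supabase import create_client

      supabase = create_client("https://YOUR-PROJECT.supabase.co", "YOUR-SERVICE-KEY")
      query_embedding = [0.0] * 1536  # placeholder; use a real embedding of the user query

      # Classic vector retrieval: a pgvector similarity search exposed as an RPC
      semantic_hits = supabase.rpc(
          "match_site_pages",
          {
              "query_embedding": query_embedding,
              "match_count": 5,
              "filter": {"source": "pydantic_ai_docs"},
          },
      ).execute()

      # Plain SQL-style querying on the same table, since the knowledge base
      # is just a regular Postgres table sitting next to the embeddings
      doc_urls = (
          supabase.table("site_pages")
          .select("url, title")
          .eq("metadata->>source", "pydantic_ai_docs")
          .order("chunk_number")
          .limit(20)
          .execute()
      )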

    • @nicholas_ruest
      @nicholas_ruest 13 days ago +1

      I like it because it's easy to use

    • @patoescl
      @patoescl 8 days ago

      @@ColeMedin Thanks for your answers, you are doing an excellent job educating us.

  • @CasparM
    @CasparM 15 days ago +1

    Could you please share your opinion about PydanticAI vs LangGraph? Is this comparison even valid?

    • @ColeMedin
      @ColeMedin  14 days ago +3

      Great question! So Pydantic AI and LangGraph aren't super comparable. Pydantic AI is a framework for building AI agents from the ground up, more similar to what you can do with LangChain (same ecosystem as LangGraph but very different). LangGraph on the other hand is a framework for building workflows around AI agents. So actually using Pydantic AI to build the agents and then LangGraph to connect multiple agents together in a single workflow is a super powerful combo!
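
      A rough sketch of what that combo can look like (not from the video; the model names and state fields are placeholders, and result.data may be result.output depending on the pydantic-ai version): Pydantic AI builds the individual agents, and LangGraph wires them into a workflow.

      from typing import TypedDict
      from langgraph.graph import StateGraph, END
      from pydantic_ai import Agent

      # Two Pydantic AI agents built "from the ground up"
      researcher = Agent("openai:gpt-4o-mini", system_prompt="Research the user's question and list key facts.")
      writer = Agent("openai:gpt-4o-mini", system_prompt="Write a short answer from the notes you are given.")

      class State(TypedDict):
          question: str
          notes: str
          answer: str

      # LangGraph handles the workflow around the agents
      async def research_node(state: State) -> State:
          result = await researcher.run(state["question"])
          return {**state, "notes": result.data}

      async def write_node(state: State) -> State:
          result = await writer.run(f"Notes:\n{state['notes']}")
          return {**state, "answer": result.data}

      graph = StateGraph(State)
      graph.add_node("research", research_node)
      graph.add_node("write", write_node)
      graph.set_entry_point("research")
      graph.add_edge("research", "write")
      graph.add_edge("write", END)
      app = graph.compile()
      # final = await app.ainvoke({"question": "...", "notes": "", "answer": ""})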

  • @qtptnqtptnable
    @qtptnqtptnable 9 days ago +1

    Thanks a lot for such great content. Is DeepSeek safe in terms of alignment and deception?

    • @ColeMedin
      @ColeMedin  8 days ago +1

      You are so welcome! Good question - there is a bit of censorship that the Chinese government usually requires in their models for topics related to current events in that region and their government, but other than that I haven't seen any issues!

  • @lewisguapo
    @lewisguapo 16 days ago +1

    What are your thoughts on smolagents? Is there a way to implement that in the system?

    • @ColeMedin
      @ColeMedin  14 days ago +1

      I think smolagents is a fantastic way to get started with building AI agents, though maybe not the most customizable. I'm planning on doing a video for it soon though!

  • @kovlabs
    @kovlabs 14 days ago +1

    Hey Cole. Have you taken a look at Semantic Kernel by any chance?

    • @ColeMedin
      @ColeMedin  13 days ago +1

      I have not! But it's actually come up a few times this week already, so I'm thinking I'll have to take a look.

  • @Ben-cg8qk
    @Ben-cg8qk 16 days ago +1

    Great stuff! Right now I'm on a mission to find out how I can extract docs from a framework and feed them to my AI to use as context, in hopes that it gets better for building. Like how you said, they are not that good at specific frameworks like Pydantic and LangChain. Would be cool to feed the AI their full documentation to help with coding and projects! If you have any ideas please let me know!

    • @ColeMedin
      @ColeMedin  14 days ago +1

      Thanks Ben!
      Next month I'm actually going to be working on an agent for exactly that - making it an expert at a specific framework so it can actually create agents with it unlike general LLMs like Claude/GPT.

    • @Ben-cg8qk
      @Ben-cg8qk 13 days ago +1

      @@ColeMedin Amazing! Will definitely be following along.

    • @ColeMedin
      @ColeMedin  13 days ago +1

      I appreciate it!

  • @SHIVAM.M.S
    @SHIVAM.M.S 16 days ago +1

    I'm trying to build an AI agent which can answer any questions and problems thrown at it and can give real-time information for any query. I'm also trying to give it emotions like humans. Is that possible?

    • @ColeMedin
      @ColeMedin  16 days ago

      Are there specific problems you are looking to solve that wouldn't be solvable by a general LLM like Claude or GPT? Like a custom knowledgebase you are looking to build up?
      For emulating human emotions LLMs are never perfect but you can get pretty good results with the right prompting!

  • @BilBini
    @BilBini 6 days ago +1

    Impressive 😬

  • @theorderofz
    @theorderofz 16 days ago +1

    Awesome content. Where can I find the first two videos/live stream?

    • @ColeMedin
      @ColeMedin  14 days ago +1

      Thank you! I've got the playlist here:
      ruclips.net/p/PLyrg3m7Ei-MrSXWv90oXXbuSsbdOP9j2n

  • @PyJu80
    @PyJu80 16 days ago +1

    Hi Cole. Just a quick one. I copied and pasted an n8n workflow into Claude and asked it to convert it to Pydantic AI code, which it had no problem doing. Would a hack be to build the agent in n8n (for those who like visual building), download the JSON workflow, and get it converted to Pydantic AI code to run it? Because it was so easy and seamless, I didn't test it. But I sent the code to ChatGPT and asked what the code did, and it explained it exactly like my n8n workflow. Then I sent the JSON n8n workflow to ChatGPT, and apparently the Pydantic code was better optimized.
    Would I be right in thinking this would work?

    • @ColeMedin
      @ColeMedin  14 days ago

      Good question! I'm actually surprised it was able to spit out good Pydantic AI code, generally even the best LLMs don't have great knowledge for AI agent frameworks, especially the newer ones.
      But assuming it actually understands the framework then yes this is the best way to go about it!

    • @PyJu80
      @PyJu80 14 days ago +1

      @@ColeMedin Essentially it's just JSON to Python, or am I off topic? If you get a chance to test this, please let me know if I am right. Maybe I might win the hackathon by building an agent that converts n8n workflows to Pydantic AI. I don't have the time to build it, but I'll take 10% if anyone builds it and wins.... 🤣🤣😅🤣 Love your work and passion for others. That's why when it comes to AI I only follow you. Your energy is real. You actually want people to succeed and you're willing to help them achieve their goals, whilst avoiding the mistakes and time consumption you have already been through. And let's be real, I'm sure you don't need likes and subscriptions (don't think I've ever heard you ask anyone for a like). 👊🏾🫡 🐐🐐

    • @ColeMedin
      @ColeMedin  13 days ago

      That would definitely be an agent that could win the Hackathon!
      I appreciate the kind words - that means the world to me! I do ask for likes at the end of my videos haha, but yeah AI is just my passion that I want to share with the world :)

    • @PyJu80
      @PyJu80 13 days ago

      @@ColeMedin Maybe I just don't make it to the end. Straight to GitHub or the Studio after your announcements. 🫡

  • @larsverwaters3423
    @larsverwaters3423 12 days ago

    Hey man, is it possible to create an AI agent that can help me with daily private tasks, like which groceries to buy, or if my energy contract ends it seeks new alternatives, etc., to save time and money?

    • @ColeMedin
      @ColeMedin  11 days ago +1

      You sure can! You'd just have to get your AI agent connected to services like your email to watch for things like an energy contract ending. All of that can be done in tools you set up for your agent!
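
      As a rough illustration of what a tool like that can look like in Pydantic AI (a made-up sketch; check_inbox is a placeholder for a real email/Gmail integration):

      from dataclasses import dataclass
      from pydantic_ai import Agent, RunContext

      @dataclass
      class LifeAdminDeps:
          email_address: str  # the inbox the agent is allowed to watch

      assistant = Agent(
          "openai:gpt-4o-mini",
          deps_type=LifeAdminDeps,
          system_prompt="You help with daily personal admin: groceries, contracts, renewals."
      )

      @assistant.tool
      async def check_inbox(ctx: RunContext[LifeAdminDeps], subject_keyword: str) -> list[str]:
          """Return subjects of recent emails matching a keyword (placeholder implementation)."""
          # In a real agent this would query Gmail/IMAP for ctx.deps.email_address;
          # here it returns a canned example so the sketch stays self-contained.
          return [f"Your energy contract ends on 2025-03-01 ({subject_keyword})"]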

  • @mikemansour4634
    @mikemansour4634 16 days ago +1

    Great work Cole, is it possible to connect it to an interface? Or run it in the cloud?

    • @ColeMedin
      @ColeMedin  16 days ago +2

      Thanks Mike! Connecting the agent to an interface will be the next video for the series! Then deploying to the cloud later on.

    • @mikemansour4634
      @mikemansour4634 16 days ago +1

      @ I am so excited for this, thank you so much

    • @ColeMedin
      @ColeMedin  14 days ago

      You bet!

  • @Chippo-e3p
    @Chippo-e3p 15 days ago

    What is the big picture for converting from n8n to Python... is it more stable or more efficient?

    • @ColeMedin
      @ColeMedin  14 days ago

      Good question! The big picture is that custom coding your agent gives you a lot more control and flexibility in the long run for most use cases. It isn't always necessary, but I find it is for a lot of the more production-grade agents I'm building.

  • @jawadmansoor6064
    @jawadmansoor6064 16 days ago

    What compute is required to run this? Isn't it a 600B+ model? If you need a model this large for Pydantic to work right, then I doubt Pydantic is any good.

    • @solutionalwebdesignantwerp6122
      @solutionalwebdesignantwerp6122 15 days ago

      PydanticAI is only a framework, so you can run it on any computer. The LLMs you use in your workflow can be local / OpenAI / OpenRouter, your choice.

    • @ColeMedin
      @ColeMedin  14 days ago

      DeepSeek V3 is over 600B parameters, so you couldn't realistically run it yourself. That's why I use OpenRouter here! But you can certainly use smaller models with Pydantic AI, like Llama 3.2, Qwen, Mistral, etc., and get great performance. Pydantic AI isn't tied to specific LLMs and it doesn't rely on you using large ones!
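
      For reference, pointing a Pydantic AI agent at DeepSeek V3 through OpenRouter looks roughly like this; depending on the pydantic-ai version you may instead pass base_url/api_key directly to OpenAIModel or use a provider object:

      import os
      from openai import AsyncOpenAI
      from pydantic_ai import Agent
      from pydantic_ai.models.openai import OpenAIModel

      # OpenRouter exposes an OpenAI-compatible API, so the standard OpenAI client works
      client = AsyncOpenAI(
          base_url="https://openrouter.ai/api/v1",
          api_key=os.getenv("OPEN_ROUTER_API_KEY"),
      )

      model = OpenAIModel("deepseek/deepseek-chat", openai_client=client)
      agent = Agent(model, system_prompt="You are a helpful assistant.")
      # result = await agent.run("Hello!")  # swap in any smaller model the same way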

  • @arulgandhi9126
    @arulgandhi9126 16 days ago +2

    🔥🔥🔥

  • @RalphKrausse
    @RalphKrausse 1 day ago +1

    Great stuff... but "Dear n8n": please add an "Export to Python" option...

    • @ColeMedin
      @ColeMedin  21 hours ago

      Yeah that would be awesome!

  • @ZuulsIb
    @ZuulsIb 13 days ago

    Please, what is the best AI tool for automated video generation that can produce quality videos completely on its own for my AI niche tutorials? I don't want to do manual editing.

  • @faizywinkle42
    @faizywinkle42 16 days ago +1

    What about an AI agent that can control Unity and build games?? lol