Patrick Devaney
  • Videos: 18
  • Views: 907
AI Agents Improve Your Code Step-by-Step | Groq + Gradio Demo
🚀 Building Better Code with Sequential Agent Chat UI using Groq and Gradio
In this video, we showcase an innovative Sequential Agent Chat UI powered by Groq hardware and the intuitive Gradio interface. This system redefines collaborative coding by leveraging a series of specialized agents to iteratively improve the code based on your input prompt.
💡 What’s Inside?
How it Works: Discover how the system processes your coding task through sequential agents (a minimal sketch follows below):
1️⃣ Code Writer Agent: Drafts the initial implementation based on the task description.
2️⃣ Code Reviser Agent 1: Reviews the draft for functionality, correctness, and adherence to best practices.
3️⃣ Code Reviser Agent 2: Optimizes the code fo...
Views: 14
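For anyone curious how a pipeline like this can be wired up, here is a minimal sketch using the Groq Python SDK and Gradio. The model name, stage prompts, and function names are illustrative assumptions, not the exact configuration from the video:

```python
# Minimal sketch of a sequential agent pipeline with Groq + Gradio.
# Model name and prompts are assumptions for illustration.
import os
import gradio as gr
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

# One system prompt per stage, mirroring the three agents described above.
STAGES = [
    "You are a code writer. Draft an initial implementation of the task.",
    "You are a code reviser. Check functionality, correctness, and best practices, then return the improved code.",
    "You are a code optimizer. Optimize the revised code for readability and performance.",
]

def run_stage(system_prompt: str, content: str) -> str:
    resp = client.chat.completions.create(
        model="llama3-70b-8192",  # assumed model; use whichever Groq model you prefer
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": content},
        ],
    )
    return resp.choices[0].message.content

def sequential_agents(task: str) -> str:
    output = task
    for prompt in STAGES:  # each agent consumes the previous agent's output
        output = run_stage(prompt, output)
    return output

gr.Interface(fn=sequential_agents, inputs="text", outputs="text",
             title="Sequential Agent Chat").launch()
```

Each agent simply consumes the previous agent's output, so adding a fourth stage is just one more prompt in the list.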

Videos

Rustifying My Repo With Swarms
Views: 9 · 12 hours ago
🚀 Python to Rust Code Conversion: Automating Performance Upgrades! 🦀 In this video, we dive into an innovative Python-to-Rust agent that automates the conversion of Python files into Rust for improved execution speed in performance-critical parts of the Swarms framework. Here's how it works: 🔍 Directory Traversal The agent scans a Python codebase, traversing the entire directory structure to id...
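As a rough illustration of the traversal step described above, here is a minimal sketch; the convert_to_rust() helper is a hypothetical placeholder for the LLM call the agent actually makes:

```python
# Sketch of the directory-traversal step: walk a Python repo, convert each
# file, and mirror the tree as .rs files. Names here are hypothetical.
from pathlib import Path

def convert_to_rust(python_source: str) -> str:
    """Placeholder for the LLM call that translates Python source to Rust."""
    raise NotImplementedError

def rustify_repo(repo_root: str, out_dir: str) -> None:
    root, out = Path(repo_root), Path(out_dir)
    for py_file in root.rglob("*.py"):  # traverse the entire directory structure
        rust_code = convert_to_rust(py_file.read_text(encoding="utf-8"))
        target = out / py_file.relative_to(root).with_suffix(".rs")
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(rust_code, encoding="utf-8")
```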
groq swarms demo
Views: 44 · 3 months ago
https://github.com/patrickbdevaney/Cookbook/tree/main/cookbook/enterprise/finance/multi_agent/groq A test run of analyzing a congressional spending bill using GroqCloud's API and the Python framework Swarms. Three financial analyst agents give independent, parallel analyses. You can find out more about Swarms at the following link: https://docs.swarms.world/en/latest/
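For a rough idea of how the parallel fan-out can look, here is a minimal sketch that calls GroqCloud directly from a thread pool rather than through the Swarms framework's own orchestration; the analyst prompts and model name are assumptions:

```python
# Three analysts running in parallel against GroqCloud.
# Prompts and model name are illustrative assumptions.
import os
from concurrent.futures import ThreadPoolExecutor
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

ANALYST_PROMPTS = [
    "You are a fiscal-policy analyst. Summarize the bill's spending priorities.",
    "You are a risk analyst. Flag provisions with unclear or open-ended costs.",
    "You are a macro analyst. Estimate the bill's likely economic impact.",
]

def analyze(system_prompt: str, bill_text: str) -> str:
    resp = client.chat.completions.create(
        model="llama3-70b-8192",  # assumed model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": bill_text},
        ],
    )
    return resp.choices[0].message.content

def parallel_analyses(bill_text: str) -> list[str]:
    # Each analyst works independently, so the calls can run concurrently.
    with ThreadPoolExecutor(max_workers=len(ANALYST_PROMPTS)) as pool:
        return list(pool.map(lambda p: analyze(p, bill_text), ANALYST_PROMPTS))
```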
Local InstantMesh Tiger
Views: 16 · 4 months ago
Following the instructions in this repo github.com/TencentARC/InstantMesh very closely, I created and activated a conda environment using the Miniconda prompt on Windows. Then I separately cloned the repo in C:/ using Git Bash. I changed directory to the root of the cloned repo in Miniconda and did the following: conda create --name instantmesh python=3.10 conda activate instantmesh pip install -U pip ...
Finetune LLaMa 7b on RTX 3090 GPU - Tutorial
Views: 249 · 4 months ago
Here is a step-by-step tutorial on how to fine-tune a Llama 7B large language model locally using an RTX 3090 GPU. This comprehensive guide is perfect for those who are interested in enhancing their machine learning projects with the power of Llama 7B. In this tutorial, I briefly walk through the entire process: setting up a Python virtual environment on your Ubuntu OS, launching a Jupyter Lab s...
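As a rough companion to the video, here is a minimal QLoRA-style sketch for fine-tuning a ~7B model on a single 24 GB GPU with Transformers, PEFT, and bitsandbytes; the checkpoint name, dataset file, and hyperparameters are assumptions and may differ from what the tutorial actually uses:

```python
# Minimal 4-bit LoRA fine-tuning sketch for a ~7B model on one 24 GB GPU.
# Checkpoint, dataset, and hyperparameters are assumptions for illustration.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments, Trainer,
                          DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"  # assumed (gated) checkpoint
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb,
                                             device_map="auto")
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                                         task_type="CAUSAL_LM"))

# Plain-text training file; swap in your own dataset.
data = load_dataset("text", data_files={"train": "train.txt"})["train"]
data = data.map(lambda x: tok(x["text"], truncation=True, max_length=512),
                remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           fp16=True, logging_steps=10),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```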
Initializing a Hyperledger Fabric Blockchain with Docker and Ubuntu
Views: 38 · 9 months ago
Just a CLI demo of Hyperledger Fabric testnet initializing locally, assuming you have Ubuntu/Ubuntu for Windows and store the blockchain image with docker correctly.
SQLV2 Q4 demo
Views: 8 · 9 months ago
Demoing a 4-bit quantized LLM trained for writing SQL queries. With adequate hardware, this LLM is arguably faster and better than GPT-4 within its domain. It could help data scientists and web developers scaffold complex or repetitive sequences of queries from natural language requests.
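For a sense of how a quantized GGUF model like this can be prompted locally, here is a minimal sketch using llama-cpp-python; the model file name, schema, and prompt template are assumptions:

```python
# Prompting a local 4-bit GGUF SQL model with llama-cpp-python.
# The file name and prompt format are assumptions for illustration.
from llama_cpp import Llama

llm = Llama(model_path="sqlcoder-q4_k_m.gguf", n_ctx=2048)  # hypothetical file

schema = "CREATE TABLE orders (id INT, customer_id INT, total NUMERIC, created_at DATE);"
request = "Total revenue per customer in 2023, highest first."

out = llm(
    f"### Schema:\n{schema}\n### Task:\n{request}\n### SQL:\n",
    max_tokens=256,
    stop=["###"],  # stop before the model starts a new section
)
print(out["choices"][0]["text"])
```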
biomistral q2k q3km q8 comparison
Views: 68 · 9 months ago
Demo of performance on a medical scenario prompt with BioMistral-7B at Q2_K, Q3_K_M, and Q8 quantization. Q3_K_M gives the best balance of time and output quality in this test. In a production environment a hospital might use GPT-4, BLOOM, or a larger-parameter Mistral model. In the near future, text generation, computer vision, and multi-modal models will approach 100% accuracy and instantaneous response time. Speed and accuracy w...
mixtral 2x7b Quantized 2 K prompt on machine learning
Views: 95 · 9 months ago
A 2-bit quantized Mixtral 2x7b model, smaller than the Q3_K_M variant but with similar response time and quality.
wizardLM 1b solidity smart contract prompt
Views: 36 · 9 months ago
Testing the same prompt I gave to the Mixtral 2x7b model.
deepseek coder6.7b and WizardLM 1B a side by side comparison
Views: 25 · 9 months ago
The same prompt for both LLMs. I think DeepSeek is better in structure and presentation. WizardLM is only slightly worse, though, and manages to present much of the same information in half the time.
laser dolphin mixtral 2x7b dpo Q3 K M
Views: 122 · 9 months ago
Prompted Mixtral 2x7b to write a smart contract. Much of the logic looked sound, except that it treated a time value as gwei when it should have compared it to block.timestamp. Slower, higher-quality output like this should leverage batched prompting.
WizardCoder-1B Demo: Powerful Responsive Coding LLM at Home
Views: 53 · 9 months ago
Find the GGUF file, ready to run, here: https://huggingface.co/patrickbdevaney/WizardLM-1b-GGUF/tree/main
Demoing a Large Language Model running locally on my laptop
Views: 31 · 9 months ago
Model: laser-dolphin-mixtral-2x7b-dpo.Q3_K_M Front-end app: https://github.com/oobabooga/text-generation-webui Device: HP Pavilion Laptop 15-cs3xxx RAM: 12 GB CPU: i5-1035G1 @ 1 GHz Download the model from: https://huggingface.co/TheBloke/laser-dolphin-mixtral-2x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q3_K_M.GGUF Download the latest release of the text-generation-webui. Extract it to...
Aleo: Zero Knowledge Dapps - Blockchain at FIU
Views: 69 · a year ago
zkWorkshop: A Deep Dive into Zero-Knowledge Technology Join us for an exclusive exploration into the world of zero-knowledge technology at the Aleo zkWorkshop. What: An exclusive opportunity to dive deep into the privacy-centric realm of blockchain with Aleo's DevRel Ambassador, Brian Seong, and learn how to craft private applications that redefine the landscape of decentralized apps. Who: Deve...
Arb Smart Contract and Front End Dapp
Views: 13 · a year ago
Demo Lionhacks NFT Based Content Authentication
Views: 6 · a year ago
Penn Blockchain Hackathon Demo Oracle NFT Minter
Views: 13 · a year ago

Comments

  • @rahulverlekar
    @rahulverlekar · 5 months ago

    Thanks for the quick and informative video. I was wondering, what are the specs of the PC you are running this on? Can you please let me know?

    • @patrickdevaney3361
      @patrickdevaney3361 · 5 months ago

      Sure, no problem. CPU: AMD Ryzen 9 5900X (3.7 GHz / 12 cores / 24 threads) RAM: Corsair DDR4-3200 (64 GB) SSD: Samsung 980 Pro NVMe (2 TB) Motherboard: Gigabyte B550 Aorus Elite AX V2 Fan: Noctua NH-U12S GPU: Zotac Gaming Trinity OC GeForce RTX 3090 Power supply: Corsair RM1000e Case: Montech AIR 903 MAX

    • @rahulverlekar
      @rahulverlekar · 5 months ago

      @@patrickdevaney3361 Thanks man. Gives me good leads to build my own system.

  • @giannisanrochman
    @giannisanrochman · 9 months ago

    What webui are you using for the chat interface?

    • @patrickdevaney3361
      @patrickdevaney3361 · 9 months ago

      github.com/oobabooga/text-generation-webui It is a popular and versatile web UI for LLMs. It supports these model backends: "Transformers, llama.cpp (through llama-cpp-python), ExLlamaV2, AutoGPTQ, AutoAWQ, GPTQ-for-LLaMa, CTransformers, QuIP#", and quantization with bitsandbytes: "Transformers library integration: load models in 4-bit or 8-bit precision through bitsandbytes, use llama.cpp with transformers samplers (llamacpp_HF loader), CPU inference in 32-bit precision using PyTorch". Running models from HF with the Colab free tier is flexible, though.
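      For reference, a minimal sketch of the Transformers + bitsandbytes 4-bit path that integration exposes; the checkpoint name is just an example, not what the webui loads by default:

```python
# Loading a model in 4-bit precision with Transformers + bitsandbytes.
# The checkpoint name is an example; any causal LM on the Hub works.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example checkpoint
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb,
                                             device_map="auto")

prompt = "Explain what 4-bit quantization trades off."
inputs = tok(prompt, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=128)[0],
                 skip_special_tokens=True))
```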

    • @jason_v12345
      @jason_v12345 · 6 months ago

      @@patrickdevaney3361 Woah, that thing is written entirely with the native DOM API. That's hardcore. Lol.

  • @drmetroyt
    @drmetroyt · 9 months ago

    Is this a good model for a RAG application? Does this model give information only from the uploaded PDFs, or does it use its own data? I'm searching for a good 13b model for a RAG application.

    • @patrickdevaney3361
      @patrickdevaney3361 · 9 months ago

      www.databricks.com/blog/introducing-mixtral-8x7b-databricks-model-serving github.com/dhivyeshrk/Retrieval-Augmented-Generation-for-news It's based on Mixtral 8x7b, which is considered one of the best open-source transformer models currently. All of the information shown is from the model itself. Based on these articles, it should be decent. Not sure, though, about 13b.
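      For context, a minimal sketch of the retrieval step in a RAG setup: only the retrieved PDF chunks go into the prompt, so answers come from your documents rather than from the model's training data. The embedding library and model here are assumptions:

```python
# Retrieve the most relevant text chunks for a question using embeddings.
# Library and model choices are assumptions for illustration.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    q = embedder.encode(question, convert_to_tensor=True)
    c = embedder.encode(chunks, convert_to_tensor=True)
    scores = util.cos_sim(q, c)[0]          # cosine similarity to every chunk
    best = scores.topk(k).indices.tolist()  # indices of the k best matches
    return [chunks[i] for i in best]

# The retrieved chunks are then prepended to the question and sent to the
# generator model (e.g. a Mixtral endpoint).
```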