Get Started with Gemma 2 Locally on Mac Using MLX

  • Published: 6 Sep 2024
  • In this video, we'll explore how to convert and run Google's Gemma 2 language model locally on your Mac using the MLX framework. You'll learn:
    - What Google Gemma 2 is and its variants
    - How to convert a Hugging Face/PyTorch model to MLX (see the conversion sketch after this list)
    - Steps to run Gemma 2 on your local machine (see the generation example under Model Weights)
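    The video uses its own conversion script (linked under Additional Resources below). As a rough sketch only, the same conversion can also be done with the mlx-lm Python package; the Hugging Face repo id and output directory here are illustrative assumptions, not values taken from the video.

      # Hedged sketch: convert Gemma 2 weights from Hugging Face to MLX format.
      # Requires an Apple Silicon Mac and: pip install mlx-lm
      from mlx_lm import convert

      convert(
          hf_path="google/gemma-2-9b-it",   # assumed repo id; the gated repo requires accepting the Gemma license
          mlx_path="gemma-2-9b-it-mlx",     # assumed local output directory
          quantize=True,                    # quantize the weights (4-bit by default) for local use
      )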
    What is Google Gemma 2?
    Gemma 2 is a family of lightweight, state-of-the-art open models built from the same research and technology used to create Google's Gemini models. It comes in three sizes:
    1. Gemma 2 2.6B
    2. Gemma 2 9B
    3. Gemma 2 27B
    Each size is available in pre-trained and instruction-tuned variants.
    Is Google Gemma 2 free?
    Yes, Gemma 2 is free to use. The weights are openly available on the Hugging Face Hub after accepting Google's Gemma license terms.
    Model Weights
    - Quantized for MLX: huggingface.co...
    - Full Precision: huggingface.co...
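    Building on the "run Gemma 2 on your local machine" step above, here is a minimal, hedged sketch of loading quantized MLX weights and generating text with the mlx-lm package. The mlx-community repo id is an assumption, since the links above are truncated.

      # Hedged sketch: load quantized Gemma 2 weights and generate text locally.
      from mlx_lm import load, generate

      model, tokenizer = load("mlx-community/gemma-2-9b-it-4bit")  # assumed repo id

      prompt = "Explain the MLX framework in one paragraph."
      response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
      print(response)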
    Additional Resources
    - Gemma 2 MLX Conversion Script: github.com/Bla...
    - Gemma 2 Transformers Implementation: github.com/hug...
    - Gemma 2 PyTorch Implementation: github.com/goo...
    - Fine-tuning Gemma Guide: unsloth.ai/blo...
    - Gradio App for MLX: github.com/SOS...
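    The Gradio app linked above is its own project; as a rough, hedged sketch of the general idea, a minimal Gradio chat wrapper around mlx-lm could look like the following. The repo id and function names are assumptions, not code from that repository.

      # Hedged sketch: a tiny Gradio chat UI on top of mlx-lm (pip install gradio mlx-lm).
      import gradio as gr
      from mlx_lm import load, generate

      model, tokenizer = load("mlx-community/gemma-2-9b-it-4bit")  # assumed repo id

      def chat(message, history):
          # Format the user message with Gemma's chat template before generating.
          messages = [{"role": "user", "content": message}]
          prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
          return generate(model, tokenizer, prompt=prompt, max_tokens=256)

      gr.ChatInterface(chat).launch()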
    Connect with Me
    - LinkedIn: / prince-canuma
    - Twitter: / prince_canuma
    - Medium: / prince-canuma

Comments • 6

  • @MaziyarPanahi · A month ago · +2

    A complete walkthrough! Thank you, king!

  • @skanderbegvictor6487 · A month ago · +1

    Subscribed, been following you on Twitter. I'm currently trying to write custom kernels for graph machine learning in MLX and I'm stuck.

    • @princecanuma · A month ago

      Great to hear 👌🏽 keep up the good work

  • @gokayfem · A month ago · +1

    let's go king!!