Mei Chen
  • 6 videos
  • 1,963 views
ElevenLabs Agent Builder Full Review
00:00 - Intro
00:10 - TL;DR
02:27 - Deep dive into features
10:42 - Calling a dental receptionist agent
13:03 - Logs
14:00 - Cost
-------
Hook Agent up to Twilio - elevenlabs.io/docs/developer-guides/integrating-with-twilio
Get Conversation (one ID at a time) - elevenlabs.io/docs/conversational-ai/api-reference/get-conversational-ai-conversations
Create Dynamic conversations (i.e. put a variable in the first message) - elevenlabs.io/docs/conversational-ai/customization/conversation-configuration
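The dynamic-conversation link above is about injecting per-caller values into the agent's first message. As a minimal sketch of that behavior (the `{{var}}` placeholder syntax matches ElevenLabs dynamic variables; the helper name, the example values, and the error handling are my own, not from the docs):

```python
import re

def render_first_message(template: str, variables: dict) -> str:
    """Substitute {{name}} placeholders (ElevenLabs-style dynamic
    variables) into an agent's configured first message."""
    def sub(match: "re.Match") -> str:
        name = match.group(1).strip()
        if name not in variables:
            raise KeyError(f"missing dynamic variable: {name}")
        return str(variables[name])
    # Non-greedy match so multiple placeholders on one line work.
    return re.sub(r"\{\{(.*?)\}\}", sub, template)

greeting = render_first_message(
    "Hi {{ patient_name }}, this is the front desk at {{ clinic }}.",
    {"patient_name": "Alex", "clinic": "Smile Dental"},
)
print(greeting)  # Hi Alex, this is the front desk at Smile Dental.
```

In the real API you pass the variables in the conversation configuration and the platform does this substitution server-side; the sketch only illustrates the templating step.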
-------
Insta @meigustas
Views: 48

Videos

LLM for data extraction
181 views · 19 hours ago
Tea Time w/ Mei - Episode 1 00:00 - Intro 00:29 - What I'm working on 02:32 - What I learned the hard way: RAG in prod
Want to get things done? Call this AI
73 views · 1 day ago
need an accountability buddy? 00:00 - intro 00:40 - the idea 01:05 - the demo #AI #llm #voice #accountability
claude AI uses the computer 💻 demo flaw, 4 tests, cost $$$
492 views · 1 month ago
00:00 - Intro 00:29 - Pointing out flaw in the demo vid 01:42 - How to run it yourself 03:03 - Test #1 generate a mandala and sketch it in SVG 04:31 - Test #2 find properties and put in spreadsheet 05:54 - Test #3 create an ASCII donut and run in terminal 06:44 - Test #4 create a pacman game 08:36 - Cost
openai realtime API ☎️
463 views · 1 month ago
Timecodes 0:00 - intro 0:50 - cost $$$ 2:07 - playground 5:10 - run demo code with function calling and memory 8:08 - twilio integration demo Codebases OpenAI Demo: github.com/openai/openai-realtime-console Twilio Demo: github.com/twilio-samples/speech-assistant-openai-realtime-api-node Reach me @meigustas on insta
LLAMA 3.2 🦙
723 views · 1 month ago
Timecodes 0:00 - intro 0:09 - history of llama 3 models 0:45 - llama 3.2 1:11 - what can you do with llama 3.2? 3:37 - model evaluation 5:00 - test 1: 3D brain teaser 5:17 - test 2: roast / identify a person in the picture 6:58 - test 3: screen navigation Reach me at @meigustas on insta

Comments

  • @marouane53
    @marouane53 · 5 days ago

    Very insightful

  • @og_23yg54
    @og_23yg54 · 6 days ago

    Fan #1 here 😅

  • @timmyotoole5
    @timmyotoole5 · 12 days ago

    Loved the "blague glacée" bit!

  • @errorNotFound-xh7qv
    @errorNotFound-xh7qv · 1 month ago

    Great explainer video, Mei Chen! Really nice breakdown and clear steps. Keep it up!

  • @b2brish
    @b2brish · 1 month ago

    Great video, Mei!

  • @aplanetexpress
    @aplanetexpress · 1 month ago

    Love this style ❤❤

  • @dongpoyao4576
    @dongpoyao4576 · 1 month ago

    Beauty and brains!

  • @iwillbesoon
    @iwillbesoon · 1 month ago

    Awesome!

  • @resonanceofambition
    @resonanceofambition · 1 month ago

    Plot twist: Mei chen is Llama 3.2

  • @shuoyuanchen7800
    @shuoyuanchen7800 · 1 month ago

    idk outsourcing to india or southeast asia sounds cheaper

    • @meigustas
      @meigustas · 1 month ago

      sure, you will also get the indian accent along with it 😂

    • @Satyam1010-N
      @Satyam1010-N · 7 days ago

      @@meigustas depends on what part of India 😅, mother tongue influence btw. I tried the typical accent; as an Indian I couldn't speak it. Probably the older generation has that, since they weren't that good at English.

  • @KevinxJin
    @KevinxJin · 1 month ago

    Great video Mei! I've been eyeing the realtime API for quite some time. I came across a reddit post saying it costs way more than it's supposed to, though. I'll have to build a prototype around the realtime API anyway, regardless of the price.

    • @KevinxJin
      @KevinxJin · 1 month ago

      Do you happen to know any cheaper alternatives?

    • @meigustas
      @meigustas · 1 month ago

      @@KevinxJin If you're on a tight timeline for product delivery, just build out the working solution first with OpenAI. The cost will likely come down in future release cycles, as this is only the beta version. Other alternatives:
      - VAPI (probably the easiest way to get started with a prototype)
      - Telnyx as phone-call orchestrator + LLM (I built this out last year; a lot of hassle and lots of latency)
      - Amazon Lex (good for on-prem deployments at data-paranoid companies, but from the demos it sounds very robotic)
      - Deepgram (there's a waitlist for this; latency is still present from what I've seen)

  • @airjoshua
    @airjoshua · 1 month ago

    hey! we met in sf at cohere. nice work i’ll like and subscribe 😊

    • @meigustas
      @meigustas · 1 month ago

      hey! small world :D

  • @stardebrisx9
    @stardebrisx9 · 1 month ago

    The sound mixing needs some work, but nice video. Amazing how live demos always break and surface weird things like that.

  • @chete4479
    @chete4479 · 1 month ago

    _Club of Rome = Bilderberg Club = another name, ANOTHER FRANCHISE OF THE SAME GENOCIDAL MASONS AS ALWAYS_ *REDUCE THE WORLD POPULATION TO 500 MILLION SLAVES TO SERVE THE ILLUMINATI ELITE*

  • @mahakleung6992
    @mahakleung6992 · 1 month ago

    test

    • @meigustas
      @meigustas · 1 month ago

      @@mahakleung6992 test test

    • @mahakleung6992
      @mahakleung6992 · 1 month ago

      @@meigustas I will do this in sections. Let's find out what YT's big problem is.

    • @mahakleung6992
      @mahakleung6992 · 1 month ago

      @@meigustas Which "Mei" is your name; the 美 in 美國 (America)? Your videos are good! 1 of 6

    • @mahakleung6992
      @mahakleung6992 · 1 month ago

      @@meigustas First video. Well done and informative. I haven't played around with Llama 3.2 yet. My three main models were ORCA1 13B, ORCA2 13B, and MIXTRAL 8x7B. I am pretty happy with the MIXTRAL, as it is well suited to the NVidia RTX 4090 ... 46B in all with a 32K context. Given the MoE architecture, there is very low latency to the first token, with a throughput of around 30-40 T/S. I constructed a memory for the MIXTRAL using ChromaDB. I need to post-process the MIXTRAL's memories before ingestion, which I do very simply now. I have considered using Llama 3.2 for that post-processing, given its small footprint and very large context. I believe the best way to improve VDB model memories is by improving the quality of the memories themselves. I am planning to give Llama 3.2 a try for that. 2 of 6
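The post-processing step the comment above mentions (cleaning model "memories" before ChromaDB ingestion) could look something like this; the specific rules here (whitespace normalization, a minimum length, case-insensitive dedupe) are my own guesses at what "very simply" might mean, not the commenter's actual pipeline:

```python
def preprocess_memories(raw: list, min_len: int = 20) -> list:
    """Clean raw model 'memories' before vector-DB ingestion:
    collapse whitespace, drop short fragments, remove duplicates."""
    seen = set()
    cleaned = []
    for text in raw:
        normalized = " ".join(text.split())
        if len(normalized) < min_len:
            continue  # too short to be a useful memory
        key = normalized.lower()
        if key in seen:
            continue  # duplicate after normalization (case-insensitive)
        seen.add(key)
        cleaned.append(normalized)
    return cleaned

memories = preprocess_memories([
    "The user prefers  concise answers.",
    "the user prefers concise answers.",  # duplicate once normalized
    "ok",                                 # too short
])
print(memories)  # ['The user prefers concise answers.']
```

A small LLM like Llama 3.2 would slot in as an extra rewrite pass on each surviving memory (summarize, resolve pronouns) before it goes into the collection.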

  • @brabbbus
    @brabbbus · 1 month ago

    Please no background music ...

    • @meigustas
      @meigustas · 1 month ago

      @@brabbbus got it thanks for the feedback

    • @candyts-sj7zh
      @candyts-sj7zh · 1 month ago

      I don't mind it much, actually @@meigustas