
Fastest speech to text transcription, 100% offline - Whisper.cpp | Zero latency

  • Published: 25 May 2024
  • Today we will see how to download and use whisper offline.
    Whisper from openai: github.com/ope...
    Whisper.cpp: github.com/gge...
    Models: github.com/gge...
    - - - - - - - - - - - - - - - - - - - - - -
    Follow us on social networks:
    Instagram: / codewithbro_
    ---
    Support us on patreon: / codewithbro
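The download-and-run flow covered in the video matches the standard whisper.cpp quickstart as of the video's date (May 2024, when the project still built with `make` and shipped a `main` binary); a minimal sketch, where `base.en` and the sample WAV are the upstream defaults:

```shell
# Clone and build whisper.cpp (Metal is picked up automatically on Apple Silicon)
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
make

# Download a ggml model; larger models are more accurate but slower
bash ./models/download-ggml-model.sh base.en

# Transcribe a 16 kHz WAV file, fully offline
./main -m models/ggml-base.en.bin -f samples/jfk.wav
```

Everything runs locally once the model file is on disk, which is what makes the zero-latency, offline claim in the title possible.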

Comments • 37

  • @codewithbro95
    @codewithbro95  2 months ago +4

    If you have any questions please feel free to drop them below!
    Please don't forget to like and subscribe for more interesting content like this🔥

  • @endresbielefeldt2050
    @endresbielefeldt2050 2 months ago +3

    thank you for the amazing content!

  • @mentalview8703
    @mentalview8703 2 months ago +1

    Great video bro. Keep it up 👍

    • @codewithbro95
      @codewithbro95  2 months ago +1

      Thanks, really appreciate 🙌🏾

  • @edmondgoddy
    @edmondgoddy 1 month ago +1

    1K Subs. Congrats bro

    • @codewithbro95
      @codewithbro95  1 month ago +2

      @@edmondgoddy thanks man, really appreciate the support 🙌🏾🙌🏾

  • @mbegangsylvain1076
    @mbegangsylvain1076 2 months ago +1

    love it !!!

    • @codewithbro95
      @codewithbro95  2 months ago +1

      Glad you love it... Please, don't forget to like and subscribe for more interesting content like this one🔥😎

  • @theMonkeyMonkey
    @theMonkeyMonkey 2 months ago +2

    Your English is excellent. May I make a suggestion: Python is not pronounced "pie-ton" but "pie-thon", with the 'th' being the same as the 'th' in 'this'.

  • @DenzilSheldon
    @DenzilSheldon 1 month ago +1

    Wow, amazing!
    Question: roughly how much faster is it estimated to be than the Python version?
    Thanks a lot!

    • @codewithbro95
      @codewithbro95  29 days ago +1

      No specific data on that, but after trying both I'd say it's about 5x faster at transcription
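The ~5x figure above is anecdotal; one way to check it yourself is to time both CLIs on the same 16 kHz WAV file and model size. A sketch, assuming openai-whisper is installed via pip, whisper.cpp is built as in the video, and `audio.wav` is a placeholder for your own recording:

```shell
# Reference Python implementation (openai-whisper CLI)
time whisper audio.wav --model base.en --language en

# whisper.cpp on the same file with the matching ggml model
time ./main -m models/ggml-base.en.bin -f audio.wav
```

Comparing wall-clock times for the same input keeps the comparison fair; results will vary with hardware, model size, and audio length.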

  • @JackieUUU
    @JackieUUU 2 months ago +3

    Amazing! What GPU are you running? Or is it on CPU?

    • @codewithbro95
      @codewithbro95  2 months ago +4

      Running on a Mac with an M1 chip and 8-core GPU. I believe whisper.cpp makes use of Metal on macOS

  • @Jeka476
    @Jeka476 9 days ago +1

    Why is there a black screen in the middle of the video?

    • @codewithbro95
      @codewithbro95  9 days ago +1

      Hey man, apologies for this, that should have been spotted before publishing.
      Sorry!

  • @RoarStaze
    @RoarStaze 1 month ago +1

    How do you get the make command to work on Windows? I ran make but I just get an error saying cc not found, and someone said gcc=cc, but I don't know how to do anything from there

    • @codewithbro95
      @codewithbro95  1 month ago +1

      @@RoarStaze I haven't tried it on Windows yet, but from the error you got, I believe you have to install gcc on your Windows machine

    • @RoarStaze
      @RoarStaze 1 month ago

      @@codewithbro95 I do have gcc. Someone said I need to set gcc=cc, but I've no idea how to do that
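The "gcc=cc" advice in this thread most likely refers to overriding make's default C compiler: on Windows there is usually no `cc` binary, but make lets you set the `CC` variable on the command line, so no file edits are needed. A sketch, assuming gcc is already installed and on PATH:

```shell
# Tell make to use gcc (and g++) wherever the Makefile invokes $(CC)/$(CXX)
make CC=gcc CXX=g++
```

An equivalent alternative is exporting `CC=gcc` in the shell before running plain `make`; command-line variable assignments simply take precedence over the Makefile's defaults.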

  • @gnosisdg8497
    @gnosisdg8497 2 months ago +4

    Can you pair this offline Whisper with a local LLM, say Phi-3, to get replies based on the Whisper output? I mean, let's see how fast it can actually put out the LLM's reply. That way you could make a 100% local, offline AI assistant with no latency in responses

    • @codewithbro95
      @codewithbro95  2 months ago +5

      I am actually working on something like this; check out my recent videos on Jarvis. I am building Jarvis so you don't have to

    • @gnosisdg8497
      @gnosisdg8497 2 months ago +2

      @@codewithbro95 Cool, nice job, keep it up. Can you also add a way to use the Phi-3 LLM with phidata for local RAG, plus options for reading CSV, PDF, and Word documents? That would give you a lot of views too; we're talking about an actually useful AI assistant with these abilities!

    • @codewithbro95
      @codewithbro95  2 months ago +1

      @@gnosisdg8497 Definitely something I am looking to work on, stay tuned!
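The offline-assistant idea discussed in this thread can be sketched by chaining the two tools: write the transcript to a text file, then pipe it into a local LLM runner. The sketch below assumes whisper.cpp is built as in the video and that Ollama is installed with the phi3 model pulled (both assumptions — any local LLM CLI that reads stdin would work the same way):

```shell
# 1. Transcribe the recording offline; -otxt writes question.wav.txt alongside the input
./main -m models/ggml-base.en.bin -f question.wav -otxt

# 2. Feed the transcript to a local LLM for a reply, still fully offline
cat question.wav.txt | ollama run phi3
```

A real assistant would loop this with microphone capture, but the pipeline above is the core: speech in, local transcription, local generation, no network calls.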

  • @siddharthchadha3930
    @siddharthchadha3930 1 month ago +1

    Thanks! Your video goes blank in the middle for a little bit

    • @codewithbro95
      @codewithbro95  1 month ago +1

      @@siddharthchadha3930 Really? Didn't notice that. Apologies nonetheless

    • @HimanshuChanda
      @HimanshuChanda 21 days ago

      @@codewithbro95 from 06:13 onwards

  • @snatvb
    @snatvb 2 months ago +1

    I'm waiting for TTS (text to speech) at the same speed; it would be great to have

    • @codewithbro95
      @codewithbro95  2 months ago +1

      Not sure I understand what you mean!

    • @snatvb
      @snatvb 2 months ago +1

      @@codewithbro95 We have the option to recognize speech to text in real time, but text to speech is really slow right now

    • @codewithbro95
      @codewithbro95  2 months ago +1

      @@snatvb Definitely agree with you; TTS inference is very slow at the moment. Though I recently stumbled on a really promising project called ChatTTS, which is apparently being built specifically for this purpose. I haven't tried it yet; maybe I will and make a video on it.

    • @snatvb
      @snatvb 2 months ago

      @@codewithbro95 Yep, I've seen it recently. I tried Bark from Suno and it works pretty slowly (I have an RTX 3070), and sometimes it voices text the LLM imagined instead of what I gave it :D

  • @ToMooNoT
    @ToMooNoT 2 months ago +1

    Hi, noob here. Trying to figure out how to get `make` working from the VSCode terminal on Windows. So far I installed MSYS2 and added C:\msys64\usr\bin and C:\msys64\mingw64\bin to the PATH environment variable, but it still says the command is not recognized.

    • @RoarStaze
      @RoarStaze 1 month ago +1

      Same, did you find a fix?

    • @codewithbro95
      @codewithbro95  1 month ago +1

      @@ToMooNoT Does it work outside of VSCode, i.e. in the normal terminal?

    • @ToMooNoT
      @ToMooNoT 1 month ago

      @@codewithbro95 I had to install Visual Studio and build the C code from there or something, but it didn't build the microphone example, and I don't know how to add it to the build step, so I kinda gave up. I was also trying to get my AMD GPU to work with ZLUDA, which is a library that should make CUDA code work on AMD cards, but no luck there either, even with AI helping with troubleshooting.
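For the MSYS2 route described in this thread, the usual missing pieces are that `make` and gcc are not installed by default and that the terminal must be reopened after changing PATH. A hedged sketch, using the standard MSYS2/MINGW64 package names:

```shell
# Inside the MSYS2 shell: install make and the MinGW-w64 gcc toolchain
pacman -S --needed make mingw-w64-x86_64-gcc

# Back in a *new* VSCode terminal, confirm both are now found on PATH
make --version
gcc --version

# Then build whisper.cpp, pointing make at gcc explicitly since Windows has no cc
make CC=gcc
```

If the version checks still fail, the PATH change has not reached the terminal session; restarting VSCode itself (not just the terminal tab) usually fixes that.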