Tip: You can transform your device's audio output into a "microphone" on Windows, so you don't need to place your headphones over your microphone.
1. Press Windows key + R, type "mmsys.cpl", and press Enter.
2. In the Recording tab, enable the Stereo Mix option. Now, "Stereo Mix" is an available microphone option! You can select it as the audio input.
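If you are wondering how to select that device from Python (a few people asked below), here is a minimal sketch assuming the recording side uses PyAudio; the device name "Stereo Mix", the sample rate and the chunk size are assumptions, so check the printed device list on your own machine:

import pyaudio

def find_input_device(p, name_fragment="Stereo Mix"):
    # Return the index of the first input device whose name contains name_fragment.
    for i in range(p.get_device_count()):
        info = p.get_device_info_by_index(i)
        if info.get("maxInputChannels", 0) > 0 and name_fragment.lower() in info["name"].lower():
            return i
    return None

p = pyaudio.PyAudio()
idx = find_input_device(p)
if idx is None:
    raise RuntimeError("Stereo Mix not found - make sure it is enabled in mmsys.cpl")

# Open the loopback device like a normal microphone (16 kHz mono is what Whisper expects).
stream = p.open(format=pyaudio.paInt16, channels=1, rate=16000,
                input=True, input_device_index=idx, frames_per_buffer=1024)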
this really helped me! Thank you!
this is a great idea, I was using VoiceMeeter as a virtual audio device and it's complicated to use
I enabled the microphone but I don't know how to select it in the code. It doesn't hear anything when I start the app.
Epic! - These videos are some of the best stuff on YouTube - love the idea with the image generation at the end
This is amazing and inspiring. I love the ending of the video and can’t wait for Wednesday. As a dyslexic person I think you unlocked a new use case for learning.
how to get the code for this?
Pulling people in with a flashy thumbnail of Python code that works, and then trying to monetize your code based on a library that is already supposed to be open source, is in my opinion BS. It is not fair to beginners who might not know Python or Whisper very well. For that I give you a thumbs down!
Wow, an AI channel scamming people? Who would’ve ever heard of such a thing!
Tired of the fucking grifternet man, how did this happen?
for real this is a scam, the code is on GitHub for free
I have tried to get this to run on an M1 MacBook. No joy. The CPU maxes out even with the tiny model. But then I tried the whisper.cpp implementation, which is compiled for Apple Silicon, and found a whisper-cpp-python wrapper for that library. That actually runs and is far less CPU-bound. It has a bit of a stutter, it is not as clean, and it misses words between chunk processing, but you can see that with just a little more power it could work.
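If the Python wrapper fights you, another route (not what the video uses, just my own sketch) is to shell out to the compiled whisper.cpp binary from Python; the binary and model paths below are assumptions for a default whisper.cpp build, and the input has to be a 16 kHz mono WAV:

import subprocess
from pathlib import Path

# Assumed paths - point these at your own whisper.cpp checkout and ggml model.
WHISPER_CPP_BIN = Path("~/whisper.cpp/main").expanduser()
MODEL = Path("~/whisper.cpp/models/ggml-base.en.bin").expanduser()

def transcribe_file(wav_path: str) -> str:
    # Run whisper.cpp on a 16 kHz mono WAV file and return the transcript it prints.
    result = subprocess.run(
        [str(WHISPER_CPP_BIN), "-m", str(MODEL), "-f", wav_path, "-nt"],  # -nt = no timestamps
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print(transcribe_file("chunk.wav"))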
Hi Seven, could you please share your code with me? Thank you very much!
There is a product opportunity for live video transcription here. Live captioning services are expensive and do not work for many languages. Set up a server/service that ingests an RTMP video source, delays the video, and overlays text on the video in perfect sync, then offer RTMP output with burned-in live text. :) There is a need for this service.
Fantastic !!! A bit fast in explaining and showing, but I can always pause!
It would be good to see it transcribe and generate responses as audio in real time for phone calls.
What if, instead of putting the headphones over the mic to receive the signal, you want to send the voice from another app, like YouTube or Zoom?
Nice video!! thanks for your help in this topics!!
Hey man, this is really cool! I'd like to know:
1) Did you use the Whisper v3 model or v2?
2) Have you seen the demos from GPT-4? They also showed that GPT's ASR is better than Whisper v3; I wonder if it will be open like Whisper.
Amazing and inspiring work! Kris, what about something less powerful but more accessible in terms of hardware?
Excellent! Thank you so much for sharing!
Hello, and great to see this kind of content.
I actually have a question about speech to text in another language, for example Swedish, and passing it through Llama for correction, maybe for a meeting or conference or something like that. What do you suggest?
Great video! Thanks for going through this in such an easy-to-understand way! Can you share the python scripts?
I have been looking for where to start - fantastic work! Where can I get the code for testing?
5:51 Neutral = I'm gonna go troll now. Funny stuff, great video! Thanks
Awesome bro! ❤
thanks this is great! Where can I find the actual code you have on your screen? Struggling to find it on the github
Go Sweden! Well done!
Do you think this could be used to transcribe, for example, phone calls made through the browser? I would greatly appreciate your response :)
Interesting stuff on the image creation at the end while talking. Not sure if you are taking the punctuation in your sentences into consideration? I'm pretty sure something cool could be done with that, maybe keeping an overview of all the text that has moved out of the "buffer" for style? Looks like something I could have a lot of fun with; I don't have the GPU though :/ Colab, however.
You are really very good.
How can I get the code you used in this video? I need the same code.
can it translate?
faster whisper or whisper turbo?
This will be a good tool for language immersion (Chinese/Japanese/Indonesian), along with the DeepL clipboard tool and the Edge browser's TTS engine.
Faster-Whisper and Insanely Fast Whisper don't seem to have AMD GPU support yet, so I had to go with an alternative for the 7900 XT: whisper.cpp with CUDA/HIP plus a distilled Whisper model. Seriously, this combination is kind of real-time too, even when using distil-large-v2. There is a downside, though: TTS and Whisper on the GPU gobble up about 8 GB of VRAM, which limits the LLM model I can use at the same time.
@Kris: I already joined as an Adept member on Jan 18th, 2024, and requested access to the GitHub repo via email and also via Discord, but I have not had any response from you yet.
I want to do speech to text on audio from the browser speaker, not from the mic. How can we do that in real time?
How does the transcription performance compare to assemblyAI?
Running fully locally is one thing... doing this via the Web Audio API towards a backend is a different topic. Is there any implementation foreseen for that as well?
The accuracy sucks. Many words are incorrect which you can see in the image itself.
This isn't usable in the real world.
I think there's an even faster whisper module but I forget what it's called
did you find out?
I love your videos, man. Please make a video about running faster-whisper as a Docker API, please.
Thanks for sharing your knowledge/experience.
I'm a bit perplexed. The description here mentions 45+ prompts in the PDF book, the newsletter website says 40+, and the PDF itself says 35+. Which number is correct?
None, it's a scam.
If there is any way to translate this text into other languages, that would be awesome.
where do I get the code sir?
I became a member; how do I get access to the code and the GitHub for this?
hello :D send me an e-mail at kris@allabtai.com
Hi,
Can I get the GitHub repo of the above code?
Thanks
how can we identify different speakers?
Microsoft Copilot can transcribe a Teams call recording. It can't simply be a call; it needs to be a meeting call... subtle difference. Try 'Meet now' in the Teams calendar view, or make a calendar event.
Which gpu are you using ?
could you do another demo to see how it can translate in real time?
Yes! There are no really good or fast translation apps available. YouTube auto-translate is horrible!
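For the translation questions in this thread: Whisper itself can translate speech, but only into English, as part of transcription. A minimal sketch assuming faster-whisper is the backend (the model size and file name are just placeholders):

from faster_whisper import WhisperModel

# task="translate" makes Whisper output English text regardless of the spoken language.
model = WhisperModel("large-v2", device="cuda", compute_type="float16")
segments, info = model.transcribe("speech_in_any_language.wav", task="translate")

print(f"Detected language: {info.language}")
for segment in segments:
    print(segment.text)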
Can we get the code used in this video? That would be really helpful.
I'm confused. Can you please create a step-by-step video and provide the code as well?
Would it be possible to do speaker recognition and then pipe it into translation?
Can anyone please help me with how to run this code? I'm trying it, but it doesn't work the way it is described, with zero latency.
Is there any GitHub link for access?
Hi Kris! I love what you do. I would like to become a member of your channel, but I can't access the page to subscribe. Do you have a direct link? The one in the description doesn't work for me. Have a good day!
Wow!! Great video!!! Thank you for being so generous and teaching this to us, this is epic stuff! I can already see all kinds of use cases. I can't wait to get it running, and I'm really looking forward to Wednesday's video. Thanks again from Canada.
What is the transcribe_chunk function in the code? It seems that it's not from faster_whisper?
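I don't have the repo either, but judging from the video it looks like a small helper the author wrote around faster-whisper rather than something the library provides. A guess at what it might look like - the name, parameters and model size here are assumptions, not the actual code:

from faster_whisper import WhisperModel

model = WhisperModel("medium", device="cuda", compute_type="float16")

def transcribe_chunk(model, wav_path):
    # Transcribe one short recorded chunk and join its segments into a single string.
    segments, _info = model.transcribe(wav_path, beam_size=5, language="en")
    return " ".join(segment.text.strip() for segment in segments)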
Can I do this with JavaScript?
Now make it translate and do phone calls.
Noooo... please no. We have plenty of auto-callers already.
@@rne1223
Where?
Hello ur computer has a virus
I can't find the script for the real-time translation. Please help me find it :((
That is awesome. So I am trying to do something like this. My sister is deaf and I want something that can also label who is speaking, so for a small group it will say user 1, user 2, user 3, and whoever is speaking, it will let her know. Do you think that is possible? How could I do that? I have everything but that last part.
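Whisper alone doesn't label speakers, but what you are describing is speaker diarization, and pyannote.audio does exactly that. This isn't from the video, just a rough sketch of the offline building block, assuming you have a Hugging Face token with access to the pyannote model; doing it live for a group conversation is harder, but the idea is to pair these speaker turns with Whisper's segment timestamps:

from pyannote.audio import Pipeline

# Requires accepting the model terms on Hugging Face and using your own token.
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="hf_xxx",  # placeholder - replace with your real token
)

diarization = pipeline("meeting.wav")

# Each turn says who spoke when; match these time ranges against Whisper's
# segment timestamps to label the transcript as user 1 / user 2 / user 3.
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{speaker}: {turn.start:.1f}s - {turn.end:.1f}s")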
I might be jaded but... I mean really, how about an AI that calculates the probability of drone attacks or artillery attacks? How about an AI that calculates the probability of soldiers hiding in terrain? I mean, there are already good search algorithms out there, that one may-or-may-not use to carry out artillery strikes. I'm just thinking aloud here. Probably nothing.
Can I convert this code to C++ and run it on an Arduino without an API?
where do i get the setup/python code
How do I access your source code as a paid channel member?
Is there a way to connect a live streaming url?
Hello. I'm a beginner in this field. How can I get your code to refer to? Thank you.
Can I get the code of it?
where can we get the code
github linked in the description.
How do we join your community?
Link in desc :) youtube member
@@AllAboutAI I just subscribed to your channel but am not getting the GitHub code.
The sentiment analysis really scares me. I mean, there's absolutely no chance that'll be abused by big tech in terms of political marketing. I mean, like, there's no way in hell right?
Has anyone updated the code from the previous video to use this recording method instead?
Hi, I'm a subscriber but I do not have access to your GitHub. Can you help me, please?
Can this run on raspberry pi?
cannot find the code in github
Kris, you are a genius. Real-time speech transcription can do a lot of things, and the last example is great. I can't wait to watch the video released on Wednesday. My computer is a Mac with an M-series chip. I found the code in your GitHub and changed it to run on the CPU. Later, some problems occurred, such as incomplete transcribed content and an OSError. Can you release a version suitable for Mac computers? Grateful!
Hey, it's in your video description, therefore easily fixed: the word is "transcription". Why not avoid the irony of a video that extols modern AI voice to text ... transcription ... in which the AI engine will surely avoid this mistake, and at the speed of light.
How can I get access to this code?
just joined. would be good to get my grubby paws on the files for this.
how can we download this script?
Bro, can you put up a video about live-streaming voice to text?
where can we find the code that you used?
github linked in the description
Can you please share your code?
Does it support speaker diarization?
Can we get the source code?
All I get is "Thank you! Thank you! Thank you!" as my transcribed output... so weird.
hey, same problem here, actually exact same problem, have you figured it out?
@@naczelnyh8rpolskiegoyt167 Yes I did. I went to Sound Recorder and made a test to see what was actually being recorded and playing it back. There was No Sound. Windows wasn't recording anything for some reason. I guess when nothing is recorded, Whisper hallucinates "Thank You" or sometimes just "You." So weird. But anyway, had to find a way to get the mic that this app was working with to record sound. So I would investigate that route, find out if the mic that this app is accessing is actually hearing anything at all.
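One extra safeguard for this, in case the device records but only picks up silence: measure the energy of each recorded chunk and skip Whisper when there is nothing there, since Whisper tends to hallucinate "Thank you." on empty audio. A minimal sketch with numpy; the threshold is an assumption you would tune for your mic and gain:

import numpy as np

SILENCE_RMS_THRESHOLD = 0.01  # assumption - tune for your microphone

def is_silent(chunk_bytes):
    # Return True if a chunk of 16-bit PCM audio is effectively silence.
    samples = np.frombuffer(chunk_bytes, dtype=np.int16).astype(np.float32) / 32768.0
    rms = np.sqrt(np.mean(samples ** 2)) if samples.size else 0.0
    return rms < SILENCE_RMS_THRESHOLD

# In the recording loop: only hand the chunk to Whisper if it is not silent.
# if not is_silent(chunk_bytes):
#     text = transcribe_chunk(model, chunk_path)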
Impressive, thank you.
Well done!
I am a member but I can't access the GitHub. Please HELP!
this is my GitHub
maxaxaxaxxaxaxaax
Waiting for the in-depth video :) By the way, your Discord invite link has expired.
This is a presentation, not a tutorial.
Please provide the code.
how do i get access to the github?? TAKE MY MONEY!
lol no but seriously how
But is it free?
Zero latency? I have been checking your video timeline; the terminal output and the audio do not correspond. You must be living in a world 1-2 seconds ahead of our timeline. 😅
Where is the link to the source code? Thanks, this is amazing!
did you get the code
no @@nafila5084
@@nafila5084 Can you share the code with me as well?
pip install patience and kindness
transcriPtion
🧡
BROOOO 🎉 FIRST
🎈
What a disgusting practice, hiding very basic and poorly written code behind a paywall. No effort, no skill - taking GPT-generated code built on million-dollar investments shared for free and slamming it behind a paywall is as low as you can get as a YouTuber. But you don't care.
It is BS to monetize open-source code! So sorry for you and your kind... unsubscribed.
Can you use different languages?
Can you tell me the solution to this error: Could not load library cudnn_ops_infer64_8.dll. Error code 126.
Please make sure cudnn_ops_infer64_8.dll is in your library path!
try "pip install nvidia-cudnn-cu12"
it didn't work @@劉育安
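If the wheel installs but the DLL still isn't found, it is usually because the pip-installed cuDNN directory isn't on the Windows DLL search path. A sketch of one workaround; whether the nvidia.cudnn package layout matches this on your install is an assumption, so check the actual folder under site-packages:

import os
import nvidia.cudnn  # installed by "pip install nvidia-cudnn-cu12"

# Tell Windows where the pip-installed cuDNN DLLs live before importing faster_whisper.
cudnn_bin = os.path.join(os.path.dirname(nvidia.cudnn.__file__), "bin")
os.add_dll_directory(cudnn_bin)

from faster_whisper import WhisperModel
model = WhisperModel("base", device="cuda", compute_type="float16")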
I have registered as a member, please check your email