it took me 6-7 days to run this program fully ........and thankyu so much for this video.....this videos is easy to understand for beginners
can you please send me the github link
did u get the link bro@@mounikatelagamsetti4788
Bro...can you send github link
Hii Mam/Sir I too started this project but it is not working so kindly guide me to complete this project Thank you
I started an identical project a while ago, but couldn't finish it ... After seeing this video, I finally finished my project.
Thank You
He brother i need your help regarding the project as i have same related project
How can I contact you? I also have the same project.
@@HELLO-cz4vm Through Linkedin
Bhai source code de do plz yr
hello, i need your help with this project, I am having an error, how can I contact you?
Thanks a lot! This was very easy to follow along indeed.
In case anyone's teachable machine page isn't training, try using it on a browser with no added extensions, and clear cache.
```
python[27190:320561] WARNING: AVCaptureDeviceTypeExternal is deprecated for Continuity Cameras. Please use AVCaptureDeviceTypeContinuityCamera and add NSCameraUseContinuityCameraDeviceType to your Info.plist.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1712824494.653097 320561 gl_context.cc:357] GL version: 2.1 (2.1 Metal - 86), renderer: Apple M1
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Traceback (most recent call last):
  File "/Users/oorjamishra/miniforge3/envs/moon/lib/python3.10/site-packages/keras/src/ops/operation.py", line 208, in from_config
    return cls(**config)
  File "/Users/oorjamishra/miniforge3/envs/moon/lib/python3.10/site-packages/keras/src/layers/convolutional/depthwise_conv2d.py", line 118, in __init__
    super().__init__(
  File "/Users/oorjamishra/miniforge3/envs/moon/lib/python3.10/site-packages/keras/src/layers/convolutional/base_depthwise_conv.py", line 106, in __init__
    super().__init__(
  File "/Users/oorjamishra/miniforge3/envs/moon/lib/python3.10/site-packages/keras/src/layers/layer.py", line 263, in __init__
    raise ValueError(
ValueError: Unrecognized keyword arguments passed to DepthwiseConv2D: {'groups': 1}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/oorjamishra/Desktop/hand ASL tracker/test.py", line 12, in <module>
    classifier = Classifier("model/keras_model.h5", "model/labels.txt")
  File "/Users/oorjamishra/miniforge3/envs/moon/lib/python3.10/site-packages/cvzone/ClassificationModule.py", line 29, in __init__
    self.model = tensorflow.keras.models.load_model(self.model_path)
  File "/Users/oorjamishra/miniforge3/envs/moon/lib/python3.10/site-packages/keras/src/saving/saving_api.py", line 183, in load_model
    return legacy_h5_format.load_model_from_hdf5(filepath)
  File "/Users/oorjamishra/miniforge3/envs/moon/lib/python3.10/site-packages/keras/src/legacy/saving/legacy_h5_format.py", line 133, in load_model_from_hdf5
    model = saving_utils.model_from_config(
  File "/Users/oorjamishra/miniforge3/envs/moon/lib/python3.10/site-packages/keras/src/legacy/saving/saving_utils.py", line 85, in model_from_config
    return serialization.deserialize_keras_object(
  File "/Users/oorjamishra/miniforge3/envs/moon/lib/python3.10/site-packages/keras/src/legacy/saving/serialization.py", line 495, in deserialize_keras_object
    deserialized_obj = cls.from_config(
  File "/Users/oorjamishra/miniforge3/envs/moon/lib/python3.10/site-packages/keras/src/models/sequential.py", line 326, in from_config
    layer = saving_utils.model_from_config(
```
Please help, I get this type of error.
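This error happens because newer Keras versions reject the legacy `groups` argument that older Teachable Machine `.h5` exports store in their `DepthwiseConv2D` layer configs. One common workaround is to strip that key out of the saved model config before loading. This is only a sketch: `strip_groups` is a hypothetical helper, not part of Keras or cvzone.

```python
def strip_groups(config):
    """Recursively remove the legacy 'groups' key from DepthwiseConv2D
    layer configs in a Keras model-config dict, so newer Keras versions
    can deserialize an old Teachable Machine export."""
    if isinstance(config, dict):
        if config.get("class_name") == "DepthwiseConv2D":
            config.get("config", {}).pop("groups", None)
        for value in config.values():
            strip_groups(value)
    elif isinstance(config, list):
        for item in config:
            strip_groups(item)
    return config

# Applying it in place to the .h5 file would look roughly like this
# (requires h5py; do it on a copy of the model file):
#   import h5py, json
#   with h5py.File("model/keras_model.h5", "r+") as f:
#       cfg = json.loads(f.attrs["model_config"])
#       f.attrs["model_config"] = json.dumps(strip_groups(cfg))
```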
Absolutely marvelous! Everything worked as demonstrated, even the errors, and your resolutions on your errors worked for me too. Thank you very much sir. Could you show us how to do the learning on PCs instead of Teachable Machine? Also, what is the difference in the different formats you can save your model in, where and how are they used and how to convert from one format (say h5) to another, say tfjs. Thanks again for taking your time and putting in a lot of effort to make this kind of very quality video! Cheers from Nairobi, Kenya!
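On the h5-to-tfjs part of the question: Keras `.h5` models can be converted with the `tensorflowjs_converter` CLI from the `tensorflowjs` package. A rough sketch (the paths are placeholders for your own model and output directory):

```shell
# Install the converter, then turn the Keras HDF5 model into a tfjs bundle
pip install tensorflowjs
tensorflowjs_converter --input_format=keras model/keras_model.h5 model/tfjs_model
```

The output directory then contains a `model.json` plus weight shards that can be loaded in the browser with TensorFlow.js.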
This is without a doubt the best sign language recognition tutorial out there. Believe me when I say this because I have seen so many. Thank you for saving my career. God bless you.
hello, sorry for asking, but did you manage to get it to work? :C
Well, I did not try with C but I used other signs and they worked well @@kiaraagustinaalegreencinas9458
Why is my test code not working? It gives the same output as the data-collection code: it keeps saving while I press "s" and makes no prediction. The imgWhite output still appears with the label "Right", not "A" or "B".
Great video! Very clear on how to solve this problem. Is it possible to give an example on how to train for video classification to identify movements, like PySlowFast? Tks
Your next tutorial should be how to program AI to understand ASL words, phrases, and sentences. It'd be multithreaded where one thread understands letters, while another is always keeping a log of the last few gestures in real life to understand how to connect them together as a phrase.. Something like that. I'm not an expert with AI. I believe that is kind of how live closed captioning works on RUclips.
This video is my big wish, thank you for that. But you used ready-made functions from cvzone. Could you build the model manually yourself, maybe for detecting the direction a person's eyes are staring?
Hello sir, I have seen the entire tutorial. It was great, and the way you explain every step is awesome, but I am facing a problem in saving images. What should I do?
I am having an issue with the Keras model. Can anyone tell me which versions of Keras and TensorFlow the model was trained with?
Love your handtracking tutorials. Always tend to fascinate me and inspire my work in my job. I really love your videos Mr. Murtaza!
Very helpful video ,what if we want 2 hand images. what will be the code for it sir
Please make a video on sign language detection that converts ASL to sentences.
Hi sir ! I saw the entire tutorial ,you not only taught us the concepts of sign language detection but also u taught the perfection. Thank you sir :)
Never saw Chinese drama this was 1st one soo well made and especially this part is soo emotionally attached....chai xaoqi and Fang leng wonderful love from India❤..
bro what
@@albsjalbs2251exactly 😂
know where to start a new soft and I didn't know how to switch from soft to . You are the best THANK YOU FOR NOT
This is the video I've been looking for! Thank you for your work!
Great tutorial, Just wondering does that detector module detect at larger distance ?
Very helpful, and surprisingly therapeutic
Thank you for using automatic code formatter. Btw, what is cvzone? Is it your own library?
It's his website name
Yes, I think it's his own library. We have to install that package.
Thanks a lot sir. It's really easy to code and learn at the same time.
Which part of the code should i modify for data collection and testing for create a project using 2 hands
What system is this running on? is there a link to where pycharm and supporting software are installed?
The system predicts the value correctly, but multiple times for the same sign because of the recognition speed. How can I take only one value per gesture, slow down the recognition, or recognize only once at equal intervals of time and store the recognized values somewhere?
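One way to get a single value per gesture is to debounce the classifier output: accept a label only after it has been stable for several consecutive frames, and at most once per cooldown interval. A minimal sketch; the `GestureDebouncer` class and its parameters are hypothetical, not part of cvzone:

```python
import time

class GestureDebouncer:
    """Emit a label only after it has been stable for `hold_frames`
    consecutive frames, and at most once per `cooldown` seconds."""

    def __init__(self, hold_frames=10, cooldown=1.0, clock=time.monotonic):
        self.hold_frames = hold_frames
        self.cooldown = cooldown
        self.clock = clock
        self._last_label = None
        self._stable_count = 0
        self._last_emit = float("-inf")

    def update(self, label):
        # Count how many consecutive frames produced the same label.
        if label == self._last_label:
            self._stable_count += 1
        else:
            self._last_label = label
            self._stable_count = 1
        now = self.clock()
        if (self._stable_count >= self.hold_frames
                and now - self._last_emit >= self.cooldown):
            self._last_emit = now
            return label  # accepted: record this gesture once
        return None  # ignored: unstable or still within the cooldown
```

In the main loop you would call `debouncer.update(labels[index])` every frame and only store the returned label when it is not `None`.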
Thanks. Tried kasp last time.. so iobit is a better app it seems..
Ty I got my first divine because of you
Sir my tensorflow version is 2.16.1 and the code is not running what should I do.
try running it in pycharm if you are running it in some other IDE
Do we have to install some other libraries before TensorFlow? I use Python 3.7 and PyCharm doesn't recognize TensorFlow. I guess I installed TensorFlow, but it doesn't detect the letters after that.
It only identifies the hand, not the trained characters.
Does anyone know how to fix this?
I have the same issue... so frustrating.
I am unable to install TensorFlow. I think my Python version is old. Please share a way to install TensorFlow!
Will stay tuned to your channal! Cheers!
This is very helpful for me.
Please what is the laptop requirements for this.
Thank you so much
Do we need to resize the bounding box? Because our images in the training data all have the same size.
hi sir ,the algorithm of this is style augmentation? right
Sir, what method does this project use?
what should I add to detect two hands together?
Thanks man, really easy and interesting project.
What should I do if I want to use 2 hands instead of 1 hand?
@Sahil Sawant yeah that part is easy, that's just how you detect 2 hands. But how do you get one bounding box for both hands? And then cropping it and everything
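A minimal way to get one box around both hands is to merge the per-hand boxes into their union and crop once. A sketch: `union_bbox` is a hypothetical helper, assuming each detected hand exposes an `(x, y, w, h)` bounding box the way cvzone's `HandDetector` reports `hand["bbox"]`.

```python
def union_bbox(bboxes):
    """Merge several (x, y, w, h) boxes into one box covering them all."""
    x1 = min(x for x, y, w, h in bboxes)
    y1 = min(y for x, y, w, h in bboxes)
    x2 = max(x + w for x, y, w, h in bboxes)
    y2 = max(y + h for x, y, w, h in bboxes)
    return x1, y1, x2 - x1, y2 - y1
```

With `hands, img = detector.findHands(img)` you would pass `[hand["bbox"] for hand in hands]` to this helper and then crop the frame once with the merged box (still applying the usual offset and bounds checks).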
I once did this with Google's nocode ML kit environment but it's better to have your own model.
Thank You For Teaching Us Brother
Can you please make one for recognizing old video games and giving a price? Some old video games are worth alot
How about linking some of your video into Home Assistant for home automation!
You are the best! Thank you!
Sir, how to display the accuracy
What algorithm do you use?
Thanks a lot
How can I remove the points and lines drawn on the detected hand, so I can pass the image to the model without them?
Sir you have used teachable machine to train the model. If i want to train the model in the code itself how can i do that? can someone help?
This was super helpful, dude! I got the tutorial version of soft just to get a taste, and after figuring it out I decided to purchase the
thank you sir osum
Hi, did everything go well for you?
Perfect 🤩 thanks
How can we check the accuracy percentage of this algorithm?
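To show a confidence percentage next to the predicted letter, you can use the per-class probability list the classifier returns (cvzone's `getPrediction` returns a prediction list plus the winning index). A small sketch; `top_prediction` is a hypothetical helper, not part of cvzone:

```python
def top_prediction(probs, labels):
    """Return (label, confidence_percent) for the most likely class,
    given a per-class probability list and the matching label names."""
    i = max(range(len(probs)), key=probs.__getitem__)
    return labels[i], round(100 * probs[i], 1)
```

You could then draw the result on the frame, e.g. `f"{label} {conf}%"`, instead of the bare letter. Note this is the model's confidence for one frame, not the overall test-set accuracy, which you would measure on a held-out set of labeled images.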
During the cropping of the image I keep getting this error: "(-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'"
Same, could you solve it ?
@@leticiabatista122 noo, I'm still searching to get it done...but not very lucky
change the camera to (0)
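That assertion usually means `imgCrop` came out empty because the offset pushed the slice outside the frame. A sketch of a clamped crop that can never produce an empty slice; `safe_crop` is a hypothetical helper, not from the tutorial:

```python
import numpy as np

def safe_crop(img, x, y, w, h, offset=20):
    """Clamp the crop window to the image bounds so the resulting slice
    is never empty (an empty array is what makes cv2.imshow and
    cv2.resize raise the size assertion)."""
    H, W = img.shape[:2]
    y1, y2 = max(y - offset, 0), min(y + h + offset, H)
    x1, x2 = max(x - offset, 0), min(x + w + offset, W)
    return img[y1:y2, x1:x2]
```

In the tutorial's loop you would replace the direct `img[y-offset:y+h+offset, x-offset:x+w+offset]` slice with this call, or at minimum check `imgCrop.size != 0` before showing or resizing it.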
THANK YOU FOR THIS!!!
Sir, why don't you use Jupyter Notebook or Google Colab to make projects? Most students do not have a GPU in their computers, so they can't use PyCharm. Please, can you use Google Colab or Jupyter Notebook? Because sometimes the project fails when we try to implement in Google Colab the same project you taught in PyCharm.
You don't actually need a GPU for this project.
All the computation can be done by the CPU, however depending on your CPU it may be slower.
@@B1NT0N I am talking about all the other projects not only this.
can i do this as a deep learning project
For people who want to fix the error at ruclips.net/video/wa2ARoUUdU8/видео.html
```
if x > 0 + offset and y > 0 + offset and w > 0 + offset and h > 0 + offset:
    cv2.imshow("Croped", imgCrop)
```
Just add this where you show your cropped image.
Thank You very much
Hi, thanks for your input.
Did you install any other libraries or dependencies before installing TensorFlow (like CUDA or cuDNN)? My system doesn't recognize TensorFlow.
I'd appreciate it if you can help me.
Which part of the code should I put this? need help
@@dibyajyotigoswami2151 How did you fix it sir??
thank u ur an amazing programer
Hello sir I want to do in two hands
A question: do you use PyCharm Professional or Community Edition?
Thank you, sir!
Can you explain how to detect colors, so that colors can be recognized and named?
Can you do the letter like typing into sentence
great.. good job keep it up
But if I use all 26 letters, it doesn't work properly. I trained 26 letters and it gives the wrong output.
Hello I purchased your computer game development advanced course. But it does not come with a github link. Can you add a github link to that?
Can someone say what algorithm is used here?And can we have the code for the model that we download?
I don't know why, but when teaching B the program doesn't detect it, and it's the same with many letters.
THANK YOU SO MUCH
please make a tutorial for beginners to learn programming
Sir, the TensorFlow version you used in the video (2.9.1) isn't available anymore, due to which the model is throwing errors. What should I do?
Did you find a solution? Same problem here.
@@crazyworld5124 Not till now !
My code is predicting the output but it always gives 5, r,x or z only in the output. What should i do?
in which app all this code is done can u please tell
Amazing ✨✨
Can you guide me on how to recognize copyright from videos?
what are the prerequisites?
Sir which python version you are using
what methode is used in this project?
Thanks brother
How to use classificationModule in this project
I went to your GitHub but could not find the code for this video.
Thank you Boss
I have run this code but it's not working: the window opens and closes automatically after 5-6 seconds.
When I try to train the model in Teachable Machine, the page always becomes unresponsive. Please help me.
Can you make it translate this? 🙏✌️👌💁
Thanks Bruu
Hi, how to crop two hands image
I am using VS Code but it shows an error while importing; I might try PyCharm.
what is tools u use? im using visual studio code
pycharm
The off set is not working please help
It is relatively easy to do hand sign detection. How about hand language ?
why do I get so many errors in Mac m1 for the same simple code
How to solve this error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize'
Could you solve it?
@@pedrovasquezmontilla679 bro i solved it
@@pedrovasquezmontilla679
```
if imgCrop.size > 0:
    imgResize = cv2.resize(imgCrop, (wCal, imgSize))
else:
    continue
```
And if that doesn't work, try using your smartphone as a webcam, because that will input clearer frames.
@@MuhammadUsman-hb6zkyes it's worked .
I'm finding it difficult to save images... I press the "s" button over and over again but it's not saving.
how to make a website for it? how to deploy this ML model into a website?
I've cracked soft before twice as a kid. Never learned anything to make anything useful. Sad thing is a lot of my peers back then
If you're facing this issue: `from mediapipe.python._framework_bindings import resource_util ImportError: DLL load failed: The specified module could not be found.`
Solution: type `pip install msvc-runtime` in your terminal.
HOW TO CONVERT THAT TO AN ANDROID APPLICATION?
Congratulations. Hey, I am deaf. Brazil. Deaf.
I am getting a value error for shape
Hi)
AWESOME