Thank you for the tutorial, IJ. I am new to Android apps and ML. I applied your code to an ML binary classification model and have a problem with inference and getting results. I always get "Benign" when testing skin moles, never "Malignant". When I check the confidence array, I get something like [F@928d693, different every time. Below you will find the code involved. Can you help? Thank you.
// Runs model inference and gets result.
MolesAcc98.Outputs outputs = model.process(inputFeature0);
TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();
float[] confidences = outputFeature0.getFloatArray();
// Let's check what confidences gives us
Toast.makeText(getApplicationContext(), String.valueOf(confidences), Toast.LENGTH_LONG).show();
// Find the index of the class with the biggest confidence.
int maxPos = 0;
float maxConfidence = 0;
for (int i = 0; i < confidences.length; i++) {
    if (confidences[i] > maxConfidence) {
        maxConfidence = confidences[i];
        maxPos = i;
    }
}
String[] classes = {"Benign", "Malignant"};
result.setText(classes[maxPos]);
// Releases model resources if no longer used.
model.close();
} catch (IOException e) {
    // TODO Handle the exception
}
You're using String.valueOf on an array. To get an array as a string, you should use Arrays.toString(confidences). Additionally, since you're doing binary classification, check how many elements are in the confidences array. If it's just one element, that means if the confidence number is closer to 0 it's one class, and if it's closer to 1 it's the other class.
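A small sketch of the reply above, assuming the model ends in a single sigmoid output (the 0.5 cutoff and class order are assumptions; check how your model was trained):

```java
import java.util.Arrays;

public class BinaryLabel {
    // Interprets a confidences array from a binary classifier.
    // A one-element array is treated as a sigmoid output:
    // < 0.5 -> first class, >= 0.5 -> second class.
    // A two-element array is treated as per-class scores.
    static String label(float[] confidences, String[] classes) {
        if (confidences.length == 1) {
            return confidences[0] < 0.5f ? classes[0] : classes[1];
        }
        return confidences[0] >= confidences[1] ? classes[0] : classes[1];
    }

    public static void main(String[] args) {
        String[] classes = {"Benign", "Malignant"};
        float[] out = {0.93f};
        // Printing the array itself gives something like "[F@928d693";
        // Arrays.toString shows the actual values.
        System.out.println(Arrays.toString(out)); // [0.93]
        System.out.println(label(out, classes));  // Malignant
        System.out.println(label(new float[]{0.8f, 0.2f}, classes)); // Benign
    }
}
```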
@@IJApps Thank you IJApps. I corrected the Toast.makeText and saw that I get a number very close to zero, 4.156035E-39, which is always the same regardless of the mole I check. Also, the length of confidences is one (1).
@@GeorgeTrialonis One other thing I can think of is whether you are providing your model with the right inputs. I don't know what the Python code for your classification model looks like, but this is an important part to pay attention to: ruclips.net/video/yV9nrRIC_R0/видео.html You should decide whether you have to divide by 255 or by 1. Some other things to check are whether your model takes in RGB or grayscale images, etc.
@@IJApps Thank you for your assistance. Following your suggestions, I was inspired to use a pretrained model from TensorFlow Hub. This offered better performance, but when I converted it to .tflite and incorporated it into your Java code for Android deployment, I faced the same problems. However, after experimentation, the problem seems to have gone away after I changed the line maxPos = i to maxPos = 1. Here is the snippet:
for (int i = 0; i < confidences.length; i++) {
    if (confidences[i] > maxConfidence) {
        maxConfidence = confidences[i];
        maxPos = 1;
    }
}
The app seems to work now. I have tested it with pictures of benign and malignant moles from the internet. Of course, the predictions are not perfect (acc. = 83%). Thank you.
at com.example.myapplication.MainActivity.classifyImage(MainActivity.java:96)
at com.example.myapplication.MainActivity.onActivityResult(MainActivity.java:141)
After running the code on my device, the app is crashing, and I found the errors above using Logcat. Sir, please tell me how to fix these errors.
Working for months on your Java code, translating it to Kotlin and testing it on skin moles, I finally realized that my model was the true problem, so I had to redesign it and insert it back into Android Studio. Your code was OK. Thanks for the inspiration. You are great!
Same experience here.
This is the only YouTube video that explains the concept... Thank you
I had struggled for five hours, but your video helped me solve it in 20 minutes. Thanks bro
Thanks for the video. I was applying image classification with a different kind of model but was getting very low accuracy; then I realized that preprocessing was not done inside my network, so I replaced 1.f/1 with 1.f/255. Really grateful!!
A wonderful, clear and fully understandable tutorial. Plus for using Java!
Hi, your courses are helpful to me~~~
Can you please provide a real time object detection course?
I have created my own model using Jupyter Notebook, and it is saved in the .pb format. I have successfully converted the model to TFLite to connect it with my Flutter app. Additionally, I manually created a labels.txt file that corresponds to the labels in the TFLite model.
The issue is that when I run the model on the app with an image, it only shows the last label in the labels.txt file with a confidence of 99%, regardless of the image. Even if I change the images, the last label in the labels.txt file is always shown
The app only displays the first name in the classes list; it doesn't even detect objects
Best tutorial, I'm gonna use this for my capstone project.
My model has w=31, h=200, channels=1. The memory always runs out in the loop and I don't know why... I wonder if I calculated my buffer correctly
So your TensorBuffer should be an int array {1, 31, 200, 1}.
For ByteBuffer.allocateDirect, it should be x * 31 * 200 * 1, where x is the number of bytes in the data type your model uses. So, for example, if the input is float32, then x should be 4.
@@IJApps I have this line as a model: TensorBuffer.createFixedSize(new int[]{1, 31, 200, 1}, DataType.FLOAT32); I calculated it like this: ByteBuffer.allocateDirect(4 * 31 * 200 * 1);... Do I have to keep these 3 lines of byteBuffer.putFloat(....); for a black and white picture too?
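For a single-channel model like the one above, only one putFloat per pixel is needed, not three. A minimal plain-Java sketch, assuming ARGB pixel ints (as Bitmap.getPixels returns) and [0, 1] normalization — adjust the scaling and grayscale formula to whatever your model was trained on:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class GrayBuffer {
    // Builds the input buffer for a {1, height, width, 1} float32 model.
    // Each ARGB pixel contributes ONE float: its luminance, scaled to [0, 1].
    static ByteBuffer toGrayInput(int[] pixels, int width, int height) {
        // 4 bytes per float32, one value per pixel (single channel).
        ByteBuffer buffer = ByteBuffer.allocateDirect(4 * width * height * 1);
        buffer.order(ByteOrder.nativeOrder());
        for (int val : pixels) {
            int r = (val >> 16) & 0xFF;
            int g = (val >> 8) & 0xFF;
            int b = val & 0xFF;
            // Standard luminance weighting; a plain average also works
            // if that is how the model was trained.
            float gray = (0.299f * r + 0.587f * g + 0.114f * b) / 255.f;
            buffer.putFloat(gray);
        }
        buffer.rewind();
        return buffer;
    }

    public static void main(String[] args) {
        int[] pixels = new int[31 * 200]; // e.g. a 200x31 all-black image
        ByteBuffer buf = toGrayInput(pixels, 200, 31);
        System.out.println(buf.capacity()); // 24800 bytes = 4 * 31 * 200
    }
}
```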
I was wondering how you would deploy the app on Android. If you have a huge dataset, do you need Firebase?
Thank you, this is very helpful. I was wondering if I could upload my result to the cloud (let's say Firebase or Azure) and then later use the data in another activity inside Android Studio?
Hello, can you make a course on how to make a face verification model (face detection, alignment, recognition, and verification) using PyTorch / TensorFlow, and also deploy it in Android Studio using Java?
Hello, I want to ask what I should do if the model can't be imported? It says 'no package found'. In your video, it's at 9:38, line 76.
One more doubt... How can I convert this predicted label to audio? In Colab we can use the gtts library, but when we convert the model to tflite and build an Android app, how can we do that?
Can you please do some voice-related Android app videos?
hello dude, please help me
What if I use your code to classify more classes? Is there anything I need to change in the code?
I have a problem: my model import line of code in the MainActivity is not being highlighted and is causing me errors.
Hello, and thank you for this tutorial. Can I ask how to save the image even when changing fragments? I am building an image classification app for recyclable materials, and I created fragments with information about the image on a new fragment, but after changing the fragment, the image captured or chosen from the gallery disappears.
The Best explanation... Thank you sir...❤️❤️
If I want to work with a different model and a different model.tflite file, then what parameters do I have to change in the mobile application?
Whooa, really good exercise. Thanks for sharing your knowledge.
How do I make it display the confidence? I tried displaying confidence but it goes over 1 and sometimes even negative
Is TF Lite only for image recognition? How about prediction, like the user enters some integer input and it is able to predict based on the requested input?
Can you do this in jetpack compose?
sorry to ask. Can i know what did you use for cnn model? is it inception or mobile net?
Wonderful and clear explanation
Hi! Awesome tutorial! May I ask if you have a Kotlin implementation of this? I am kinda stuck on implementing the code provided by the model in Kotlin. Thank you for your time and have a great day!
Hi, which version of Android Studio is this taking place on?
Sir, thank you for your video. Can you update this code, since the onClick method is not working anymore?
Hi guys, I have set my output classes to 3 in my model and I have embedded the tflite file, but when I make a prediction I always get only two of the classes; I am not getting the third class. Please help me!
What if we take a picture of an unknown fruit? We would like it to say "unknown" instead of giving a wrong answer. How could we do that?
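One common (if imperfect) approach — an assumption on my part, not something from the video — is to reject the prediction when the top confidence is below a threshold. The class names and the 0.7 cutoff here are placeholders; this assumes the confidences are probabilities (e.g. the output of a softmax layer):

```java
public class UnknownGuard {
    // Returns the best class label, or "Unknown" when the model is not
    // confident enough in any class.
    static String classify(float[] confidences, String[] classes, float threshold) {
        int maxPos = 0;
        for (int i = 1; i < confidences.length; i++) {
            if (confidences[i] > confidences[maxPos]) maxPos = i;
        }
        return confidences[maxPos] >= threshold ? classes[maxPos] : "Unknown";
    }

    public static void main(String[] args) {
        String[] classes = {"Banana", "Orange", "Apple"};
        System.out.println(classify(new float[]{0.05f, 0.92f, 0.03f}, classes, 0.7f)); // Orange
        System.out.println(classify(new float[]{0.40f, 0.35f, 0.25f}, classes, 0.7f)); // Unknown
    }
}
```

A threshold only helps so much: a model trained on three fruits can still be confidently wrong on unrelated images, so adding an explicit "other" class to the training data is usually more robust.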
Hi, other than displaying the result, I want to display something else. How can I do that?
Thank you for the video. Can you please make a video on food detection and calculating nutrition value?
Hello, I have a problem. The app keeps crashing after I pick a picture, either from the gallery or the camera. Please teach me how to fix this. Thank you
did you figure out this issue? I'm getting the same issue
Can it be used for facial recognition?
Hi,
I am new to machine learning and Android application development, and first of all, thanks a lot for your tutorial. I have a problem: whenever I take a picture or choose a picture from the gallery, it always says banana! How can I solve that? Please respond
When i tried this i got errors:
Cannot resolve method 'newInstance' in 'Model'
Cannot resolve symbol 'Outputs'
Cannot resolve method 'process' in 'Model'
Cannot resolve method 'getOutputFeature0AsTensorBuffer()'
You need to make sure that "Model" matches your file name (whatever you named your tflite file)
It'll be helpful to rewatch the part in the video where we import the model to Android Studio. If you double click on your tflite file in Android Studio it even provides you with the code to use.
@@IJApps Yes sir, the model and tflite file have the same name... (model)
@@Athulyanklife package com.example.myapplication; import com.example.myapplication.ml.Model; — add these lines to your code
I don't know why, but I implemented the same model in Kotlin using Jetpack Compose. Everything is working fine, but the model is giving the wrong prediction of banana whatever the fruit is.
Hi, do you have a tutorial for the Kotlin language? Thanks
Which Android Studio version is used here?
Very good and helpful content. Thank you
Can you please show how to deploy a MobileNetV2 model in an Android app in Java?
Your videos are very helpful... Thank you :)
How can I add sound when displaying the result?
Good Videos Series IJAPPS. thanks a lot. Learning a lot on AI and Android coding
Hi, Thank you for your great and helpful tutorial but when I run the app and click on the gallery button, the gallery is empty. Any idea why I got this issue?
Are you running it on a virtual device? If so, yes the gallery will be empty unless you already have images there.
@@IJApps Thank you so much for your quick response, I will try it on a real device👍
Hello, my image classification app is not working as every time I try to load a photo from the gallery, it gives me the message "error getting selected files". I ran the debugger and I got errors saying "Source code does not match the bytecode". Do you know why this might be or what I could do to fix this?
This might help: stackoverflow.com/questions/39990752/source-code-does-not-match-the-bytecode-when-debugging-on-a-device
@@IJApps Thanks for the response. It turns out that the TensorFlow Lite model I am using is not working, as yours seems to work fine in the code. I converted the TensorFlow Lite model from the TensorFlow model I made in Jupyter Notebook, and I know you didn't make yours in Jupyter Notebook. Would this affect the functionality of the TF Lite model?
How can we use this prediction live?
hi~ Is it possible to detect even if I put a video instead of a picture?
java.lang.NullPointerException: Attempt to invoke virtual method 'android.content.Context android.content.Context.getApplicationContext()' on a null object reference. Could you please help me?
Which part of the code are you getting this error? I've also provided the link to my code in the video description
Dude, this is so cool, thanks for sharing the knowledge
I had 3 problems when using this code in my project
1. Camera streaming Quality was poor
2. Thumbnail Quality of image view was too bad
3. It does not use my default camera features
Please someone help me
When I use OpenCV to get the bitmap, it toasts the error "The size of byte buffer and the shape do not match." Do you know how to fix this?
You should make sure that the multiplication when allocating the byte buffer size is whatever input size you had when you were training your model in Python.
Please make a project video on processing brain MRI images to detect Alzheimer's.
Hello sir, how about label.txt ?
Which IDE did you use?
Hi, the app crashed and Logcat showed: "Caused by: java.lang.ArrayIndexOutOfBoundsException: length=4; index=4" at MainActivity.java:110. I have 13 classes inside my String[] classes. I changed it to 4 and it works. What do I need to do to make it work with all 13 classes? I'm not sure what to change, since it seems to me like it isn't specified anywhere that it needs to be 4 classes. Any help would be much appreciated.
It depends on the tflite model that you coded and are using. If your model has four outputs, the length should only go up to four. But if you coded it so that it has 13 outputs for the last layer, then you can use 13 instead.
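A defensive sketch of that idea: check that the model's output length matches the labels array before indexing, so a mismatch fails with a clear message instead of an ArrayIndexOutOfBoundsException (the names here are illustrative, not from the video's code):

```java
public class LabelCheck {
    // Maps the model output to a label, verifying the sizes agree first.
    static String bestLabel(float[] confidences, String[] classes) {
        if (confidences.length != classes.length) {
            throw new IllegalStateException(
                "Model has " + confidences.length + " outputs but "
                + classes.length + " labels; retrain or fix the classes array.");
        }
        int maxPos = 0;
        for (int i = 1; i < confidences.length; i++) {
            if (confidences[i] > confidences[maxPos]) maxPos = i;
        }
        return classes[maxPos];
    }

    public static void main(String[] args) {
        String[] classes = {"Banana", "Orange", "Apple"};
        System.out.println(bestLabel(new float[]{0.1f, 0.2f, 0.7f}, classes)); // Apple
    }
}
```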
thank you for your answer, it was a stupid mistake on my part. My model was trained for 13 classes, but your answer still helped me find the root of the problem :)
Thank you for your video! Is it possible to get confidence values here too? Because I keep getting weird values if I use the code from Teachable Machine with this tutorial :/ Thank you for your time :)
If I understand well, I have to apply a softmax to the output of "outputFeature0.getFloatArray();" to have a value instead of a vector? Is that correct? And how could I apply it in Android Studio with TF Lite? A bit of help would make my month!!
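If the model's last layer has no softmax, its raw outputs (logits) can be above 1 or even negative; applying a softmax in plain Java turns them into probabilities. A minimal sketch, assuming that is indeed what your model outputs — this is not code from the video:

```java
public class Softmax {
    // Converts raw logits into probabilities that sum to 1.
    // Subtracting the max first is a standard numerical-stability trick.
    static float[] softmax(float[] logits) {
        float max = Float.NEGATIVE_INFINITY;
        for (float v : logits) max = Math.max(max, v);
        float sum = 0f;
        float[] probs = new float[logits.length];
        for (int i = 0; i < logits.length; i++) {
            probs[i] = (float) Math.exp(logits[i] - max);
            sum += probs[i];
        }
        for (int i = 0; i < probs.length; i++) probs[i] /= sum;
        return probs;
    }

    public static void main(String[] args) {
        // e.g. float[] confidences = softmax(outputFeature0.getFloatArray());
        float[] probs = softmax(new float[]{2.0f, 1.0f, -3.0f});
        float sum = 0f;
        for (float p : probs) sum += p;
        System.out.println(sum); // ~1.0
    }
}
```

Softmax preserves the ordering of the scores, so the argmax loop from the tutorial still picks the same class; the difference is that the displayed confidence becomes a readable probability.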
Keep it up bro, got what I wanted to learn
Nice and Intuitive 🎉
What changes do we make if we work with a grayscale image?
At 9:37, line 79 would have to be 1, 32, 32, 1. Basically wherever you refer to there being 3 channels (for R, G, and B), you have to have just 1 channel because it's greyscale.
Hi,
I tried to run the code on my own and for some reason the app keeps on crashing after I either choose an image from the gallery and take a photo. Can you help me out? Thanks!
Hi. I provided the full code for the app in the video description. Maybe that can help.
Also, if you check Logcat in Android Studio while your Android device is connected to your laptop, you can see on what line the error is occurring and what the error is.
This can help you pinpoint why the error's happening. If you can't figure it out, paste the error that Logcat shows into a RUclips comment.
Hey man, got any idea how to implement this in kotlin language?
Hi, everything you explain was clear as day, but what if I have many outputs, unlike the one you wrote which has only one (look around 13:05)? BTW, this is a customized TFLite model and I didn't use Teachable Machine for training. The application doesn't seem to run when using the model I made; I think it's because I'm using a customized one? Here is what the model looks like after I imported it into my project.
try {
Tflite model = Tflite.newInstance(context);
// Creates inputs for reference.
TensorBuffer inputFeature0 = TensorBuffer.createFixedSize(new int[]{1, 320, 320, 3}, DataType.FLOAT32);
inputFeature0.loadBuffer(byteBuffer);
// Runs model inference and gets result.
Tflite.Outputs outputs = model.process(inputFeature0);
TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();
TensorBuffer outputFeature1 = outputs.getOutputFeature1AsTensorBuffer();
TensorBuffer outputFeature2 = outputs.getOutputFeature2AsTensorBuffer();
TensorBuffer outputFeature3 = outputs.getOutputFeature3AsTensorBuffer();
// Releases model resources if no longer used.
model.close();
} catch (IOException e) {
// TODO Handle the exception
}
Can you show what the float confidences would look like when there are a lot of outputs? Or make a solution video? I would be grateful, since I love your videos.
Hi. I have a tutorial on a custom TFLite Model on Android: ruclips.net/video/ba42uYJd8nc/видео.html
Let me know if this helps.
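For what it's worth, the four-output layout pasted in the comment above is typical of SSD-style detection exports (bounding boxes, class indices, scores, and detection count), though the order and meaning depend entirely on how the model was exported. Assuming two of those tensors hold class ids and scores, combining them might look like this plain-Java sketch (all names and shapes are assumptions):

```java
import java.util.ArrayList;
import java.util.List;

public class DetectionOutputs {
    // Formats SSD-style detection outputs: parallel arrays of class ids and
    // scores, as read via getFloatArray() from two of the output tensors.
    static List<String> detections(float[] classIds, float[] scores,
                                   String[] labels, float minScore) {
        List<String> results = new ArrayList<>();
        for (int i = 0; i < scores.length; i++) {
            if (scores[i] >= minScore) {
                results.add(labels[(int) classIds[i]] + " " + scores[i]);
            }
        }
        return results;
    }

    public static void main(String[] args) {
        String[] labels = {"apple", "banana", "orange"};
        // Pretend one output tensor held class ids and another held scores.
        float[] classIds = {1f, 0f, 2f};
        float[] scores = {0.91f, 0.40f, 0.85f};
        System.out.println(detections(classIds, scores, labels, 0.5f));
        // [banana 0.91, orange 0.85]
    }
}
```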
Hey. I followed the same steps given in the tutorial. But it is giving an error on "camera", in file "MainActivity.java" on the lines:
if (checkSelfPermission(Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED){
and
requestPermissions(new String[]{Manifest.permission.CAMERA}, 100);
I can't understand what the problem is. It says "cannot resolve the symbol CAMERA". Android Studio is offering a "rename reference" to "DYNAMIC_RECEIVER_NOT_EXPORTED_PERMISSION" instead of "CAMERA". What does that mean? Would that not access the camera feature of the phone? Is it important that I change the reference here, and would I then have to change it everywhere else too?
Also, my model.tflite takes the input as: (new int[]{1, 3, 800, 800}, DataType.FLOAT32). It cannot be 3x800 pixels, right? Should it be 800x800? Considering yours is 32, this is way bigger. Is that why my model was not working in VSCode?
Please tell me, I have a project due this week.
I have a bug, bro. Can you explain to me please?
Bro, on my system there is no TensorFlow Lite under "Other"... I mean what you said in the video at 2:21. Please help us.
Try installing a newer version of Android Studio. Or going to the search bar for Android studio and typing it in.
I'm not sure what else you can do.
I have installed new versions, but while running the program, at the top right there is a dropdown box showing no device. What should I do?
Also, I cannot import the ML model after converting it.
Either the app does not run or this happens.
Why is my model name red, and why can't I use the model?
Make sure you are importing the tflite model correctly: ruclips.net/video/yV9nrRIC_R0/видео.htmlsi=yhucLlRzbu3e8sh0&t=139
Amazing work AGAIN
Sir, the code is giving an error:
..."cannot resolve symbol Model"???
This is an important step you can look at again: ruclips.net/video/yV9nrRIC_R0/видео.html
You should use whatever you called your file. I called my file "model.tflite" so in my code it's "Model".
The full code is available here: github.com/IJ-Apps/Image-Classification-App-with-Custom-TensorFlow-Model
Sir, I ran your code and it runs properly, but there are no constraints: if I take a picture of random things, the model still gives the output banana, apple, or orange,
and it does not predict a half piece of orange.
@@zee4654 You can try playing around with the Python code for training the model to make it more accurate.
The model was trained to classify an image into 3 classes: banana, orange, and apple. Therefore it will always give one of those results, even if a random image is shown.
@@IJApps Ok, thank you sir.
You have to change it to your own package name.
I am having trouble with the code. Is there anyone who can help me with it?
I'm struggling. How do I display the camera and gallery features?
Hi, the complete code for the app is found here: github.com/IJ-Apps/Image-Classification-App-with-Custom-TensorFlow-Model
I also have 2 tutorials on getting images from the camera and gallery:
- Camera: ruclips.net/video/7Qwur4xKh-c/видео.html
- Gallery: ruclips.net/video/H1ja8gvTtBE/видео.html
Hellooo Imel!
Thank you~
Thank you, you're awesome
Thank you for the video. You deserve a subscribe.
Can anyone send the APK file of the app?
Thank you so much!
Helpful
Cool 👍
love your video
Love from 💯🙌❤
again thanks
12:20
Thanks for the tutorial! I have a small problem, however: I've tried running the code with my own model (I followed the first tutorial to create it), and it gives me the same result no matter what picture I use. This is the code:
Best model = Best.newInstance(getApplicationContext());

// Creates inputs for reference.
TensorBuffer inputFeature0 = TensorBuffer.createFixedSize(new int[]{1, 32, 32, 3}, DataType.FLOAT32);

ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * imageSize * imageSize * 3);
byteBuffer.order(ByteOrder.nativeOrder());

int[] intValues = new int[imageSize * imageSize];
image.getPixels(intValues, 0, image.getWidth(), 0, 0, image.getWidth(), image.getHeight());

int pixel = 0;
// Iterate over each pixel and extract R, G, and B values. Add those values individually to the byte buffer.
for (int i = 0; i < imageSize; i++) {
    for (int j = 0; j < imageSize; j++) {
        int val = intValues[pixel++]; // RGB
        byteBuffer.putFloat(((val >> 16) & 0xFF) * (1.f / 1));
        byteBuffer.putFloat(((val >> 8) & 0xFF) * (1.f / 1));
        byteBuffer.putFloat((val & 0xFF) * (1.f / 1));
    }
}

inputFeature0.loadBuffer(byteBuffer);

// Runs model inference and gets result.
Best.Outputs outputs = model.process(inputFeature0);
TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();
float[] confidences = outputFeature0.getFloatArray();

// Find the index of the class with the biggest confidence.
int maxPos = 0;
float maxConfidence = 0;
for (int i = 0; i < confidences.length; i++) {
    if (confidences[i] > maxConfidence) {
        maxConfidence = confidences[i];
        maxPos = i;
    }
}

String[] classes = {"0", "1", "2", "3", "4", "5", "6", "7", "8", "9"};
result.setText(classes[maxPos]);
Log.d("Main", String.valueOf(maxPos));

// Releases model resources if no longer used.
model.close();
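One thing worth checking in the snippet above: if the model was trained on pixel values scaled to [0, 1] (the training notebook typically divides by 255), the `1.f / 1` factor feeds it raw values in [0, 255], which can make the output collapse to one class. Assuming that training-time scaling, the pixel loop would look like this sketch (the `Preprocess` class name is made up for illustration):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class Preprocess {
    // Interleave R, G, B per pixel, scaled to [0, 1]. This assumes the
    // model was trained on images divided by 255; if it was trained on
    // raw [0, 255] values, keep the 1.f / 1 factor instead.
    public static ByteBuffer toFloatBuffer(int[] intValues, int imageSize) {
        ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * imageSize * imageSize * 3);
        byteBuffer.order(ByteOrder.nativeOrder());
        for (int val : intValues) {
            byteBuffer.putFloat(((val >> 16) & 0xFF) * (1.f / 255)); // R
            byteBuffer.putFloat(((val >> 8) & 0xFF) * (1.f / 255));  // G
            byteBuffer.putFloat((val & 0xFF) * (1.f / 255));         // B
        }
        return byteBuffer;
    }
}
```

Whichever factor you use, it has to match what the Python training code did, or the network sees inputs from a distribution it never saw during training.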
Thank you for the tutorial, IJ. I am new to Android apps and ML. I applied your code to an ML binary classification model and have a problem with inference and getting results. I always get "Benign" when testing skin moles, never "Malignant". When I check the confidence array, I get something like [F@928d693, different every time. Below you will find the code involved. Can you help? Thank you.
// Runs model inference and gets result.
MolesAcc98.Outputs outputs = model.process(inputFeature0);
TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();
float[] confidences = outputFeature0.getFloatArray();

// Let's check what confidences gives us.
Toast.makeText(getApplicationContext(), String.valueOf(confidences), Toast.LENGTH_LONG).show();

// Find the index of the class with the biggest confidence.
int maxPos = 0;
float maxConfidence = 0;
for (int i = 0; i < confidences.length; i++) {
    if (confidences[i] > maxConfidence) {
        maxConfidence = confidences[i];
        maxPos = i;
    }
}

String[] classes = {"Benign", "Malignant"};
result.setText(classes[maxPos]);

// Releases model resources if no longer used.
model.close();
} catch (IOException e) {
    // TODO Handle the exception
}
You're using String.valueOf on an array. To get an array as a string, you should use Arrays.toString(confidences).
Additionally, since you're doing binary classification, check how many elements are in the confidences array. If it's just one element, that means if the confidence number is closer to 0 it's one class, and if it's closer to 1 it's the other class.
@@IJApps Thank you, IJApps. I corrected the Toast.makeText and saw that I get a number very close to zero, 4.156035E-39, which is always the same regardless of the mole I check. Also, the length of confidences is one (1).
@@GeorgeTrialonis One other thing I can think of is whether you are providing your model with the right inputs. I don't know what the Python code for your classification model looks like, but this is an important part to pay attention to: ruclips.net/video/yV9nrRIC_R0/видео.html
You should decide whether you have to divide by 255 or 1. Some other things to check are whether your model takes in RGB or grayscale images, etc.
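For the grayscale case mentioned above: assuming a single-channel float input like {1, H, W, 1}, each pixel becomes one float instead of three, so the three putFloat calls collapse into one. A sketch, where averaging R, G, B is one common (but not the only) way to get a gray value, and the division by 255 again depends on how the model was trained:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class GrayscalePreprocess {
    // One float per pixel: average the R, G, B channels and scale to [0, 1].
    // Whether to divide by 255 depends on the training-time preprocessing.
    public static ByteBuffer toGrayscaleBuffer(int[] intValues) {
        ByteBuffer buf = ByteBuffer.allocateDirect(4 * intValues.length);
        buf.order(ByteOrder.nativeOrder());
        for (int val : intValues) {
            int r = (val >> 16) & 0xFF;
            int g = (val >> 8) & 0xFF;
            int b = val & 0xFF;
            buf.putFloat(((r + g + b) / 3.f) * (1.f / 255));
        }
        return buf;
    }
}
```

The buffer size also shrinks accordingly: 4 * H * W * 1 bytes for a float32 single-channel input, not 4 * H * W * 3.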
@@IJApps Thank you for your assistance. Following your suggestions, I was inspired to use a pretrained model from TensorFlow Hub. This offered better performance, but when I converted it to .tflite and incorporated it into your Java code for Android deployment, I was faced with the same problems. However, after some experimentation, the problem seems to have gone away after I changed the line maxPos = i to maxPos = 1. Here is the snippet:
for (int i = 0; i < confidences.length; i++) {
    if (confidences[i] > maxConfidence) {
        maxConfidence = confidences[i];
        maxPos = 1;
    }
}
The app seems to work now. I have tested it with pictures of benign and malignant moles from the internet. Of course, the predictions are not perfect (acc. = 83%). Thank you.
@@GeorgeTrialonis I am glad you found a solution!
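For the single-output case discussed in this thread: assuming a sigmoid output where values near 0 mean one class ("Benign") and values near 1 mean the other ("Malignant"), the decision does not need an argmax loop at all. A minimal sketch, where the 0.5 cutoff is the usual default rather than anything from the video:

```java
public class BinaryDecision {
    // Map one sigmoid output to a label: values below the 0.5 cutoff
    // mean the "negative" class, values at or above it the "positive" one.
    public static String label(float confidence, String negative, String positive) {
        return confidence >= 0.5f ? positive : negative;
    }
}
```

Note that with a one-element confidences array, the original max-finding loop always yields index 0, which is why such an app would report only the first label.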
at com.example.myapplication.MainActivity.classifyImage(MainActivity.java:96)
at com.example.myapplication.MainActivity.onActivityResult(MainActivity.java:141)
After running the code on my device, the app is crashing, and I found the above errors using Logcat. Sir, please tell me how to fix these errors.
Line 96: Model.Outputs outputs = model.process(inputFeature0);
Line 141: classifyImage(image);
@@skyearth3557 What is the error you are getting?
Did you name your file "model.tflite" or is it called something else?
Hi,
I tried to run the code on my own, and for some reason the app keeps crashing after I either choose an image from the gallery or take a photo. Can you help me out? Thanks
same issue
UP !