Image Classification App | Deploy TensorFlow model on Android | #2

  • Published: 29 Nov 2024

Comments • 137

  • @GeorgeTrialonis
    @GeorgeTrialonis 1 year ago +5

    After working for months on your Java code, translating it to Kotlin, and testing it on skin moles, I finally realized that my model was the true problem, so I had to redesign it and import it back into Android Studio. Your code was OK. Thanks for the inspiration. You are great!

    • @c.p.1090
      @c.p.1090 1 year ago +1

      Same experience here.

  • @moshimoshi_04
    @moshimoshi_04 1 year ago +7

    This is the only YouTube video that explains the concept... Thank you

  • @zeroth7982
    @zeroth7982 2 years ago +3

    I had struggled for five hours, but your video helped me solve it in 20 minutes. Thanks bro

  • @parikshitsinghrathore6130
    @parikshitsinghrathore6130 2 years ago +3

    Thanks for the video. I was doing image classification with a different kind of model but was getting very low accuracy; then I realized that preprocessing was not done inside my network, so I replaced 1.f/1 with 1.f/255. Really grateful!!
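    For reference, a minimal sketch of the preprocessing change described above (scaling each channel from [0, 255] down to [0, 1]); byteBuffer, intValues, and imageSize are assumed to be set up as in the pixel loop shown elsewhere in this thread:

        int pixel = 0;
        for (int i = 0; i < imageSize; i++) {
            for (int j = 0; j < imageSize; j++) {
                int val = intValues[pixel++];  // packed ARGB pixel from Bitmap.getPixels
                // Divide by 255 so each channel lands in [0, 1], matching a model trained
                // on normalized inputs (use 1.f / 1 only if training skipped normalization).
                byteBuffer.putFloat(((val >> 16) & 0xFF) * (1.f / 255.f));  // R
                byteBuffer.putFloat(((val >> 8) & 0xFF) * (1.f / 255.f));   // G
                byteBuffer.putFloat((val & 0xFF) * (1.f / 255.f));          // B
            }
        }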

  • @KonstantinosKalogirou
    @KonstantinosKalogirou 1 year ago +4

    A wonderful, clear and fully understandable tutorial. Plus for using Java!

  • @丁泓元
    @丁泓元 2 years ago +8

    Hi, your courses are helpful to me~~~
    Can you please provide a real-time object detection course?

  • @normanlangtonzipingani8387
    @normanlangtonzipingani8387 6 months ago +1

    I have created my own model using Jupyter Notebook, and it is saved in the .pb format. I have successfully converted the model to TFLite to connect it with my Flutter app. Additionally, I manually created a labels.txt file that corresponds to the labels in the TFLite model.
    The issue is that when I run the model on the app with an image, it only shows the last label in the labels.txt file with a confidence of 99%, regardless of the image. Even if I change the images, the last label in labels.txt is always shown.

  • @dokter5179
    @dokter5179 1 year ago +1

    The app only displays the first name in the class list; it doesn't even detect objects.

  • @espinorobertcharlc.6663
    @espinorobertcharlc.6663 10 months ago

    Best tutorial. I'm gonna use this for my capstone project.

  • @Liev04
    @Liev04 2 years ago +2

    My model has w=31, h=200, channels=1. The memory always runs out in the loop and I don't know why... I wonder if I calculated my buffer correctly.

    • @IJApps
      @IJApps  2 years ago

      So your TensorBuffer shape should be the int array 1, 31, 200, 1.
      For ByteBuffer.allocateDirect, it should be x * 31 * 200 * 1, where x is the number of bytes in the data type your model uses. So, for example, if the input is float32, x should be 4.
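      For reference, a minimal sketch of the sizing described above, assuming a float32 model with input shape [1, 31, 200, 1] (TensorBuffer and DataType come from the TensorFlow Lite support library):

          int height = 31, width = 200, channels = 1;
          int bytesPerValue = 4;  // float32 = 4 bytes per value

          // The shape must match the model's input tensor exactly.
          TensorBuffer inputFeature0 =
                  TensorBuffer.createFixedSize(new int[]{1, height, width, channels}, DataType.FLOAT32);

          // Buffer size = bytes per value * number of values in one input.
          ByteBuffer byteBuffer = ByteBuffer.allocateDirect(bytesPerValue * height * width * channels);
          byteBuffer.order(ByteOrder.nativeOrder());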

    • @Liev04
      @Liev04 2 years ago

      @@IJApps I have this line for the model: TensorBuffer.createFixedSize(new int[]{1, 31, 200, 1}, DataType.FLOAT32); and I calculated it like this: ByteBuffer.allocateDirect(4 * 31 * 200 * 1);... Do I have to keep the 3 byteBuffer.putFloat(....); lines for a black-and-white picture too?

  • @yusefpersona82
    @yusefpersona82 1 year ago +1

    I was wondering how you would deploy the app on Android. If you have a huge dataset, do you need Firebase?

  • @firdysani
    @firdysani 2 years ago +1

    Thank you, this is very helpful. I was wondering if I could upload my result to the cloud (let's say Firebase or Azure) and then later use the data in another activity inside Android Studio?

  • @Replcate
    @Replcate 1 year ago

    Hello, can you make a course on how to build a face verification model (face detection, alignment, recognition, and verification) using PyTorch / TensorFlow, and also on deploying it in Android Studio using Java?

  • @kayoi9474
    @kayoi9474 1 year ago

    Hello, I want to ask: what should I do if the model can't be imported? It says 'no package found'. In your video, it's at 9:38, line 76.

  • @Athulyanklife
    @Athulyanklife 2 years ago +1

    One more doubt: how can I convert this predicted label to audio? In Colab we can use the gTTS library, but when we convert that to TFLite and build an Android app, how can we do that?
    Can you please do some voice-related Android app videos?
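    One way to speak the predicted label on-device (not covered in the video; this sketch uses Android's built-in android.speech.tts.TextToSpeech instead of gTTS, and assumes classes and maxPos from the tutorial's classification code):

        private TextToSpeech tts;  // initialize once, e.g. in onCreate()

        tts = new TextToSpeech(this, status -> {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(Locale.US);
            }
        });

        // After classification, speak the predicted class label:
        tts.speak(classes[maxPos], TextToSpeech.QUEUE_FLUSH, null, "prediction");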

  • @YUFRANSADIMAKITA
    @YUFRANSADIMAKITA 6 months ago

    Hello dude, please help me.
    What if I use your code to classify more classes? Is there anything I need to change in the code???

  • @limbikanimwanza5312
    @limbikanimwanza5312 1 year ago

    I have a problem: the model import line in my MainActivity is not being resolved and is causing me errors.

  • @adrianlim9186
    @adrianlim9186 1 year ago +1

    Hello, and thank you for this tutorial. Can I ask how to keep the image when changing fragments? I am building an image classification app for recyclable materials, and I created a new fragment with information about the image, but after changing fragments, the image captured or picked from the gallery disappears.

  • @AnantaAkash.Podder
    @AnantaAkash.Podder 7 months ago

    The Best explanation... Thank you sir...❤️❤️

  • @arafatyt5
    @arafatyt5 10 months ago

    If I want to work with a different model and a different model.tflite file, what parameters do I have to change in the mobile application?

  • @adolfocarrillo248
    @adolfocarrillo248 1 year ago

    Whoa, really good exercise. Thanks for sharing your knowledge.

  • @michaelreal5308
    @michaelreal5308 2 years ago +1

    How do I make it display the confidence? I tried displaying the confidence, but it goes over 1 and is sometimes even negative.

  • @zeref3066
    @zeref3066 2 years ago

    Is TF Lite only for image recognition? What about predictions where the user enters some integer input and the model predicts based on that input?

  • @AmbitionPlayers
    @AmbitionPlayers 13 days ago

    Can you do this in Jetpack Compose?

  • @herriz68
    @herriz68 1 year ago

    Sorry to ask, but can I know what you used for the CNN model? Is it Inception or MobileNet?

  • @sudhakarm4573
    @sudhakarm4573 1 year ago

    Wonderful and clear explanation.

  • @KasanggaVlogs
    @KasanggaVlogs 1 year ago

    Hi! Awesome tutorial! May I ask if you have a Kotlin implementation of this? I am kind of stuck implementing the code provided by the model in Kotlin. Thank you for your time and have a great day!

  • @talfraizler2497
    @talfraizler2497 1 year ago

    Hi, which version of Android Studio is this done in?

  • @krizantem277
    @krizantem277 2 years ago

    Sir, thank you for your video. Can you update this code, since the onClick method is not working anymore?

  • @user-ft8ur3yk1b
    @user-ft8ur3yk1b 1 year ago

    Hi guys, I have set my output classes to 3 in my model and I have embedded the tflite file, but when I make a prediction I always get only two of the classes; I never get the third class. Please help me.

  • @Giowzero
    @Giowzero 1 year ago

    What if we take a picture of an unknown fruit? We would like it to say "unknown" instead of giving a wrong answer. How could we do that?

  • @danishahhana
    @danishahhana 1 year ago

    Hi, other than displaying the result, I want to display something else. How can I do that?

  • @marsham7796
    @marsham7796 2 years ago

    Thank you for the video. Can you please make a video on food detection and calculating nutritional value?

  • @johndavidojascastro6516
    @johndavidojascastro6516 1 year ago

    Hello, I have a problem. The app keeps crashing after I pick a picture using either the gallery or the camera. Please teach me how to fix this. Thank you.

    • @takomensei3710
      @takomensei3710 8 months ago

      Did you figure out this issue? I'm getting the same one.

  • @RCSA-MohammadNazmibinRosli
    @RCSA-MohammadNazmibinRosli 1 year ago

    Can it be used for facial recognition?

  • @doruktunc1750
    @doruktunc1750 2 years ago

    I am new to machine learning and Android applications. First of all, thanks a lot for your tutorial. I have a problem: whenever I take a picture or choose a picture from the gallery, it always says banana! How can I solve that? Please respond.

  • @Athulyanklife
    @Athulyanklife 2 years ago +1

    When I tried this I got these errors:
    Cannot resolve method 'newInstance' in 'Model'
    Cannot resolve symbol 'Outputs'
    Cannot resolve method 'process' in 'Model'
    Cannot resolve method 'getOutputFeature0AsTensorBuffer()'

    • @IJApps
      @IJApps  2 years ago +2

      You need to make sure that "Model" matches your file name (whatever you named your tflite file).
      It'll be helpful to rewatch the part in the video where we import the model into Android Studio. If you double-click on your tflite file in Android Studio, it even provides you with the code to use.
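      For reference, the generated-binding pattern (as it also appears in other comments below), assuming the file is named model.tflite so the generated class is Model under <your package>.ml:

          import com.example.myapplication.ml.Model;  // adjust to your own package name

          try {
              Model model = Model.newInstance(getApplicationContext());
              Model.Outputs outputs = model.process(inputFeature0);
              TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();
              model.close();
          } catch (IOException e) {
              // handle the exception
          }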

    • @Athulyanklife
      @Athulyanklife 2 years ago

      @@IJApps Yes sir, the model class and the tflite file have the same name (model).

    • @skyearth3557
      @skyearth3557 2 years ago

      @@Athulyanklife Add these lines to your code: package com.example.myapplication; import com.example.myapplication.ml.Model;

  • @yatingarg1509
    @yatingarg1509 1 year ago

    I don't know why, but I implemented the same model in Kotlin using Jetpack Compose. Everything is working fine, but the model gives a wrong prediction of banana no matter what the fruit is...

  • @mecanerdika5536
    @mecanerdika5536 2 years ago

    Hi, do you have a tutorial for the Kotlin language? Thanks.

  • @zeroxia3642
    @zeroxia3642 8 months ago

    Which Android Studio version is used here?

  • @anthonymburu5392
    @anthonymburu5392 2 years ago +1

    Very good and helpful content. Thank you

  • @mohana4179
    @mohana4179 2 years ago

    Can you please show how to deploy a MobileNetV2 model in an Android app in Java?

  • @holywords3339
    @holywords3339 3 months ago

    Your videos are very helpful... Thank you :)

  • @sajaata2006
    @sajaata2006 1 year ago

    How can I add sound when displaying the result??

  • @parthkumar19
    @parthkumar19 2 years ago

    Good video series, IJApps. Thanks a lot. Learning a lot about AI and Android coding.

  • @z.muhsin1911
    @z.muhsin1911 2 years ago

    Hi, thank you for your great and helpful tutorial, but when I run the app and click on the gallery button, the gallery is empty. Any idea why I get this issue?

    • @IJApps
      @IJApps  2 years ago

      Are you running it on a virtual device? If so, yes the gallery will be empty unless you already have images there.

    • @z.muhsin1911
      @z.muhsin1911 2 years ago

      @@IJApps Thank you so much for your quick response, I will try it on a real device👍

  • @teekadadi403
    @teekadadi403 1 year ago

    Hello, my image classification app is not working: every time I try to load a photo from the gallery, it gives me the message "error getting selected files". I ran the debugger and got errors saying "Source code does not match the bytecode". Do you know why this might be, or what I could do to fix it?

    • @IJApps
      @IJApps  1 year ago

      This might help: stackoverflow.com/questions/39990752/source-code-does-not-match-the-bytecode-when-debugging-on-a-device

    • @teekadadi403
      @teekadadi403 1 year ago

      @@IJApps Thanks for the response. It turns out that the TensorFlow Lite model I am using is not working, as yours seems to work fine in the code. I converted the TensorFlow Lite model from the TensorFlow model I made in Jupyter Notebook, and I know you didn't make yours in Jupyter Notebook. Would this affect the functionality of the TF Lite model?

  • @karthickk10398
    @karthickk10398 1 year ago

    How can we use this prediction live (in real time)?

  • @spectrum8200
    @spectrum8200 1 year ago

    Hi~ Is it possible to detect even if I use a video instead of a picture?

  • @totonyolandaputranosatoton3833
    @totonyolandaputranosatoton3833 2 years ago

    java.lang.NullPointerException: Attempt to invoke virtual method 'android.content.Context android.content.Context.getApplicationContext()' on a null object reference. Could you please help me?

    • @IJApps
      @IJApps  2 years ago

      In which part of the code are you getting this error? I've also provided the link to my code in the video description.

  • @KakashiSensei-pu1nc
    @KakashiSensei-pu1nc 2 years ago +1

    Dude, this is so cool, thanks for sharing the knowledge

  • @alostsoul9594
    @alostsoul9594 1 year ago

    I had 3 problems when using this code in my project:
    1. Camera streaming quality was poor
    2. Thumbnail quality in the image view was too bad
    3. It does not use my default camera features
    Please, someone help me.

  • @quangninh8904
    @quangninh8904 2 years ago

    When I use OpenCV to get the bitmap, it toasts the error "The size of byte buffer and the shape do not match." Do you know how to fix this?

    • @IJApps
      @IJApps  2 years ago

      You should make sure that the multiplication when allocating the byte buffer size matches whatever input size you used when training your model in Python.
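      A minimal sketch of that check, assuming a float32 model trained on width x height RGB images (replace the numbers with your own training input size):

          int width = 32, height = 32, channels = 3;  // must match the Python training input
          ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * width * height * channels);
          byteBuffer.order(ByteOrder.nativeOrder());
          // If this size differs from 4 * (product of the model's input shape),
          // TFLite reports "The size of byte buffer and the shape do not match."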

  • @meg33333
    @meg33333 2 years ago

    Please make a project video on processing brain MRI images to detect Alzheimer's.

  • @imambilqisthi5928
    @imambilqisthi5928 1 year ago

    Hello sir, what about the labels.txt file?

  • @farhanaaullyjane4782
    @farhanaaullyjane4782 1 year ago

    Which IDE did you use?

  • @happyhippo757
    @happyhippo757 2 years ago

    Hi, the app crashed and Logcat showed: "caused by: java.lang.ArrayIndexOutOfBoundsException: length=4; index=4" at MainActivity.java:110. I have 13 classes inside my String[] classes. I changed it to 4 and it works. What do I need to do to make it work with all 13 classes? I'm not sure what to change, since it seems to me like it isn't specified anywhere that it needs to be 4 classes. Any help would be much appreciated.

    • @IJApps
      @IJApps  2 years ago

      It depends on the tflite model that you coded and are using. If your model has four outputs, the length should only go up to four. But if you coded it so that it has 13 outputs for the last layer, then you can use 13 instead.
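      A small sketch of that idea, assuming confidences comes from outputFeature0.getFloatArray() as in the tutorial and classes is your own label list:

          float[] confidences = outputFeature0.getFloatArray();
          String[] classes = { /* one label per model output, e.g. 13 entries */ };
          // Fail with a clear message instead of ArrayIndexOutOfBoundsException
          // when the label list and the model's output length disagree.
          if (classes.length != confidences.length) {
              throw new IllegalStateException("Model outputs " + confidences.length
                      + " values but " + classes.length + " labels were provided.");
          }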

    • @happyhippo757
      @happyhippo757 2 years ago +1

      Thank you for your answer; it was a stupid mistake on my part. My model was trained for 13 classes, but your answer still helped me find the root of the problem :)

  • @123me456meme
    @123me456meme 2 years ago

    Thank you for your video! Is it possible to get confidence values here too? I keep getting weird values if I use the code from Teachable Machine with this tutorial :/ Thank you for your time :)

    • @123me456meme
      @123me456meme 2 years ago

      If I understand correctly, I have to apply a softmax to the output "outputFeature0.getFloatArray();" to get a value instead of a vector? Is that correct? And how could I apply it in Android Studio with TF Lite? A bit of help would make my month!!
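      For reference, softmax turns the raw output vector into probabilities (it stays a vector; the largest entry is the confidence of the predicted class). A minimal sketch in plain Java, assuming the model's last layer outputs unnormalized logits:

          static float[] softmax(float[] logits) {
              float max = Float.NEGATIVE_INFINITY;
              for (float l : logits) max = Math.max(max, l);  // subtract the max for numerical stability
              float sum = 0f;
              float[] probs = new float[logits.length];
              for (int i = 0; i < logits.length; i++) {
                  probs[i] = (float) Math.exp(logits[i] - max);
                  sum += probs[i];
              }
              for (int i = 0; i < probs.length; i++) probs[i] /= sum;
              return probs;
          }

          // usage: float[] probs = softmax(outputFeature0.getFloatArray());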

  • @jyotichetry7653
    @jyotichetry7653 2 years ago

    Keep it up bro, got what I wanted to learn.

  • @umeshanweerasekara8241
    @umeshanweerasekara8241 7 months ago

    Nice and Intuitive 🎉

  • @medamine985
    @medamine985 2 years ago

    What changes do we need to make if we work with a grayscale image???

    • @IJApps
      @IJApps  2 years ago

      At 9:37, line 79 would have to be 1, 32, 32, 1. Basically wherever you refer to there being 3 channels (for R, G, and B), you have to have just 1 channel because it's greyscale.
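      A minimal sketch of the single-channel version, assuming a float32 model with input shape [1, 32, 32, 1]; the RGB-to-gray conversion here is a plain average, so match whatever conversion and scaling you actually used in training:

          TensorBuffer inputFeature0 =
                  TensorBuffer.createFixedSize(new int[]{1, 32, 32, 1}, DataType.FLOAT32);
          ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * 32 * 32 * 1);
          byteBuffer.order(ByteOrder.nativeOrder());
          for (int p = 0; p < 32 * 32; p++) {
              int val = intValues[p];  // packed ARGB pixel
              float r = (val >> 16) & 0xFF, g = (val >> 8) & 0xFF, b = val & 0xFF;
              // one float per pixel instead of three
              byteBuffer.putFloat(((r + g + b) / 3f) * (1.f / 255.f));
          }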

  • @gokulkrishnan7984
    @gokulkrishnan7984 2 years ago

    Hi,
    I tried to run the code on my own, and for some reason the app keeps crashing after I either choose an image from the gallery or take a photo. Can you help me out? Thanks!

    • @IJApps
      @IJApps  2 years ago

      Hi. I provided the full code for the app in the video description. Maybe that can help.
      Also, if you check Logcat in Android Studio while your Android device is connected to your laptop, you can see on what line the error is occurring and what the error is.
      This can help you pinpoint why the error's happening. If you can't figure it out, paste the error that Logcat shows into a YouTube comment.

  • @dani7662
    @dani7662 2 years ago

    Hey man, got any idea how to implement this in Kotlin?

  • @healersage2281
    @healersage2281 2 years ago

    Hi, everything you explained was clear as day, but what if I have many outputs, unlike the one you wrote which has only one (look around 13:05)? BTW, this is a customized TFLite model and I didn't use Teachable Machine for training. The application doesn't seem to run when using the model I made; I think it's because I'm using a customized one? Here is what the model looks like after I imported it into my project.
    try {
        Tflite model = Tflite.newInstance(context);
        // Creates inputs for reference.
        TensorBuffer inputFeature0 = TensorBuffer.createFixedSize(new int[]{1, 320, 320, 3}, DataType.FLOAT32);
        inputFeature0.loadBuffer(byteBuffer);
        // Runs model inference and gets result.
        Tflite.Outputs outputs = model.process(inputFeature0);
        TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();
        TensorBuffer outputFeature1 = outputs.getOutputFeature1AsTensorBuffer();
        TensorBuffer outputFeature2 = outputs.getOutputFeature2AsTensorBuffer();
        TensorBuffer outputFeature3 = outputs.getOutputFeature3AsTensorBuffer();
        // Releases model resources if no longer used.
        model.close();
    } catch (IOException e) {
        // TODO Handle the exception
    }
    Can you show what the float confidences would look like when there are a lot of outputs? Or make a solution video? I would be grateful, since I love your videos.

    • @IJApps
      @IJApps  2 years ago

      Hi. I have a tutorial on a custom TFLite Model on Android: ruclips.net/video/ba42uYJd8nc/видео.html
      Let me know if this helps.

  • @mahrukhhafeez7398
    @mahrukhhafeez7398 9 months ago

    Hey. I followed the same steps given in the tutorial, but it is giving an error on "camera" in "MainActivity.java" on the lines:
    if (checkSelfPermission(Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED){
    and
    requestPermissions(new String[]{Manifest.permission.CAMERA}, 100);
    I can't understand what the problem is. It says "cannot resolve the symbol CAMERA". Android Studio is offering to "rename reference" to "DYNAMIC_RECEIVER_NOT_EXPORTED_PERMISSION" instead of "CAMERA". What does that mean? That it would not access the camera feature of the phone? Is it important that I change the reference here, and would I then have to change it everywhere else too?
    Also, my model.tflite takes the input as: (new int[]{1, 3, 800, 800}, DataType.FLOAT32). It can't be 3x800 pixels, right? Should it be 800x800? Considering yours is 32, this is way too much bigger. Is that why my model was not working in VSCode?
    Please tell me, I have a project due this week.

  • @omenbarker2876
    @omenbarker2876 8 months ago

    I have a bug, bro. Can you explain it to me please?

  • @divyasekar1156
    @divyasekar1156 2 years ago

    Bro, in my system there is no TensorFlow Lite under "Other"... I mean what you showed in the video at 2:21. Please help us.

    • @IJApps
      @IJApps  2 years ago

      Try installing a newer version of Android Studio, or going to the Android Studio search bar and typing it in.
      I'm not sure what else you can do.

    • @divyasekar1156
      @divyasekar1156 2 years ago

      I have installed the new version, but while running the program, at the top right there is a device dropdown with no device. What should I do?

  • @alostsoul9594
    @alostsoul9594 1 year ago

    Also, I cannot import the ML model once it is converted.

    • @alostsoul9594
      @alostsoul9594 1 year ago

      Either the app does not run, or this happens.

  • @albertaldemita2604
    @albertaldemita2604 1 year ago

    Why is my model name red, and why can't I use the model?

    • @IJApps
      @IJApps  1 year ago

      Make sure you are importing the tflite model correctly: ruclips.net/video/yV9nrRIC_R0/видео.htmlsi=yhucLlRzbu3e8sh0&t=139

  • @Liev04
    @Liev04 2 years ago +1

    Amazing work AGAIN

  • @zee4654
    @zee4654 2 years ago

    Sir, the code is giving an error:
    ..."cannot resolve symbol Model"???

    • @IJApps
      @IJApps  2 years ago +2

      This is an important step you can look at again: ruclips.net/video/yV9nrRIC_R0/видео.html
      You should use whatever you called your file. I called my file "model.tflite" so in my code it's "Model".
      The full code is available here: github.com/IJ-Apps/Image-Classification-App-with-Custom-TensorFlow-Model

    • @zee4654
      @zee4654 2 years ago

      Sir, I ran your code and it runs properly, but there are no constraints: if I take a picture of random things, the model still outputs banana, apple, or orange,
      and it does not predict half a piece of orange.

    • @IJApps
      @IJApps  2 years ago +1

      @@zee4654 You can try playing around with the Python code for training the model to make it more accurate.
      The model was trained to classify an image into 3 classes: banana, orange, and apple; therefore it will always give one of those results, even if a random image is shown.
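      If an "unknown" result is wanted for out-of-scope images (this is not part of the tutorial; it is just a common workaround), one option is a simple confidence threshold on top of the existing maxConfidence / maxPos loop:

          float threshold = 0.8f;  // tune on your own validation images
          if (maxConfidence < threshold) {
              result.setText("Unknown");
          } else {
              result.setText(classes[maxPos]);
          }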

    • @zee4654
      @zee4654 2 years ago

      @@IJApps OK, thank you sir.

    • @김동우-z6c7h
      @김동우-z6c7h 2 years ago

      You have to change it to your own package name.

  • @amirmasood2320
    @amirmasood2320 1 year ago

    I am having trouble with the code. Is there anyone who can help me with it?

  • @vanilladayo
    @vanilladayo 1 year ago

    I'm struggling. How do I display the camera and gallery features?

    • @IJApps
      @IJApps  1 year ago

      Hi, the complete code for the app is found here: github.com/IJ-Apps/Image-Classification-App-with-Custom-TensorFlow-Model
      I also have 2 tutorials on getting images from the camera and gallery:
      - Camera: ruclips.net/video/7Qwur4xKh-c/видео.html
      - Gallery: ruclips.net/video/H1ja8gvTtBE/видео.html
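      For reference, a rough sketch of the classic intent-based approach those tutorials cover (request codes 1 and 3 are arbitrary; resizing and rotation handling are up to you):

          // Camera button: capture a thumbnail-sized photo.
          Intent cameraIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
          startActivityForResult(cameraIntent, 1);

          // Gallery button: let the user pick an existing image.
          Intent galleryIntent = new Intent(Intent.ACTION_PICK,
                  MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
          startActivityForResult(galleryIntent, 3);

          // In onActivityResult(requestCode, resultCode, data):
          //   camera:  Bitmap image = (Bitmap) data.getExtras().get("data");
          //   gallery: Uri uri = data.getData();  // then load the Bitmap from the Uri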

  • @Nadim-qk4sh
    @Nadim-qk4sh 1 year ago

    Halooo imelll

  • @DokebiAgent
    @DokebiAgent 2 years ago +1

    Thank you~

  • @trangvo195
    @trangvo195 1 year ago

    Thank you, you're awesome

  • @ryanazmi2067
    @ryanazmi2067 1 year ago

    Thank you for the video. You deserve a subscribe.

  • @amartyavisen1983
    @amartyavisen1983 2 years ago

    Can anyone send the APK file of the app?

  • @CareerHirings
    @CareerHirings 1 year ago

    I can't thank you enough.

  • @218amalsebastian4
    @218amalsebastian4 2 years ago

    Helpful

  • @Hgrewssauujdkhvcjjipp
    @Hgrewssauujdkhvcjjipp 1 year ago

    Cool 👍

  • @matdon1261
    @matdon1261 2 years ago

    love your video

  • @JasamritRahala
    @JasamritRahala 8 months ago

    Love from 💯🙌❤

  • @experienceY
    @experienceY 1 year ago

    Thanks again

  • @waleedijaz6843
    @waleedijaz6843 1 year ago

    12:20

  • @heidilinnea313
    @heidilinnea313 2 years ago

    💯 PЯӨMӨƧM

  • @alestrauss304
    @alestrauss304 1 year ago

    Thanks for the tutorial! I have a small problem, however: I've tried running the code with my own model (I followed the first tutorial to create it), and it gives me the same result no matter what picture I use. This is the code:
    Best model = Best.newInstance(getApplicationContext());
    // Creates inputs for reference.
    TensorBuffer inputFeature0 = TensorBuffer.createFixedSize(new int[]{1, 32, 32, 3}, DataType.FLOAT32);
    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * imageSize * imageSize * 3);
    byteBuffer.order(ByteOrder.nativeOrder());
    int[] intValues = new int[imageSize * imageSize];
    image.getPixels(intValues, 0, image.getWidth(), 0, 0, image.getWidth(), image.getHeight());
    int pixel = 0;
    // iterate over each pixel and extract R, G, and B values. Add those values individually to the byte buffer.
    for (int i = 0; i < imageSize; i++) {
        for (int j = 0; j < imageSize; j++) {
            int val = intValues[pixel++]; // RGB
            byteBuffer.putFloat(((val >> 16) & 0xFF) * (1.f / 1));
            byteBuffer.putFloat(((val >> 8) & 0xFF) * (1.f / 1));
            byteBuffer.putFloat((val & 0xFF) * (1.f / 1));
        }
    }
    inputFeature0.loadBuffer(byteBuffer);
    // Runs model inference and gets result.
    Best.Outputs outputs = model.process(inputFeature0);
    TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();
    float[] confidences = outputFeature0.getFloatArray();
    // find the index of the class with the biggest confidence.
    int maxPos = 0;
    float maxConfidence = 0;
    for (int i = 0; i < confidences.length; i++) {
        if (confidences[i] > maxConfidence) {
            maxConfidence = confidences[i];
            maxPos = i;
        }
    }
    String[] classes = {"0", "1", "2", "3", "4", "5", "6", "7", "8", "9"};
    result.setText(classes[maxPos]);
    Log.d("Main", String.valueOf(maxPos));
    // Releases model resources if no longer used.
    model.close();

  • @GeorgeTrialonis
    @GeorgeTrialonis 2 years ago +1

    Thank you for the tutorial, IJ. I am new to Android apps and ML. I applied your code to an ML binary classification model and have a problem with inference and getting results. I always get "Benign" when testing skin moles, never "Malignant". When I check the confidence array, I get something like [F@928d693, different every time. Below you will find the code involved. Can you help? Thank you.
    // Runs model inference and gets result.
    MolesAcc98.Outputs outputs = model.process(inputFeature0);
    TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();
    float[] confidences = outputFeature0.getFloatArray();
    // Let's check what confidences gives us
    Toast.makeText(getApplicationContext(), String.valueOf(confidences), Toast.LENGTH_LONG).show();
    // find the index of the class with the biggest confidence.
    int maxPos = 0;
    float maxConfidence = 0;
    for (int i = 0; i < confidences.length; i++) {
        if (confidences[i] > maxConfidence) {
            maxConfidence = confidences[i];
            maxPos = i;
        }
    }
    String[] classes = {"Benign", "Malignant"};
    result.setText(classes[maxPos]);
    // Releases model resources if no longer used.
    model.close();
    } catch (IOException e) {
        // TODO Handle the exception
    }

    • @IJApps
      @IJApps  2 years ago

      You're using String.valueOf on an array. To get an array as a string, you should use Arrays.toString(confidences).
      Additionally, since you're doing binary classification, check how many elements are in the confidences array. If it's just one element, that means that if the confidence number is closer to 0 it's one class, and if it's closer to 1 it's the other class.
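      A minimal sketch of reading that single-output case (Arrays from java.util, Log from android.util), assuming one float in [0, 1]; which class maps to values near 1 depends on how the labels were encoded during training:

          float[] confidences = outputFeature0.getFloatArray();
          Log.d("Main", "confidences = " + Arrays.toString(confidences));  // readable, unlike String.valueOf
          String label;
          if (confidences.length == 1) {
              label = (confidences[0] >= 0.5f) ? "Malignant" : "Benign";  // flip if your encoding is reversed
          } else {
              label = (confidences[0] >= confidences[1]) ? "Benign" : "Malignant";  // two-output case
          }
          result.setText(label);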

    • @GeorgeTrialonis
      @GeorgeTrialonis 2 years ago

      @@IJApps Thank you, IJApps. I corrected the Toast.makeText and saw that I get a number very close to zero, 4.156035E-39, which is always the same regardless of the mole I check. Also, the length of confidences is one (1).

    • @IJApps
      @IJApps  2 years ago

      @@GeorgeTrialonis One other thing I can think of is whether you are providing your model with the right inputs. I don't know what the Python code for your classification model looks like, but this is an important part to pay attention to: ruclips.net/video/yV9nrRIC_R0/видео.html
      You should decide whether you have to divide by 255 or 1. Some other things to check are whether your model takes in RGB or grayscale images, etc.

    • @GeorgeTrialonis
      @GeorgeTrialonis 2 years ago

      @@IJApps Thank you for your assistance. Following your suggestions, I was inspired to use a pretrained model from TensorFlow Hub. This offered better performance, but when converted to .tflite and incorporated into your Java code for Android deployment, I was faced with the same problems. However, after experimentation, the problem seems to have gone away after I changed the line maxPos = i to maxPos = 1. Here is the snippet:
      for (int i = 0; i < confidences.length; i++) {
          if (confidences[i] > maxConfidence) {
              maxConfidence = confidences[i];
              maxPos = 1;
          }
      }
      The app seems to work now. I have tested it with pictures of benign and malignant moles from the internet. Of course, the predictions are not perfect (acc. = 83%). Thank you.

    • @IJApps
      @IJApps  2 years ago

      @@GeorgeTrialonis I am glad you found a solution!

  • @skyearth3557
    @skyearth3557 2 years ago

    at com.example.myapplication.MainActivity.classifyImage(MainActivity.java:96)
    at com.example.myapplication.MainActivity.onActivityResult(MainActivity.java:141)
    After running the code on my device, the app crashes, and I found the above errors using Logcat. Sir, please tell me how to fix these errors.

    • @skyearth3557
      @skyearth3557 2 years ago

      line 96: Model.Outputs outputs = model.process(inputFeature0);
      line 141: classifyImage(image);

    • @IJApps
      @IJApps  2 years ago

      @@skyearth3557 What is the error you are getting?
      Did you name your file "model.tflite" or is it called something else?

  • @skyearth3557
    @skyearth3557 2 years ago

    Hi,
    I tried to run the code on my own, and for some reason the app keeps crashing after I either choose an image from the gallery or take a photo. Can you help me out? Thanks