Could you respond to my query on your video for the paper with core on MVTec detection? The question is how to get the labels for multi-class anomaly detection. ruclips.net/video/cb64EyefDuA/видео.htmlsi=627RTBgcTXRbmtQH
Hi, I saw your query. If you have multiple classes to detect, it becomes a classification problem. That video was about anomaly detection, so it is only useful if you want to detect whether a new image belongs to the known distribution or not. You can also use anomaly detection models as a classifier, but then you need N anomaly detection models, each representing the images of one specific class, where N is the total number of classes. The better approach is to use a classifier instead, as shown in this video. I hope that helps.
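In pseudocode, the N-models idea is just "score the image with every per-class model and keep the lowest score". A minimal sketch (not from the video; the per-class models and their score() method are placeholders for whatever anomaly detectors you have fitted):

```python
# Sketch: using one anomaly-detection model per class as a makeshift classifier.
# `models` maps class name -> a fitted per-class model exposing a hypothetical
# score(image) method that returns an anomaly score (lower = closer to that
# class's known distribution). `threshold` is whatever cutoff you calibrated.

def classify_with_anomaly_models(image, models, threshold):
    # Score the image under every per-class anomaly model.
    scores = {cls: model.score(image) for cls, model in models.items()}
    best_cls = min(scores, key=scores.get)
    # If even the best score exceeds the threshold, the image fits no known class.
    if scores[best_cls] > threshold:
        return "unknown", scores
    return best_cls, scores
```

If two or three models all give similarly low scores, as you describe below, the per-class scores are simply not separable enough, which is exactly why a single classifier is usually the better tool here.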
@Mohankumardash Thanks for the response. I actually tried multiple anomaly detection models, but when I predict on a test image and run it through all of them, at least 2 to 3 of the anomaly detection models match the known distribution with a low anomaly score, and the difference between the scores is
@Mohankumardash I saw the video and it uses a pre-trained MobileNet model. For the MVTec dataset, can we use the similar idea of using hooks (mentioned in the anomaly detection video) to aggregate intermediate layers (1536 dimensions), avoid the last layer (2048), and then train the model for better performance? By the way, great explanation in this video.
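Something along these lines (a rough sketch of what I mean, assuming a torchvision ResNet50 backbone; tapping layer2 + layer3 gives 512 + 1024 = 1536 channels, whereas the final layer would be 2048):

```python
# Sketch of the hook idea for intermediate-feature extraction.
# The backbone and layer choice are assumptions; any pre-trained model works.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

features = {}

def make_hook(name):
    def hook(module, inputs, output):
        features[name] = output
    return hook

# Register hooks on intermediate blocks instead of the final 2048-dim layer.
model.layer2.register_forward_hook(make_hook("layer2"))
model.layer3.register_forward_hook(make_hook("layer3"))

@torch.no_grad()
def embed(x):
    _ = model(x)  # forward pass fills the `features` dict via the hooks
    # Global-average-pool each hooked feature map and concatenate: 512 + 1024 = 1536 dims.
    pooled = [F.adaptive_avg_pool2d(features[k], 1).flatten(1) for k in ("layer2", "layer3")]
    return torch.cat(pooled, dim=1)

emb = embed(torch.randn(1, 3, 224, 224))  # shape: (1, 1536)
```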
@kidsstories-cw2yc Yes, that is definitely possible. Just like in the mobile-scratch case, where I had 3 classes, you first define the classes in your dataset and divide the images into subfolders. Then you can easily fine-tune the pre-trained model (MobileNet or anything else) on your dataset. There is another way, too: use the pre-trained model as a feature extractor, get the 1536-dimensional vector embedding for each image, and then train a basic classifier like an SVM, decision tree, or random forest for the classification task (a sketch of this option is below). However, in my opinion, this approach tends to give subpar results compared to fine-tuning the whole model.
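A minimal sketch of that second option (assuming images arranged in class subfolders under a hypothetical dataset_root/ directory, and an embed() function like the hook sketch above that returns 1536-dim vectors):

```python
# Sketch: frozen feature extractor + classic classifier (SVM) on the embeddings.
import numpy as np
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # ImageNet stats
])
ds = datasets.ImageFolder("dataset_root", transform=tfm)  # one subfolder per class
loader = DataLoader(ds, batch_size=32, shuffle=False)

feats, labels = [], []
with torch.no_grad():
    for imgs, ys in loader:
        feats.append(embed(imgs).cpu().numpy())  # (batch, 1536) embeddings
        labels.append(ys.numpy())
X, y = np.concatenate(feats), np.concatenate(labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)
clf = SVC(kernel="rbf")  # or DecisionTreeClassifier / RandomForestClassifier
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

You can then compare this held-out accuracy against a fine-tuned model to see how much you lose by keeping the backbone frozen.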
@Mohankumardash OK, thanks. Do you have a tutorial video on the approach suggested above? Any video that approximates it would also be fine with me.