I've always wanted to learn about this topic. Great. I keep learning from your series.
What an informative lecture! This has really opened my eyes to the potential of ML inside the world of microscopy; I'll certainly do some more reading into this subject.
Thanks :)
Thank you for the video and congratulations!!!
Really enjoying your course! I'd like to see a video on a multifocus image fusion GAN next ;)
Quality content!
In semantic segmentation every pixel is classified, so we engineer the features for every pixel (the dataframe shows a value for each filter for each pixel). If we were doing classification of whole images, how would we engineer the features using the filters (instead of for each pixel)? Would the dataframe show a value for each filter for each image?
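For image-level classification, one common approach (a sketch of the general idea, not necessarily what the video does) is to apply the same filter bank and then collapse each per-pixel filter response into a few image-wide statistics, so the dataframe ends up with one row per image instead of one row per pixel:

```python
import numpy as np
from scipy import ndimage as ndi

def image_level_features(img):
    """One feature row per IMAGE: apply a filter bank as in per-pixel
    segmentation, then summarize each response with image-wide
    statistics instead of keeping every pixel as a separate row."""
    responses = {
        "gaussian_s3": ndi.gaussian_filter(img, sigma=3),
        "sobel": ndi.sobel(img),
        "median_3": ndi.median_filter(img, size=3),
    }
    feats = {}
    for name, resp in responses.items():
        # Mean and std are just two possible summaries; histograms,
        # percentiles, or texture statistics work the same way.
        feats[f"{name}_mean"] = float(resp.mean())
        feats[f"{name}_std"] = float(resp.std())
    return feats

# One row (a dict of 6 values) for one 64x64 image
row = image_level_features(np.random.rand(64, 64))
```

Stacking one such row per image gives a dataframe with a value for each filter statistic for each image, which can then feed any standard classifier.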
Great content! Are there plans to make a video tutorial about unsupervised segmentation? Maybe start with nuclear segmentation?
Thanks!
You are AWESOME!!!
You too!!
Great content... thank you!
Thank you for your nice video. At 10:29, lines 83 and 84 load the dataset used to train the model, right? They are .tif files, and I think it is only one image. Why can we split it into a train set and a test set as in line 104? Please explain so I can understand clearly. Thank you very much. (Line 104: X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.4, random_state=20).)
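The reason a single .tif can be split is that for pixel-wise segmentation the "samples" are pixels, not images: each pixel becomes one row of X, so one image already provides thousands of samples. A minimal sketch (a random array stands in here for the actual .tif and its label mask):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# One image -> one feature row per PIXEL, so a single image still
# yields thousands of samples that can be split into train/test.
img = np.random.rand(64, 64)        # stand-in for the training .tif
mask = (img > 0.5).astype(int)      # stand-in for its labeled mask

X = img.reshape(-1, 1)              # 4096 rows, 1 feature (intensity)
Y = mask.reshape(-1)                # 4096 labels, one per pixel

X_train, X_test, y_train, y_test = train_test_split(
    X, Y, test_size=0.4, random_state=20)
```

In the real workflow X would have one column per filter response rather than the single raw-intensity column used here, but the row-wise split works identically.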
Thanks!
But how can we implement this same method on CSV data (not images) for anomaly detection purposes? There are no digital filters for feature engineering and extraction for that type of dataset, are there?
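For tabular/CSV data the columns themselves are the starting features, and new ones are engineered with statistics (rolling windows, lags, differences) rather than image filters. A hedged sketch of one common approach, using scikit-learn's IsolationForest (my choice for illustration, not something from the video):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Synthetic "CSV" data: mostly normal values plus a few injected spikes
rng = np.random.default_rng(0)
df = pd.DataFrame({"value": rng.normal(0, 1, 500)})
df.loc[495:, "value"] = 10.0      # obvious anomalies at the end

# Engineered features play the role that filter responses play for
# images: each new column is another view of the raw signal.
df["rolling_mean"] = df["value"].rolling(5, min_periods=1).mean()
df["rolling_std"] = df["value"].rolling(5, min_periods=1).std().fillna(0)
df["diff"] = df["value"].diff().fillna(0)

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(df)    # -1 = anomaly, 1 = normal
```

The same engineered-feature dataframe could instead feed the Random Forest classifier from the video if labeled anomalies were available; IsolationForest is just one unsupervised option.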
Thank you.
Thank you very much.
Good luck, keep it up!
Thanks
Great channel!
Thank you!
Sir, how do I combine U-Net and Bi-LSTM, and how do I use this model in signal processing (not in image processing)?
17:08 I don't understand the part where you say you are using multiple files but calling one tiff image. Is it saved under one name but somehow containing 50 z-stacks? I am trying to train a model with hundreds of images that have different names. How do I train the model: by running 100 different neural networks in parallel, or by using all 100 images together as the input to one neural network? And how does the coding part work then? Do I give the directory of the folder?
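On the many-files question: the usual pattern is one network trained on all images together, gathering the files with a directory glob and stacking them into a single array, where each image is just another sample. A sketch of the loading pattern (.npy files stand in for .tif so the snippet is self-contained; in practice you would read the tiffs with something like tifffile.imread or cv2.imread):

```python
import glob
import os
import tempfile
import numpy as np

# Pretend there are 5 separately named image files in a folder
tmp = tempfile.mkdtemp()
for i in range(5):
    np.save(os.path.join(tmp, f"img_{i}.npy"), np.random.rand(32, 32))

# Glob the directory, read every file, stack into one training array.
paths = sorted(glob.glob(os.path.join(tmp, "*.npy")))
stack = np.stack([np.load(p) for p in paths])   # shape: (n_images, H, W)

# ONE network sees all of these: each image (or each pixel row derived
# from it) is simply another sample in the training set / batch.
print(stack.shape)  # (5, 32, 32)
```

A multi-page tiff with 50 z-slices is just a special case: a single file that already reads in as an array of shape (50, H, W), equivalent to stacking 50 separate files.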
Great content! I'm wondering if you have done any formal study on this topic and may have a paper to share with us?
No formal study, just anecdotal observation on a few datasets. A study would be good, but I am sure you'll find some published content on similar topics.
So, given the limitations on data availability, there is no other feasible way to surpass the score of a traditional feature engineering approach? I mean, student-teacher models, self-supervision, semi-supervision, adding synthetic data from a GAN, etc., will all fail to segment it (substantially) better? I know you can abuse a dataset to death just to make the point that your 'new' architecture is better than everyone else's, but from your video and your experience I get that all this modern fanfare won't help you in a real (non-academic) setting if you don't have a rather large dataset, no matter what.
Great content by the way
Thank you, dear.
Welcome 😊
Hi professor, I have been contacting you by email. I have a research idea, and with your personal help or your resources I think I can give it a shot. I am currently in TE, pursuing Computer Engineering, and I have been following your videos for the past 2 years or more. Your help will be much appreciated and will also be a strong foundation for my future.
Thanks and regards.
Aryan Sakhala
Lol what a joke. Don’t do this domain if you are too lazy and stupid to do it on your own! Lol
Thanks!
Thank you very much, really appreciate the contribution.
If you use PyTorch, you will have plenty of material for more videos.