The way you explain each point makes me realize how good a teacher can be...
I have searched a lot on YouTube
but didn't find anyone providing this kind of skillful explanation along with coding skills.
Thanks a lot, sir ♥️
Both the reason and the intention are very much appreciated. Please keep up your good work bringing CV to the enthusiastic masses!!
Thank you, sir. May God bless you in all the good you do. May He prosper everything your hands do.
Thank you too
I appreciate the hard work you put into teaching and conveying the concepts.
Thank you very much for your excellent teaching, Professor.
Loved the video! Thanks for sharing this ! Looking forward to other stuff from your channel!
Thank you :) Your humbleness humbles me.
Thank you so much for this wonderful content! I was wondering, can registration be used to "stitch" together object detections (such as from U-Net) to track the objects over time?
A big salute to you brother.
Great explanation...love the way you explain....
Thanks a lot 😊
Thank you for the video. I have two questions if possible. First, why did you choose the brute-force matcher with HAMMING distance rather than KNN matching with a ratio test?
Second, you applied RANSAC to reject outlier points; are you able to count them?
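For the second question: when cv2.findHomography is called with cv2.RANSAC, it returns an inlier mask alongside the homography, so counting the rejected points is a one-liner. A minimal sketch (img1 and img2 are assumed to be grayscale images already loaded; the 0.75 ratio is a conventional choice, not from the video) that also shows the KNN + ratio-test alternative from the first question:

import cv2
import numpy as np

orb = cv2.ORB_create(500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# KNN matching with Lowe's ratio test, as an alternative to crossCheck brute force
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in bf.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

points1 = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
points2 = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# findHomography with RANSAC returns an inlier mask, so outliers can be counted
h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)
inliers = int(mask.sum())
print("inliers:", inliers, "outliers:", len(mask) - inliers)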
I think I'll be creating an image stacking algorithm for astrophotography soon!
An issue appeared when I tried to use it: the resulting image still has some rotation and translation.
Thank you very much for this amazing Video.
I have one question: are there any metrics that can be calculated to evaluate the quality of registration?
More specifically, I would like to conduct a comparative study between several registration techniques and justify choosing the one that gives the best results.
Thank you in advance @Python for Microscopists by Sreeni
Obviously the primary reliable metric would be accuracy. For that you need a reference image and an image with known distortion. I recommend taking the original image and translating/rotating it by a known amount so you know what the correct answer should be. You can also apply more advanced distortions, where you quantify the amount, if you want to get a bit fancy.
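A minimal sketch of that idea (reference.png is a hypothetical file; M_est stands for whatever transform your registration recovers):

import cv2
import numpy as np

img = cv2.imread("reference.png", 0)
rows, cols = img.shape

# distort by a known rotation (5 degrees) plus a known translation (10, 20)
M_true = cv2.getRotationMatrix2D((cols / 2, rows / 2), 5, 1.0)
M_true[:, 2] += (10, 20)
distorted = cv2.warpAffine(img, M_true, (cols, rows))

# ...run your registration on (img, distorted) to obtain M_est, then score it
corners = np.float32([[0, 0], [cols, 0], [0, rows], [cols, rows]]).reshape(-1, 1, 2)
err = np.linalg.norm(cv2.transform(corners, M_true) - cv2.transform(corners, M_est), axis=2).mean()
print("mean corner error (pixels):", err)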
Thank you very much for this very informative video! I learnt a lot from your explanations :)
Thank you very much for all your work
Thanks for sharing this!!! It's amazing!!
Thank you, sir! A little extensive with the details, but a very useful video!
Great job sir 👍
Thanks a lot for the tutorials! Really useful.
If I may ask, am I right that you are also involved in APEER development/support? Could you say a few words about the pros and cons of these two approaches to image analysis: (1) using custom-made scripts vs. (2) using a platform like APEER with ready-to-use modules?
Taras, you are absolutely right - I am part of the APEER team at ZEISS. In fact, as part of my APEER efforts I met with a lot of researchers and realized most of them cannot code. This is not acceptable anymore as coding is a key skill to any researcher & engineer in the 21st century. I started this channel to help these researchers overcome their fear of coding. With these videos, I hope to help at least one person get ahead in his/her professional life.
Regarding your question about pros and cons of coding yourself vs using platforms like www.apeer.com:
These two approaches are not mutually exclusive. APEER is designed to convert your code into a module using Docker technology. This enables you (and other non-coders) to easily use the module in custom applications. If someone else already built a module to count particles, why reinvent it? You may as well write code to segment the particles in your image, convert it into a module, and connect it to the existing particle-counting module. In other words, think of APEER as GitHub, except that on APEER the code is stored in a usable and connectable way: the user interacts with a UI, not with the raw code.
I hope this answers your question.
@@DigitalSreeni Sounds great, best of luck with both initiatives!
Great work, amazing video!
But if the template and the input images are different images, what should be done in that case? Is there any method? Please tell me soon.
Thank you
Code for a rigid (Euclidean) transform and a similarity transform using OpenCV Python?
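Not from the video, but a minimal sketch using OpenCV's built-in estimator; points1 and points2 are assumed to be matched point arrays from the keypoint step, moving_img the image to align, and w, h the template size:

import cv2
import numpy as np

# similarity transform = rotation + uniform scale + translation
M, inliers = cv2.estimateAffinePartial2D(points1, points2, method=cv2.RANSAC)

# for a rigid (Euclidean) transform, divide the estimated uniform scale back out
scale = np.sqrt(M[0, 0] ** 2 + M[1, 0] ** 2)
M_rigid = M.copy()
M_rigid[:2, :2] /= scale

aligned = cv2.warpAffine(moving_img, M_rigid, (w, h))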
Thank you for the amazing video.
Is this method suitable for registering multi-modal images? Basically, registering an image from one modality with an image from another modality, e.g. CT and MRI images.
If not, could you point me to a technique that does this? Thanks
Great videos!!! Thanks a lot, man.
Thank you so much for this wonderful content. Sir, if we want to use SIFT or SURF in place of ORB, would you please explain how that works?
Clone and build OpenCV with the non-free features enabled (OPENCV_ENABLE_NONFREE=ON) to get SIFT and SURF.
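Once a build with SIFT is available, the pipeline is almost identical to the ORB one in the video; the main difference is that SIFT produces float descriptors, so you match with L2 distance instead of HAMMING. A sketch, assuming grayscale img1 and img2:

import cv2

sift = cv2.SIFT_create()  # on older builds: cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
# the rest (findHomography + warpPerspective) stays the same as with ORB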
Great video!! I was wondering if we can make sure it registers the image as an exact copy of the template. I used this code and tried to check the difference with image subtraction; the result showed a picture with noise and edges (which should not be the case if it is perfectly registered).
Please suggest a way to address this problem. I have to use image registration for image subtraction, and that is not possible if the two images are not registered properly.
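One caveat and one option, not from the video: some residual difference is expected because warping interpolates pixel values. If the remaining misalignment is the real problem, the keypoint result can be refined with OpenCV's ECC algorithm, which optimizes alignment directly on intensities. A sketch, assuming im1 (template) and im2 (roughly aligned image) are grayscale float32 arrays; depending on your OpenCV version you may need to pass an input mask and Gaussian filter size as extra arguments:

import cv2
import numpy as np

warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
cc, warp = cv2.findTransformECC(im1, im2, warp, cv2.MOTION_AFFINE, criteria)
refined = cv2.warpAffine(im2, warp, (im1.shape[1], im1.shape[0]),
                         flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
diff = cv2.absdiff(im1, refined)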
Awesome video.
I would like to know how to apply this to whole-slide images at high resolution, as they are huge and cannot be loaded all at once.
You can use slideio or something similar to load whole slides. Then extract individual chunks to work on them. Installing slideio is a pain and I was unable to manage it despite repeated tries.
@@DigitalSreeni Alright. Thank You. I can try openslide also.
Does this work for non-rigid images?
I have two images with similar objects but slightly misaligned. What method do you recommend? I can send you the images by email or something.
What is your goal? To align both images? If so, try this: ruclips.net/video/5FEr5SiXB1g/видео.html
great video, thank you sir!
Thanks a lot, appreciated effort 💙💙
Sir,
1. Why did you sort the matches? Is there any special benefit or reason for sorting them?
2. Just for the sake of knowledge: is this "registration of images" also used on the disfigured faces of people who went through accidents?
1. Sorting the matches helps find the best ones; otherwise you will end up with a whole bunch of matches that are probably not good (see the sketch after this reply).
2. Not sure of that application. This approach looks at features in two images, matches them and morphs one of the images to match the other.
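For reference, the sorting pattern from answer 1 looks like this (the 10% cutoff is just an illustrative choice):

matches = sorted(matches, key=lambda m: m.distance)  # lower distance = better match
good = matches[:int(len(matches) * 0.1)]             # keep only the best ones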
I have tried this code with a flipped, distorted image and it doesn't work.
Fantastic tutorial! Thanks
You're very welcome!
Great work!! Could you add the source code, please?
Is it possible to register images that are non-geometrical and textured, and also to register unstable images, that is, images that are in motion or not static?
Not sure what you mean by 'non-geometrical and textured'. If there are features that can be detected using the key point detector then there is a good chance the registration works. If there are no features that can be detected then even a human will struggle to register images.
@@DigitalSreeni thank you for replying.
This is for my research paper on image registration in augmented reality. I read many papers and found that image registration is a known problem in augmented reality: the virtual object only superimposes onto flat surfaces and cannot be placed on unevenly shaped objects. If possible, could you guide me through this?
And thanks for the lectures , it really helped me to understand the concepts very well.
First of all, thank you very much, sir. May I ask how to reduce the size of the result images? On my PC they fill the whole monitor.
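A likely fix (an assumption, since the display code isn't shown here): create a resizable window before imshow, with result standing for the registered image:

import cv2

cv2.namedWindow("registered", cv2.WINDOW_NORMAL)  # resizable, unlike the default
cv2.resizeWindow("registered", 800, 600)
cv2.imshow("registered", result)
cv2.waitKey(0)
cv2.destroyAllWindows()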
Thank you very much, sir, for the video; I learned a lot. Could you do a video on multi-modal image registration using homography? It would be very useful.
Hi sir, if any training is available for image processing, kindly let me know. I am happy to join.
Can I do the same with an image and a video feed from a webcam, assuming both are looking at the same object? Thanks in advance.
I don't see why not. This approach just identifies key points in two images, maps them, and then aligns the distorted image. So you should be able to use any image. For video, you will be dealing with a series of images, so you need to be a bit clever with your implementation.
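A minimal sketch of the video case (template.png is a hypothetical reference image; the registration itself is the same per-frame keypoint pipeline as in the video):

import cv2

template = cv2.imread("template.png", 0)
cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # ...detect keypoints in gray, match against template, warp frame...
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()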
Thank you for sharing!!!
Do you have a personal email to which I can submit some quick technical questions?
Why are the images converted to grayscale?
No specific reason. You can try working with color images.
We have seen many tutorials, and they all convert images to grayscale, for good reason. An RGB image has three channels, so it is a 3D array and takes three times as many bytes as a grayscale image, which is 2D. The 2D image therefore takes less space and less time to process. You can work with color images, but it will take more time.
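A quick check of the size claim (photo.png is a hypothetical file):

import cv2

img = cv2.imread("photo.png")                 # BGR, shape (rows, cols, 3)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # shape (rows, cols)
print(img.nbytes, gray.nbytes)                # grayscale is one third the size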
How can I calculate the root mean square error after all this processing?
RMSE calculation can be done if you have 2 images, one original and the other after transformation. It is tricky with image registration unless you have an image with known displacement. You can intentionally displace (translate) the image by a known amount and perform registration. Now you will have original (known) displacement and the calculated displacement. Then RMSE can be easily calculated.
import math
from sklearn.metrics import mean_squared_error
mse = mean_squared_error(actual, predicted)
rmse = math.sqrt(mse)
@@DigitalSreeni thanks a lot 🙂
Thank you
I can't run your code.
Thank you for a good video!
Could you cover how to alter the keypoints in a future video? I have a problem with wrong matches between two dynamics in a kidney MRI. I've tried limiting the distance, but it makes no difference. So I've been trying to match the keypoints in 4 sections (instead of matching the whole image) to avoid matches between the wrong sections (the images are very similar and just need to be warped a little) and then apply the matches to the whole image when warping, but without luck.
I never had to explore altering keypoints, so sorry, I cannot help right away without exploring the topic.
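One way to sketch the sectioned-matching idea without altering keypoints at all is to filter the matches afterwards, discarding any match whose two endpoints fall in different quadrants. This assumes kp1, kp2, matches, img1, and img2 from the standard ORB pipeline:

def quadrant(pt, w, h):
    return (pt[0] > w / 2, pt[1] > h / 2)

h1, w1 = img1.shape[:2]
h2, w2 = img2.shape[:2]
filtered = [m for m in matches
            if quadrant(kp1[m.queryIdx].pt, w1, h1) == quadrant(kp2[m.trainIdx].pt, w2, h2)]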
Nice tutorial
Thanks
h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)
cv2.error: D:\Build\OpenCV\opencv-3.3.0\modules\calib3d\src\ptsetreg.cpp:180: error: (-215) count >= 0 && count2 == count in function cv::RANSACPointSetRegistrator::run
How can I solve this problem, sir?
Looks like no features were detected, as reported by the failed count >= 0 check. Also, count2 may refer to the points detected in the second image, which must be the same number as in the first. Please make sure the images are properly read into Python and that they have some features that can be detected.
You can also try other object detectors like Hessian.
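A quick diagnostic sketch for this error (image1.jpg is a hypothetical path):

import cv2

img = cv2.imread("image1.jpg", 0)
if img is None:
    raise IOError("image failed to load - check the path")
orb = cv2.ORB_create(500)
kp, des = orb.detectAndCompute(img, None)
print(len(kp), "keypoints detected")  # zero keypoints would explain the RANSAC error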
I need this code. Could you please provide it, sir?
All my code is on my GitHub repo. Please look under the video description for the link.
You explain in a great way, but the code font is too small; I am not able to see it.
The font of code in my early videos was small. You can download the code from the following link and follow along. github.com/bnsreenu/python_for_microscopists
Thanks a lot, sir. Please share your code! Also, can you tell me more about non-rigid image registration?
Here is a great library for non rigid image registration: github.com/almarklein/pyelastix
Thanks
You're welcome!
When aligning multi-modal images with more complex scenes or large differences in image content and appearance, this method is relatively weak, and there are many key points that are wrongly matched.😒
You can try a few alternatives...
Feature-based methods: Feature-based methods like SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features) are robust to changes in scale, orientation, and lighting conditions, and can be used to identify and match key points between images. Once the key points are matched, the transformation that aligns the images can be estimated using methods like RANSAC (Random Sample Consensus) or Least Squares.
Optical flow-based methods: Optical flow-based methods estimate the motion of pixels between frames, and can be used to align multi-modal images. These methods work well for videos or image sequences, and can be used to estimate the transformation that aligns the images. There are several optical flow methods available, such as the Lucas-Kanade method and the Horn-Schunck method.
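For illustration, a minimal Lucas-Kanade sketch that aligns one frame to another (prev and curr are assumed to be consecutive grayscale frames):

import cv2

p0 = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01, minDistance=7)
p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)

# keep only successfully tracked points, then estimate the aligning transform
good_old = p0[status.flatten() == 1]
good_new = p1[status.flatten() == 1]
M, inliers = cv2.estimateAffinePartial2D(good_new, good_old, method=cv2.RANSAC)
aligned = cv2.warpAffine(curr, M, (prev.shape[1], prev.shape[0]))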