Thank you for helping me more than my advisor :)
Hi Craig !
Awesome video, thanks for the knowledge!
Greetings, a young researcher from Germany!
Sorry for adding a bit of a critical comment here, but from a professional Image Analyst's point of view there are some things that are not quite correct and should not be left uncorrected for others who watch this.
However, there is by far not enough space in a comment section to explain them in full.
First: background is not noise, which is not autofluorescence, which is not unspecific staining, which is not uneven illumination. And there is no such thing as "background noise"; the term lumps together two different imaging inaccuracies. One can subtract background but not noise!
All of these need substantially different methods to reduce them:
- background can be measured and subtracted pretty much as shown in the beginning of the video.
- noise can only be reduced by the imaging and camera setup (e.g. line or frame averaging in confocal systems); filtering changes all intensities, which makes it an option in processing but not before measurements
- auto-fluorescence cannot be subtracted, since it is uneven, and it also cannot be estimated from an unlabelled sample. It needs special technical equipment (spectral imaging and linear unmixing)
- unspecific signal can be reduced in processing using the rolling ball or sliding paraboloid method, but this is not quantitative and should NOT be used before intensity measurements!
The rolling ball size also has to be determined based on object size, not just estimated by eye!
Creating a background and using the Image Calculator is just what the "Subtract Background" method does anyway, and it is still not quantitative!
Setting an arbitrary background intensity using brightness and contrast on the rolling-ball-estimated background image and then subtracting that 🤯 is definitely NOT an option, nor by any means good scientific practice.
I would be careful about showing those subjective methods to students, who might use them unquestioningly and then either get accused of image manipulation or simply end up with inaccurate or wrong analysis results.
Students should get a sound education in image analysis from image analysts, for example through organizations such as GloBIAS (formerly NEUBIAS). Additional info on educational workshops adhering to good scientific practice standards: www.biovoxxel.de/workshops/
Great explanation, helped a lot in my thesis!
Thank you! Helped me in my Image Analysis course in my Master's :D
It's an informative video, thanks for that. I have a question: how do I remove the additional particles from a microscopic image? The non-relevant particles adhere to the camera and show up in the images in large numbers.
Hi, difficult to comment without seeing example images. If you have particles on the camera which are static but the image is moving then you could do some frame averaging to identify the particles. Or maybe take a blank image (just the particles?) and subtract that from your real image. It’s not clear though how particles might adhere to the camera in a microscope-based system. Could you provide a bit more information? Do you maybe need to clear out all the dust in the optical path?
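If you can capture a blank reference image (just the stuck particles, no sample), the subtraction itself is straightforward in ImageJ. A minimal macro sketch, assuming the blank and the sample have the same dimensions and bit depth (the file names are placeholders):

```
// Sketch only: remove static camera particles by subtracting a blank reference image.
// "sample.tif" and "blank.tif" are placeholder file names.
open("blank.tif");
open("sample.tif");
// "Subtract create" writes the result to a new window and leaves both inputs untouched.
imageCalculator("Subtract create", "sample.tif", "blank.tif");
rename("sample_minus_blank");
```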
Thanks for the video. I have images with a totally black background; I have been selecting the object manually and then using Clear Outside to remove the background. Is there a way to select the background if it is a solid colour, such as black? Thanks
Hello, I have an image with hidden text but I could not view it because I don't know how... Can you help, please?
Very helpful tutorial video, thanks for sharing. May I ask two questions? If I use the Math > Subtract approach to remove a noisy background, does that mean I can still quantify the particles? If so, I want to count the particles in C. elegans; how can I decide the threshold for the treated and non-treated groups? The non-treated worms have faint signals and a more obvious noisy background. Many thanks for your help.
Hi. If the two sets of images have different background levels then it’s tricky. I think your best option is to use different thresholds for both but justify your decision in any methods section. Another possibility is to reduce intensity by x% of total brightness. As long as you are clear in your methods and consistent in your approach, I can’t think of another way.
And yes, subtract the background and then analyse is OK. But describe this in the methods.
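For what it's worth, a minimal macro sketch of that workflow (subtract a fixed background value, apply the same manual threshold to every image, then count particles) might look like the following. The numbers are placeholders you would have to measure and justify yourself, and it assumes an 8-bit image:

```
// Sketch: fixed background subtraction + the same manual threshold for all groups.
// bgValue and lower are placeholders, not recommendations.
bgValue = 25;                       // justified background level
lower   = 60;                       // identical threshold for treated and non-treated images
run("Subtract...", "value=" + bgValue);
setThreshold(lower, 255);           // assumes an 8-bit image
run("Convert to Mask");
run("Analyze Particles...", "size=5-Infinity display summarize");
```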
Hello! Thank you for your video - really helpful for my honours project dissertation. Just to double-check, could this sort of background correction (in particular the first way you show, subtracting the average background intensity) be used before doing statistical analysis (i.e., the Manders correlation coefficient)?
Hi, yes. I think that would be fine as long as you clearly state in any methods section how you selected the BG value and how/why you subtracted it. If this is a co-localisation experiment then both channels may have different BG levels. Maybe also think about correcting for the noise as shown in one of my colocalisation tutorials. Good luck.
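To make the BG value selection reproducible, one option is to measure the mean of a background ROI in each channel and subtract that number from the whole channel before running the colocalisation analysis. A rough macro sketch, assuming the channels are already split into windows named "C1" and "C2" (placeholder titles; repeat the same steps for "C2"):

```
// Sketch: estimate the background from a ROI mean, then subtract it from the channel.
selectWindow("C1");                        // placeholder window title
makeRectangle(10, 10, 50, 50);             // ROI drawn over a clearly background region
getStatistics(area, mean, min, max, std);  // mean = estimated background level
run("Select None");                        // remove the ROI so the whole image is processed
run("Subtract...", "value=" + mean);
```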
Sir, will you help me with some image processing related to counting trichomes (hair-like structures on leaves)? I am trying but not getting proper results.
Hi. I replied to the email you sent, but here (for others) is my suggestion:
I have looked at the images you sent and can only think of one suggestion that does not involve simply measuring each trichome by hand. One thing to think about: do you need to measure the entire image? If the trichomes are regularly arranged then you could just measure (by hand) a small section and extrapolate from there, simply using the line tool to identify and measure each hair-like structure. That is slow but by far the most accurate.
If you need an automated method for multiple images then I suggest you try the following (see attached image):
1. Use Fiji as it has the plugins you will need
2. Change the image to 8-bit greyscale (Image > Type > 8-bit)
3. Enhance the contrast (Process > Enhance Contrast)
4. Threshold the hairs (Image > Adjust > Threshold) and apply
5. Remove single pixels (Process > Noise > Despeckle)
6. Skeletonize (Process > Binary > Skeletonize)
7. Analyze the skeleton (Analyze > Skeleton > Analyze Skeleton (2D/3D))
That’s my best guess for now. You will need to read the instructions for the various plugins and play with the parameters to get a better result than I have; a rough macro sketch of these steps follows below.
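As a starting point, steps 2–7 could be strung together in an ImageJ macro roughly like this. The contrast setting, threshold method and the Analyze Skeleton options are assumptions you would need to tune, and the exact option strings can differ between Fiji versions:

```
// Rough sketch of steps 2-7 above; parameter values are guesses to be tuned.
run("8-bit");                                   // 2. convert to 8-bit greyscale
run("Enhance Contrast...", "saturated=0.35");   // 3. enhance the contrast
setAutoThreshold("Default dark");               // 4. threshold the hairs...
setOption("BlackBackground", true);
run("Convert to Mask");                         //    ...and apply
run("Despeckle");                               // 5. remove single noisy pixels
run("Skeletonize");                             // 6. reduce each hair to a 1-pixel line
run("Analyze Skeleton (2D/3D)", "prune=none");  // 7. count and measure the skeletons
```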
Very good video, thanks for doing it :) I have a naive question: the background-free picture that you create with the 3rd method, by subtracting the BG picture from the original picture... wouldn't this outcome be the same as the outcome from the second method? Because the BG picture is generated using the rolling ball radius, so I understand it is the same algorithm, right? Thank you for your response
Hi, yes, I think you are right. My reason for expanding on the 3rd method was to show how the calculated BG image can be manipulated prior to being used. As I say in the video, only use that method if you want to make a pretty picture and not if you intend to measure intensity. Thanks for watching. C
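For anyone who wants to check the equivalence, both routes can be recorded as a macro: with its "create" option, Subtract Background outputs the estimated background image instead of subtracting it, and the Image Calculator then does the subtraction explicitly. A sketch (window titles are placeholders, and the "create" option name is as recorded in recent ImageJ versions):

```
// Sketch: generate the rolling-ball background as its own image, then subtract it manually.
original = getTitle();
run("Duplicate...", "title=background_estimate");
// "create" tells Subtract Background to return the background instead of subtracting it.
run("Subtract Background...", "rolling=50 create");
imageCalculator("Subtract create", original, "background_estimate");
```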
Hello Mr. Daly, my name is António Santos. I am an MD doing my first scientific project. I am analysing a serration pattern in fluorescence microscopy and trying to develop a way to identify 2 different patterns using Fiji, to then use this in diagnostics. Could I contact you? Do you have any idea if this is possible?
Hi António. Sure, find me at the University of Glasgow and send an image. I'm thinking FFTs might be your answer. C.
@CraigDaly Thank you so much Mr. Daly, I will send you some images!
How do I subtract the background if my image is a VSI file with 2 different wavelengths (408 nm and 480 nm)? Is there any way to separate the channels based on wavelength, so that I can subtract the background accordingly?
Sorry, not sure as I have never worked with the VSI format. Does ImageJ’s ‘Split Channels’ not work? Maybe change the image type to 8-bit and try from there?
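If Fiji is installed, the Bio-Formats importer can usually read .vsi files and split the wavelengths into separate windows on import, after which each channel can be background-corrected on its own. A rough sketch (the file path is a placeholder and the option string should be checked against your Bio-Formats version):

```
// Sketch: open a .vsi file with Bio-Formats, splitting the channels on import.
opts = "open=[C:/data/sample.vsi] color_mode=Default view=Hyperstack stack_order=XYCZT split_channels";
run("Bio-Formats Importer", opts);
// For an image that is already open as a multi-channel composite:
// run("Split Channels");
```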
Nice explanation. Thanks.
Awesome video!
Thanks. That’s appreciated.
Thanks.
6:45
Very helpful video, thank you.