Learning Fiji for an internship and this helped loads, thank you very much
Hello, thanks for the helpful video.
I am new to the world of image processing. I will be starting a project to identify fungal spore cells from images taken of microscopy slides (fluorescent and/or polarized). The goal is to train a model that can distinguish spores of different fungal species. In my case, the spores of the two species of interest differ slightly in size and spore-wall structure. Essentially, my end goal is to process thousands of images of spores with a working model (1. identify the spores in a slide image, and 2. determine the species based on known parameters unique to each species).
My apologies for the long-winded inquiry, but any pointers or recommendations you could provide would be immensely helpful. I am trying to determine which software packages and/or programs could make this possible. Do you think this would be possible using only Fiji and ImageJ, or do I need to look into StarDist and Ilastik?
Thank you for the downloadable practice images🤓😁
This really answered some annoying to research questions i had!
Hey, thanks for your video. It was wonderful to learn so much about ImageJ. I just had a query: how can I measure the velocity of a rising bubble in a solution using ImageJ?
great intuition and well explained. Can I use your slides?
Go ahead!
Super useful mate!
Do you happen to know how to stitch images without compromising the original image data integrity? For example, when analyzing fluorescence intensity from images of sections of a C. elegans, the intensity and pixelation change when stitched. How can you control for that?
The integrity doesn't have to be compromised; it depends on what method you use. Typically, if there is any interpolation of the image, i.e. any rotation or sub-pixel movement, this can change values. Some techniques that correct for artefacts will also change the values; e.g. vignetting is sometimes corrected, which will alter values. As long as the introduced errors are not biased toward one of your conditions, you should be able to proceed (with caution). If you want no changes, then choose a simple translation transformation rather than a rigid or affine transformation.
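To make the point above concrete, here is a minimal sketch (assuming NumPy and SciPy are installed; the array and shift values are made up for illustration) showing that a whole-pixel translation only moves values, while a rotation resamples them through interpolation:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(64, 64)).astype(float)

# Whole-pixel translation: pixel values are moved, never recomputed.
shifted = ndimage.shift(img, shift=(5, 3), order=0)

# Rotation requires interpolation: resampled values differ from the originals.
rotated = ndimage.rotate(img, angle=1.0, reshape=False, order=1)

# The overlapping region of the translated image matches the original exactly.
print(np.array_equal(shifted[5:, 3:], img[:-5, :-3]))  # True

# The rotated image generally does not preserve the original values.
print(np.array_equal(rotated, img))
```

So if your downstream analysis is intensity measurement, restricting the stitching to integer translations keeps the raw values intact in the overlap-free regions.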
Thank you!
This was super useful, thanks
Cheers! If you have any topics you would like to see in the future, please feel free to suggest them.
Amazing talk.
Thanks for the message. Happy to take requests, if you have a topic you want covering.
Dominic Waithe I have been attending talks since morning, but you explained it so well 🙌🏻
If I have any questions in the future, I will definitely ask. Thanks
Why is the array of numbers represented in 2D?
That is the data which underlies the image.
How does the single value of a pixel determine two or more different colours in a single image?
I am not quite sure what you mean. Can you elaborate a bit more?
@odlogo If I have a combination of colours, e.g. two colours, green and red, how can I define the two colours with only one data array?
@sharavanakkumarsk5367 Hi, you cannot define two independent channels from one data array. I think you might be talking about the Lookup-Tables (LUTs), which map from the data to the colour shown on the monitor. In that case it is the same underlying data visualised in two different ways. [x1,x2,x3...xn] -> [[0,0,x1],[0,0,x2],[0,0,x3]....[0,0,xn]] would give a blue appearance in an RGB colourspace, whereas [x1,x2,x3...xn] -> [[x1,0,0],[x2,0,0],[x3,0,0]....[xn,0,0]] would give red, where x is your intensity distribution in [0,255]. The underlying array x is the same in both cases; however, the mapping defined by the LUT controls the resulting colour when displayed through your graphics card and monitor. Is that clearer?
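A tiny NumPy sketch of the LUT mapping described above (the sample intensities are made up for illustration): the same grayscale array is mapped once to red and once to blue, and the underlying values are identical in both.

```python
import numpy as np

# One underlying grayscale intensity array x, values in [0, 255].
x = np.array([0, 64, 128, 255], dtype=np.uint8)

zeros = np.zeros_like(x)

# "Red" LUT: each value x -> (x, 0, 0) in RGB.
red = np.stack([x, zeros, zeros], axis=-1)
# "Blue" LUT: the same value x -> (0, 0, x) in RGB.
blue = np.stack([zeros, zeros, x], axis=-1)

print(red.tolist())   # [[0, 0, 0], [64, 0, 0], [128, 0, 0], [255, 0, 0]]
print(blue.tolist())  # [[0, 0, 0], [0, 0, 64], [0, 0, 128], [0, 0, 255]]

# Only the display mapping differs; the data channel is the same array.
print(np.array_equal(red[..., 0], blue[..., 2]))  # True
```

This is why switching an image's LUT in ImageJ changes how it looks but not any measurement you take from it.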
@odlogo Yeah, I understand now, thanks
Thank you
thanks