Thank you for this instructive video! I am really looking forward to attempting this on my own data set. I have used dragonfly a bit, it is excellent software, your support team is helpful, and your developers seem to do a great job. Kudos! I am now going to attempt the auto classification for the first time, so I went to this video.
I have some questions. At t=22:00, one can see the 3D result of the neural-network classifier's segmentation. The small fibres seem well isolated, except where there is a kink in the fibre bundle: in the centre of the lower fibre strand there is a rather red-shaded part. I guess this is not due to big fibres in front, because in the beautiful footage at the end the small-fibre "knot" seems directly accessible from the outside.
So this might be misclassification, which in my experience is common with neural networks (unless one has a neat data set like this one).
(i) Is there an option to review the validation output of the final epoch, e.g. by creating a data set and multi-ROI? (That would help to see whether irregularities were captured by the training.)
(ii) Is it possible to train the existing classifier for several more epochs after adding some training samples from those critical areas? (That would simply be quicker than re-training from scratch.)
(iii) Furthermore, is it possible to find out which properties of the training stack the classifier has learned, i.e. does it learn the position of the bundles in the slice, the diameter/island area, or both? (Both diameter and position change at the fibre-bundle kink.) I know that it is not generally possible to decode the trained state of the network layers. I'd rather like to understand which aspects contribute how much to the classification, in order to anticipate where "deep learning" segmentation is useful.
Thank you in advance! I'm looking forward to more tutorial videos: they are excellent for users, and a good advertisement for your software!
Hello Funmorph,
We apologize for the late reply.
(i)
Semantic segmentation models can be applied to a full stack, to selected slices, or to a masked ROI. The output is a MultiROI, and we provide tools to compare MultiROIs using standard metrics (accuracy, Dice, etc.).
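(Dragonfly computes these comparisons in the GUI; purely as an illustration of what the metrics measure, here is a minimal NumPy sketch. The two binary masks are made-up stand-ins for a predicted and a reference MultiROI label.)

```python
import numpy as np

# Hypothetical binary masks standing in for a predicted and a reference label.
pred = np.array([[0, 1, 1],
                 [0, 1, 0],
                 [0, 0, 0]], dtype=bool)
truth = np.array([[0, 1, 0],
                  [0, 1, 1],
                  [0, 0, 0]], dtype=bool)

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def accuracy(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of voxels whose labels agree."""
    return float((a == b).mean())
```

Dice only rewards overlap of the foreground class, so it is usually more informative than plain accuracy when the structure of interest (here, the small fibres) occupies a small fraction of the volume.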
(ii)
Yes, you can. You can also freeze some layers and duplicate models.
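(To illustrate the idea behind freezing, not Dragonfly's actual API: when a layer is frozen, its parameters are simply excluded from the gradient updates, so earlier learning is preserved while the remaining layers adapt to the new samples. A toy NumPy sketch with a two-parameter "model":)

```python
import numpy as np

# Toy two-"layer" linear model: y = w2 * (w1 * x).
# Pretend w1 is a pretrained early layer that we freeze,
# and fine-tune only w2 on new data where the target is y = 6 * x.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 6.0 * x

w1, w2 = 2.0, 1.0   # w1 = 2.0 is "pretrained" and stays frozen
lr = 0.01
for _ in range(500):
    pred = w2 * (w1 * x)
    grad_w2 = np.mean(2.0 * (pred - y) * (w1 * x))
    w2 -= lr * grad_w2   # only the unfrozen parameter is updated

# w1 is untouched; w2 converges toward 3.0 so that w1 * w2 ≈ 6
```

The same principle applies to a real network: frozen layers keep what they learned on the original samples, while training for a few more epochs lets the unfrozen layers absorb the newly added critical-area examples.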
(iii)
This model was trained as a 2D U-Net with a patch size of 64×64, so it is not aware of a fibre's position within the slice.
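(A short sketch of why patch-based training discards position, using a hypothetical 256×256 slice: the network only ever sees 64×64 crops, and a crop carries no record of where in the slice it came from.)

```python
import numpy as np

# Hypothetical 256x256 slice; the network is only shown 64x64 crops.
slice_2d = np.zeros((256, 256), dtype=np.float32)
patch_size = 64

patches = [
    slice_2d[r:r + patch_size, c:c + patch_size]
    for r in range(0, 256, patch_size)
    for c in range(0, 256, patch_size)
]
# 16 non-overlapping patches, each (64, 64); the (r, c) origin of a
# patch is not part of its pixel data, so position cannot be learned.
```

This is why such a model must classify the fibres from local cues like diameter, texture, and edge shape rather than from where a bundle sits in the slice.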
More generally, we currently don't have tools to visualize the output of each layer.
@@dragonfly_software Great, thank you for the detailed answer!
How can I isolate just the segmented region, in this case the small fibers?
Yes, you can use masks to restrict what areas are used for training.
Thanks so much for the video! However, I couldn't reproduce this process, as after I generated the model, I can't assign the "Output". it's fixed with "None". Any suggestions?
Hi, can you check out this page and see if it helps? If not, feel free to let us know. helpdesk.theobjects.com/a/solutions/articles/48001163553
I am having the same issue, did you ever find a solution?
@@phOOey7 You should set the number of classes to match the number of ROIs.
@@alicankaya9421 Thanks so much, I was stuck at this step too!
@@ArthurAviz You need to create three classes: 1. small fibers, 2. big fibers, and 3. background. Unfortunately it does not work as shown in the video.
Would this work for segmenting a patch of negative space? I use Dragonfly to make endocasts.
Yes
@@celesteps Is there a tutorial for that? I've been trying, with lackluster results.
@@Pandas_Thumb Could you send me an image so that I can understand your segmentation issue ?
Can we use it in the dental field?
Yes, any image data that you can import into Dragonfly can be used.
@@mikemarsh7636 Thank you once more!