A new controller device that greatly improves the ease of use of 3D medical imaging workstations has been developed at the University of Cambridge and Addenbrooke's Hospital.
@BigMTBrain I agree, that could be a problem, but within half a decade or so I'm sure software will be developed that can differentiate between different eukaryotic cell types, allowing you to basically highlight an organ or anything else and manipulate it as you will
@rayads786 - Indeed! With the controller shown here, I can imagine appearing and disappearing floating tags with multiple pointers to visible areas of any object in view that meets any of multiple sets of selection criteria as the user navigates. Further isolation could be done by clicking on one or more tags or by shift- and/or ctrl-selecting items in a dynamic, visible objects list which reflects the current visible tags. All other items would become semi-transparent. Let's do dis! Hehehe.
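The tag-selection idea above can be sketched in a few lines. This is a toy illustration, not any workstation's real API; the class and function names (`Obj`, `apply_selection`) and the 25% dim level are all invented for the example.

```python
# Hypothetical sketch: objects whose tags match the current selection stay
# opaque, everything else becomes semi-transparent, as described above.
from dataclasses import dataclass, field


@dataclass
class Obj:
    name: str
    tags: set
    opacity: float = 1.0


def apply_selection(objects, selected_tags):
    """Dim every object that shares no tag with the selection."""
    for obj in objects:
        obj.opacity = 1.0 if obj.tags & selected_tags else 0.25
    return objects


scene = [
    Obj("liver", {"organ", "digestive"}),
    Obj("aorta", {"vessel"}),
    Obj("femur", {"bone"}),
]
apply_selection(scene, {"vessel"})
# aorta stays opaque; liver and femur drop to 25% opacity
```

Shift/ctrl multi-select in the visible-objects list would just grow `selected_tags` before calling `apply_selection` again.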
@rayads786 - Ha ha. I was thinking the same thing before I read your comment. It seems like a natural fit. I have no doubt then that it will be done. However, though 3D medical imagery is 3D, viewing it in 3D could be problematic. The difficulty would be in deciding which parts to make transparent as you fly around within the data. It will take another layer of sophisticated software to turn grayscale data into recognized 3D organs and tissue that can be isolated, traversed, or made transparent.
@rayads786 - Agreed. Here's how I think it could be done quite effectively: make an online repository of about 200 full-body scans--kind of a Wikipedia for 3D scans. Access to these scans would only be allowed to certified and verified radiologists anywhere in the world. Over the next two to three years, these radiologists would tag different areas of these images across many angles and depths. The final result would be fed to a neural network. Voila! Instant object recognition and manipulation.
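One concrete piece of that pipeline is turning many radiologists' tags into a single training label per voxel. A minimal sketch of per-voxel majority voting, assuming integer label codes and a tiny invented 2x2 slice (NumPy only):

```python
# Toy sketch of the crowd-labelling step: several radiologists assign a
# label to each voxel; a per-voxel majority vote yields the consensus
# labels a neural network would later train on.
import numpy as np


def majority_vote(annotations):
    """annotations: (n_raters, *volume_shape) integer label array."""
    n_labels = annotations.max() + 1
    # Count how many raters chose each label at each voxel...
    counts = np.stack([(annotations == k).sum(axis=0) for k in range(n_labels)])
    # ...and keep the most popular label per voxel.
    return counts.argmax(axis=0)


# Three raters labelling a 2x2 slice (0 = background, 1 = liver)
raters = np.array([
    [[0, 1], [1, 1]],
    [[0, 1], [0, 1]],
    [[1, 1], [1, 1]],
])
consensus = majority_vote(raters)
# consensus == [[0, 1], [1, 1]]
```

Real systems would also weight raters and flag low-agreement voxels, but the voting core looks like this.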
@BigMTBrain Exactly and also false colour imaging using the aforementioned tagging system would be useful for the grayscale data
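False-colouring tagged grayscale data is essentially a lookup table from label codes to RGB. A minimal sketch, with the palette and label codes invented for illustration:

```python
# Map each tagged label in a segmented slice to an RGB colour via a
# lookup table (NumPy fancy indexing).
import numpy as np

PALETTE = np.array([
    [0, 0, 0],       # 0: background -> black
    [200, 60, 60],   # 1: liver      -> red
    [60, 60, 200],   # 2: vessel     -> blue
], dtype=np.uint8)


def false_colour(labels):
    """labels: integer array of tag codes; returns an RGB array."""
    return PALETTE[labels]


slice_labels = np.array([[0, 1], [2, 1]])
rgb = false_colour(slice_labels)   # shape (2, 2, 3)
```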
Integrate the controller with the holographic 3D display seen on New Scientist!