Andrew Reader
United Kingdom
Joined 5 Jan 2012
Medical Imaging, Deep Learning, AI, Atomic Physics, Signals & Systems
Andrew J. Reader is a Professor of Imaging Sciences at King's College London, UK
Wave-particle duality, wave functions, quantum superposition, wave function collapse
Simple understanding of one of the most widely accepted interpretations of the wave function
530 views
Videos
Heisenberg's Uncertainty Principle, Momentum and Spatial Frequency
463 views · 10 months ago
Understanding precision and uncertainty in position and momentum
Bohr’s Quantum Leap in Atomic Modelling with Stunning Agreement with Experimental Results!
676 views · 10 months ago
Nobel prize winning Bohr's amazing result, which explains the spectral lines for hydrogen in terms of fundamental constants.
Photoelectric Effect Explained with Equations: Einstein's Nobel Prize Winning Work on Photons
792 views · 10 months ago
The photoelectric effect needed an explanation beyond classical physics
Millikan's Nobel Prize: Oil Drop Experiment for Measuring the Fundamental Unit of Electric Charge
224 views · 10 months ago
PET acquisition and iterative reconstruction (basics in 6 minutes!)
1.7K views · 11 months ago
Quick simulation of PET data acquisition basics and iterative EM reconstruction
Basics of least squares: modelling, predicting and reconstructing!
279 views · 1 year ago
Iterative least squares, the Landweber iteration, and the direct least squares solution
How to undo convolution: deconvolution for image reconstruction (via Fourier & convolution theorem)
1.7K views · 1 year ago
Understanding deconvolution via the convolution theorem
Schrodinger's cat: did you ever really get it, and the link to quantum computing?
840 views · 1 year ago
Einstein's Nobel Prize: Photoelectric Effect
563 views · 1 year ago
Basics of k-space for MRI (magnetic resonance imaging)
3.1K views · 1 year ago
Filtered backprojection (FBP) for image reconstruction: central section theorem, Radon & Fourier
3.5K views · 1 year ago
Visualising 2D k-space and Fourier synthesis (1D & 2D, helps for image reconstruction and analysis)
1.7K views · 1 year ago
Image reconstruction: sinograms, backprojected images, convolution, Radon & x-ray transform
3.7K views · 1 year ago
Deep image prior: simple code for image restoration with no training data needed
4.7K views · 1 year ago
Schrodinger equation: fast, simple mathematical description with consideration of the hydrogen atom
3K views · 1 year ago
Thomson's cathode ray tube: finding the charge to mass ratio of the electron
397 views · 1 year ago
Get started with convolutional neural networks (CNNs) to process an image - Jupyter Notebook/PyTorch
1.6K views · 1 year ago
Master your understanding of Fourier: basics to equations
15K views · 1 year ago
Most remarkable formula in mathematics: Euler’s formula and expressions for cosine and sine
620 views · 2 years ago
Image reconstruction: FBP (filtered backprojection) basics, X-Ray Transform, Central Section Theorem
1.7K views · 2 years ago
Image reconstruction by deconvolution: least squares, Tikhonov regularisation & Gaussian likelihood
1.1K views · 2 years ago
Convolution equation explained simply in 3 forms: discrete, continuous and matrix-vector
1.1K views · 2 years ago
Image reconstruction basics: vectors, matrices and manifolds
1.8K views · 2 years ago
PyTorch Principles for Deep Learning Iterative Reconstruction (Live recording, Elba, May 2022)
612 views · 2 years ago
Code for Deep Learned Filtered Backprojection (FBP) Image Reconstruction (PyTorch)
3.2K views · 2 years ago
Simple PyTorch code to put deep learning into iterative image reconstruction (embeds a CNN in MLEM)
3.2K views · 2 years ago
Basics of AI for PET Image Reconstruction
1.1K views · 2 years ago
Can you send the code?
Does that not look like the sun with planets around it ? 😂
Now that you mention it....
Thank you very much professor! Very clearly explained and very helpful!
Thank you so much for your support, much appreciated!
great video
Thanks - appreciated!
Why did you stop making videos?
The reason is that I have been working on a textbook! I fit that into my spare time instead of videos. When completed, I hope to get back to some video making!
Please contribute these
I hope to add videos soon, but could be a while...
Sir, make a video on GAN for denoising
Thanks for the comment. A GAN for denoising seems quite extreme. GANs can be unstable to train, and would compare a generated image to a reference training set. For denoising, I would think that a simpler approach suffices. Or, for state of the art, I would now consider diffusion models rather than GANs.
Thank you so much for producing the most valuable learning media. You are the first person who has made me almost fully understand how k-space coordinates work. I wonder if you could show an example of the Cartesian k-space trajectory?
Thanks so much for the feedback. I hope this video will help for the Cartesian case: th-cam.com/video/TZzX-M1mVJ4/w-d-xo.html
Thanks, Dr. Reader. Still find it useful after 3 years. Simple MR guidance that crosstalk into PET is indeed an issue.
Many thanks for the feedback! We have new methods being developed that could help with the potential crosstalk issue.
Can this be done with any image?
Thanks for the comment - yes, indeed it can be done with any image
What is detected on the screen? A partial fraction or a whole electron? The wave function is not the electron, the collapse of the wave function pulls together the smeared electron, demolishing the entanglement in the system?
A whole electron (there can't be a partial fraction of an electron) is detected. The wavefunction describes all of the information about the electron, including information about its position. Initially there is a superposition of many possible position states, and at detection the wavefunction, describing the electron, collapses to just one of those position states: the position where the electron is detected.
Thank you so much for the informative video! I'm new to using Matlab: where did you import the sinogram data for your code to work on it?
Thanks for the feedback! In this code I used the phantom that is available in Matlab, and then used the radon function to create the sinogram data from that phantom. Hence I did not need to import any data for the sinogram.
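The phantom-to-sinogram workflow described in this reply can be sketched in Python too; the snippet below is an illustrative equivalent using skimage (an assumption on my part: the video itself used Matlab's phantom and radon functions), so no external data needs to be imported either.

```python
# Illustrative Python equivalent of the Matlab workflow: start from a built-in
# phantom and simulate the sinogram with the radon transform (no data import).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, resize

image = resize(shepp_logan_phantom(), (128, 128))   # ground-truth phantom
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=angles)               # simulated projection data
```

The sinogram then serves as the "measured" data for any reconstruction experiment.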
Thanks for sharing.
Thanks for the feedback!
Greetings, Professor Reader! I would like to ask you for help in creating a syllabus for self-learning, if I may. In a nutshell, I have a couple of old videos which I want to denoise and upscale. I decomposed the videos into images and tried a couple of existing solutions, i.e., ESRGAN in Google Colab. Some things worked better than others, but I was not happy with the result. Plus, upscaling takes a large amount of time, so it is not really doable on the free version of Colab. I am wondering if I can possibly run it locally. All in all, this topic got me interested. I would like to try more things on my own and experiment, rather than using a "pre-packaged" solution. However, even a cursory search showed me that computer vision / image processing is a vast domain! Right now I am in need of a syllabus or a roadmap. Could you kindly put me on the right track? I do have basic Python programming skills. As a hobbyist, where do I start? What do I need to learn? Having a structure has always been a problem in my self-education efforts. I would greatly appreciate any directions given by you. Thank you!
Very good!
Thanks for the feedback! There is a newer version of the video here: th-cam.com/video/G0V6ulOIlJc/w-d-xo.html
I came up with a matrix-vector convolution kernel which is similar to yours except instead of flipping and shifting the matrix rows right, it flips the vector and shifts the matrix rows to the left descending. I suppose both methods work though, yours is just more intuitive
Thanks for the feedback. Convolution is only really defined in one way (if we overlook edge effects) - and that would mean that the kernel is not flipped in any way at all when placed into columns of the matrix, just shifted. Hopefully that would be true for your approach too?
@@AndrewJReader when the flipped vector is multiplied by the shifted matrix rows, the rows are not flipped but are just shifted. The resultants both ways end up being equivalent, possibly hence the associative property of convolution's intrinsic symmetry. I guess linear algebra is complicated in this way mostly because of all the moving parts, but it sure does do the job efficiently
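The point made in this thread, that the convolution matrix contains unflipped, merely shifted copies of the kernel in its columns, can be checked numerically. The helper below is a hedged sketch (the function name and sizes are my own choices), compared against numpy's convolve for the full-convolution case, overlooking edge effects as the reply notes.

```python
import numpy as np

def conv_matrix(h, n):
    """Build the (len(h)+n-1) x n matrix whose columns are shifted,
    unflipped copies of the kernel h (full convolution)."""
    k = len(h)
    H = np.zeros((k + n - 1, n))
    for j in range(n):
        H[j:j + k, j] = h          # kernel placed as-is, just shifted down
    return H

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, -1.0, 0.5])
H = conv_matrix(h, len(x))
# Matrix-vector product reproduces convolution with no flipping in the matrix
assert np.allclose(H @ x, np.convolve(h, x))
```

Any algebraically equivalent rearrangement (flipping the vector and re-shifting rows) must give the same result, which is what the commenter observed.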
Excellent explanation, very comprehensive and complete, thanks !
Thanks so much for your feedback!
Thank you so much for the explanation!
Many thanks for the feedback!
Great explanation, first video that makes it clear for someone who just wanted to know after seeing it on Stranger Things 😅
Great to hear, thanks so much for the feedback!
"The higher the frequency, the more energy we need to make a unit." Very nicely explained, Thank you!
Glad this helped, so simple on the one hand, yet with massive consequences!
Can you make a course in Matlab?
I have provided some example Python code here: th-cam.com/play/PL557uxcMh3xwUsqofih09ZPqHMdhe-rui.html&si=FyNe0xAO62HU_Of3 In terms of Matlab, I have only done one video: th-cam.com/video/r5lzacT3HkE/w-d-xo.html
Hello Sir, could you please explain the line "fig1.canvas.manager.window.move(0, 0)" (line 20)? For me, "manager" has no attribute named "window". The line is optional, but why include a line that does not work?
For the libraries/packages that I had installed on my system when I made this video, there was no problem at all, so at the time there was no error report for me. Other installations, environments and operating systems can of course vary, so the attribute did exist for my setup. I used that line to move the figure window to the top left corner. Hope that helps, and thanks for the feedback.
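A defensive version of that line can avoid the AttributeError on backends whose figure manager lacks a movable window; the snippet below is only a sketch (the window.move call is specific to Qt-style backends, and the guard simply skips it elsewhere).

```python
import matplotlib
matplotlib.use("Agg")          # headless backend here, just for the sketch
import matplotlib.pyplot as plt

fig1 = plt.figure()
# Qt backends expose fig1.canvas.manager.window.move(0, 0); other backends
# (Tk, Agg, ...) may not, so probe for the attribute before calling it.
window = getattr(fig1.canvas.manager, "window", None)
if window is not None and hasattr(window, "move"):
    window.move(0, 0)          # place the figure window at the top-left corner
```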
Bro i want these codes 😢
Great video, professor, thank you for sharing this valuable information! I am liking the series so far, but I was wondering if there is any book we can use as a reference in order to go deeper into this domain. Thank you so much!
Many thanks for the feedback! In fact yes, I am working on a book now, but it might be a while yet for it to be released. I think the content I have presented here, and its level, is distinct from that in other courses / books.
Thanks for sharing this concept! I've swapped the system matrices with the "radon" and "iradon" function from skimage.transform. While the code runs without errors, the loss remains constant throughout all epochs. Converting between tensors and numpy arrays works using torch.from_numpy(array.astype(np.float32)).unsqueeze(0).unsqueeze(0) and tensor.detach().cpu().squeeze().numpy() respectively.
Thanks for the feedback! If you use radon and iradon from skimage.transform, the gradients won't propagate for training due to the functions using numpy arrays, rather than torch tensors. So just converting between torch tensors and numpy arrays before/after use of radon or iradon would not be sufficient. Hope that makes sense (I assume your current version is not working, when you say that the loss remains constant for all epochs?).
@@AndrewJReader Exactly. Although it's not a huge issue, since I can use the approach with the system matrix. Was just curious whether I can embed radon/iradon into this calculation, to avoid a file with gigabytes in size.
Yes you can use a radon / iradon function, it's just that it needs to work on torch tensors all the way through. Some people have already done this, for example take a look here: torch-radon.readthedocs.io/ (note that I have not used anything from that package)
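The gradient-propagation point in this thread can be demonstrated with a toy, torch-only forward model; the dense matrix below is just a stand-in for a real projector (it is not the torch-radon API), with arbitrary sizes.

```python
import torch

# Toy forward model: a dense matrix A stands in for radon/forward projection.
n_pix, n_bins = 16, 24
A = torch.rand(n_bins, n_pix)
x = torch.rand(n_pix, requires_grad=True)     # image to be optimised
measured = torch.rand(n_bins)                  # toy "measured" data

# All-torch path: the autograd graph stays connected through A @ x.
loss = ((A @ x - measured) ** 2).mean()
loss.backward()
assert x.grad is not None                      # gradient reached the image

# By contrast, a numpy detour (as with skimage's radon/iradon) detaches the
# graph: a loss built from A.detach().numpy() @ x.detach().numpy() cannot
# backpropagate to x, which is why the training loss stays constant.
```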
Now this is extremely profound! Thanks for this in-depth insight to the idea
Thanks so much for the feedback, appreciated! Yes, the physics of what happens is remarkable!
The image reconstruction series is so thorough and is really helping my intuition. Thanks!
Many thanks for the feedback, appreciated!
I wanted to express my gratitude for your work, because your videos and scientific papers on deep learned CT image reconstruction have been invaluable references in the making of my bachelor thesis. I wish you all the best and continued success, professor.
Thank you so much for the feedback, means a lot. Likewise, very best wishes for your next steps.
Thank you for your video! I have a question: if there are multiple sources of noise with different distributions, isn't it harder to reduce the noise?
Thanks for the feedback and question. This method for denoising is applicable whatever the type or distribution of noise present in an image. But of course, more severe noise levels will always mean it is harder to reliably reduce the noise while preserving the signal.
My favourite subject ❤❤❤
Really good to know! Thanks for the comment!
Thanks a lot for your video and effort sir!
Many thanks for the feedback!
Very good explanation!
Thanks for the feedback!
Dear Professor, the concepts have never been this clear to me. Thank you for the effort put into each video. Please share the slides for the videos, if possible.
Nice video, Andrew! I just have one question: What would you do if you didn't have the true image? Thank you!
Many thanks! You can do a self supervised approach - a very simple example being to artificially create a noisier version of the image that you have, and train the CNN to denoise that back to the version of the image that had not been made noisier. Then use that trained CNN on the original image. But there are other methods that are more sophisticated of course.
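The simple self-supervised recipe described in this reply can be sketched in a few lines of PyTorch; the tiny CNN, noise level, image and iteration count below are all illustrative placeholders, not a prescription.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
noisy = torch.rand(1, 1, 32, 32)                 # the only image we have
noisier = noisy + 0.1 * torch.randn_like(noisy)  # artificially noisier copy

# Small CNN trained to map the noisier image back to the noisy one
cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)

for _ in range(50):                               # brief training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(cnn(noisier), noisy)
    loss.backward()
    opt.step()

denoised = cnn(noisy).detach()                    # apply to the original image
```

More sophisticated variants change how the extra noise is added and how the output is corrected, but the training pattern is the same.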
Thank you!
Prof, could you please share the slides?
Thanks again for the very detailed description of your script. I do have a quick question which I hope you may be able to answer; it is in the realm of the deep image prior. I have a halftone image that I've taken from a newspaper. So my question, in a nutshell: can this approach to DIP convert a halftone image to grayscale? It's not that the image is lacking detail, but there needs to be some kind of conversion from an apparent tonal value, which is actually black, to a corresponding grayscale value.
Interesting question; it depends on the sampling of the image. In fact, I would simply suggest that you could resize a highly (over)sampled version of the image (just regrid at lower resolution, assigning a value to each lower-resolution pixel according to the sum of the corresponding pixel values in the high-resolution image). Hope that makes sense; there is no real need for DIP (which can do inpainting, and for a high-resolution halftone image it would possibly just fill the gaps in accordance with the neighbouring values, and if at core the image is binary valued, then the result could be binary as well). It all depends on many things though. I would start with a simple non-deep-learning approach for your problem.
@@AndrewJReader that’s gonna keep me fairly busy
Can you share the code ?
Hi Andrew, Thanks a lot for these nice clips. I would like to request you have a look at www.mathworks.com/matlabcentral/fileexchange/24479-pet-reconstruction-system-matrix and also appendices code of my thesis at discovery.ucl.ac.uk/id/eprint/1450246/2/Munir_Ahmad_Final_Thesis.pdf just in case of interest.
I really like the way you demo in the video, especially the intuition of uncertainty principle. It is really really straightforward for me.
Thank you so much for the feedback, really appreciated.
Thank you Andrew, amazing lecture! I don't know anybody else who can explain complex matters in such a clear way. Often, there's missing information in the flow of lectures, but yours are high quality without noise or missing data ;)
Thank you so much Parisa, your feedback really means a lot!
thank you so much
Really appreciate the feedback
This is a very interesting video and a rare one in this field on TH-cam. Thank you for this. I read many of your reviews on PET reconstruction and I'm very hopeful for the impact of CNNs (and GANs) on PET imaging. I'm a master's student in PET reconstruction and I'm currently assessing the potential of "Histo-Images" into reconstruction. I wonder if you think this new format is simply a new temporary *fad* in the literature or a potential new standard in clinical contexts (or something in-between). Anyhow, thank you again for this video and I will recommend your channel to my peers.
Thanks so much for your feedback, really appreciated. I think histoimages have a definite future for practical and fast input to reconstruction networks. MLAP vs TOF-backprojection needs to be considered. Thanks also for recommending my channel to your peers!
Thank you very much, sir! This is the best explanation of the Fourier transform I have ever seen! But I have a question: if you want to determine whether there is a specific frequency in f(t), you need to calculate an integral. But there are countless frequencies, so do you need to calculate countless integrals to know the result?
Many thanks for the feedback! Yes indeed, one would need to calculate an integral (a product of 2 functions and then sum) for each and every frequency of interest. However, there is only a finite number of frequencies in the case of discrete functions (the discrete Fourier transform is used), so it is feasible. As it can however be slow, this is where the fast Fourier transform (FFT) comes in - it dramatically speeds up the calculation of the discrete Fourier transform. For continuous functions, analytic integration can be done, to give the solution in the form of another function.
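The "one multiply-and-sum per frequency" picture, and its agreement with the FFT, can be checked directly; a small numpy sketch with an arbitrary test signal:

```python
import numpy as np

N = 64
t = np.arange(N)
f = np.cos(2 * np.pi * 5 * t / N) + 0.5 * np.sin(2 * np.pi * 9 * t / N)

# Naive discrete Fourier transform: one product-and-sum per frequency k,
# N frequencies with an N-term sum each, hence O(N^2) operations
F = np.array([np.sum(f * np.exp(-2j * np.pi * k * t / N)) for k in range(N)])

# The FFT computes exactly the same numbers in O(N log N)
assert np.allclose(F, np.fft.fft(f))
```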
@ 0:18 Hi Andrew. I was wondering, isn't the photon "going upward" deflected from its path by collisions with atoms in the brain cells?
Hi Jacob, thanks for the question. Yes, there is a probability of Compton scatter (inelastic) or even a tiny probability of Thomson scatter (elastic) of the 511 keV photons with the electrons in the attenuating medium (i.e. the brain, skull, etc.). However, a good fraction of these high-energy photons do escape without any interaction at all. (Photoelectric absorption can also occur with relatively low probability).
@@AndrewJReader Aha, thanks! What is modern physics without probabilities ;-)
Your videos are the best explanations of Qm,I finally have an understanding of H,U,P and wave functions,good stuff
Great to hear, thanks so much for the feedback!
Can you do it without iradon?
Great question - yes, you can. One approach would be to use the deep image prior representation (or even just a pixel grid, in fact, if early stopping or regularisation is not sought) with a forward model only, and use an optimiser such as those used in AI (e.g. Adam), but it would be slow.
@@AndrewJReader I mean unfiltered backprojection without iradon
But backprojection is iradon (when used with no filter). The transpose of the radon transform can be found via iradon(no filter). If you mean to avoid using iradon as a function, then you can write your own version, such as, for example, what I did in this video: th-cam.com/video/BXXLoVyAT0Q/w-d-xo.htmlsi=D6PoRXZRiT3Jm9z3 (see about 18 mins into the video)
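Writing your own unfiltered backprojector is indeed short; the sketch below is an illustrative version (not the code from the linked video) that smears each projection profile across the image and rotates it into place with scipy.

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, angles_deg):
    """Unfiltered backprojection: smear each 1D projection uniformly across
    the image, rotate to its acquisition angle, and accumulate."""
    n = sinogram.shape[0]                    # detector bins == image side here
    recon = np.zeros((n, n))
    for profile, angle in zip(sinogram.T, angles_deg):
        smear = np.tile(profile, (n, 1))     # constant along the ray direction
        recon += rotate(smear, angle, reshape=False, order=1)
    return recon / len(angles_deg)

# Toy sinogram: a single central detector bin lit up at every angle
sino = np.zeros((64, 90))
sino[32, :] = 1.0
bp = backproject(sino, np.linspace(0, 180, 90, endpoint=False))
```

Since no ramp filter is applied, the result shows the characteristic 1/r blur of plain backprojection.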
Wonderful. This and part 1 videos help me have a quick literature review about ML in PET reconstruction.
Really glad the videos have been helpful
Thanks Andrew. This video just made the intuition behind the MAPEM method clearer.
Great to hear, thanks for the feedback
Thanks
Thanks also!