Thierry Pécot
Droplet segmentation within cells - processing
This video shows how to use the ImageJ macro designed to identify the lipid droplets within cells.
73 views

Videos

Droplet segmentation within cells - installation
79 views · 1 month ago
This video shows how to install Fiji and the required plugins to run the macro designed to identify the lipid droplets within cells.
Napari - pipelines with Napari assistant
56 views · 1 month ago
This video shows how to create pipelines with Napari assistant. The image used in this video is available at www.ebi.ac.uk/biostudies/bioimages/studies/S-BIAD1077?query=brain microscopy fluorescence.
Napari - pixel classification
23 views · 1 month ago
This video shows how to use the plugin napari-accelerated-pixel-and-object-classification to train a pixel classifier in Napari. The image used in this video is available at www.ebi.ac.uk/biostudies/bioimages/studies/S-BIAD1077?query=brain microscopy fluorescence.
Napari - Cellpose
86 views · 1 month ago
This video shows how to use Cellpose in Napari. The image used in this video is available at osf.io/uzq3w/.
Napari - Stardist
49 views · 1 month ago
This video shows how to use Stardist in Napari. The image used in this video is available at osf.io/uzq3w/.
Napari - recording a movie
64 views · 1 month ago
This video shows how to use the plugin napari-animation to record a movie. The image used in this video is available at zenodo.org/records/3981193.
Napari - object measurements
28 views · 1 month ago
This video shows how to use the plugin napari-skimage-regionprops to measure features associated with objects. The image used in this video is available at zenodo.org/records/3981193.
Napari - interpolating annotations
55 views · 1 month ago
This video shows how to use the plugin napari-label-interpolator to interpolate annotations to save time. The image used in this video is available at zenodo.org/records/3981193.
Napari - manual annotations
55 views · 1 month ago
This video shows how to use Napari to manually label images. The image used in this video is available at zenodo.org/records/3981193.
Napari - interface
36 views · 1 month ago
This video shows how the Napari interface is organized. The image used in this video is available at zenodo.org/records/3981193.
Napari - zarr conversion
43 views · 1 month ago
This video shows how to convert a tif image into zarr format. The image used in this video is available at zenodo.org/records/3981193.
Napari - virtual environment for workshop
84 views · 1 month ago
This video shows how to create a virtual environment and to install the required Python packages for the workshop.
Importing Cytomap clustering into QuPath
448 views · 9 months ago
This video shows how to import Cytomap unsupervised clustering in QuPath. The fluorescence image used in this video is available at zenodo.org/records/10362593. This video is part of a series of videos about the analysis of multiplexed whole slide images with QuPath and Cytomap, a workshop proposed at MIFOBIO 2023.
Unsupervised cell clustering with Cytomap
468 views · 9 months ago
This video shows how to cluster cells with unsupervised methods in Cytomap and to export this clustering. The fluorescence image used in this video is available at zenodo.org/records/10362593. This video is part of a series of videos about the analysis of multiplexed whole slide images with QuPath and Cytomap, a workshop proposed at MIFOBIO 2023.
Load and visualize data with QuPath
286 views · 9 months ago
Importing measurements into Cytomap
452 views · 9 months ago
Tissue and nuclei segmentations
336 views · 9 months ago
Export measurements from QuPath
826 views · 9 months ago
MuViSS - processing L3-CT scans with known segmentations
51 views · 1 year ago
MuViSS - segmenting and processing L3-CT scans
82 views · 1 year ago
MuViSS - downloading and setting up Fiji
576 views · 1 year ago
Cellpose installation for QuPath and Fiji
7K views · 1 year ago
Muscle fiber segmentation and muscle fiber type classification with QuPath
1.8K views · 1 year ago
Cellpose GPU installation for QuPath and Fiji
2.5K views · 1 year ago
Deep learning for biologists - Nuclei segmentation - Stardist processing (Python and Fiji)
864 views · 2 years ago
Deep learning for biologists - Nuclei segmentation - Stardist training
629 views · 2 years ago
Deep learning for biologists - Nuclei segmentation - UNet processing (Python and Fiji)
355 views · 2 years ago
Deep learning for biologists - Tissue segmentation - UNet processing (Fiji)
405 views · 2 years ago
Deep learning for biologists - Nuclei segmentation - UNet training
160 views · 2 years ago

Comments

  • @margotissertine2096
    @margotissertine2096 5 days ago

    Thank you so much for your videos, they help me a lot! I just have a question: would it be possible to add a line in the code and increase the thickness of the segmentation? I haven't been able to do so just yet. If you are open to it, I would love to get in touch with you and discuss it. Thank you so much!

    • @thierrypecot8747
      @thierrypecot8747 3 days ago

      Hi! If you want to expand (dilate) the segmentations, I do not know an easy way to do it, but you can expand annotations. You would have to convert the detections into annotations with a script:

          def detections = getDetectionObjects()
          def newAnnotations = detections.collect {
              return PathObjects.createAnnotationObject(it.getROI(), it.getPathClass())
          }
          removeObjects(detections, true)
          addObjects(newAnnotations)

      and then you can expand the annotations with the command Objects -> Annotations... -> Expand annotations. If you just want to visually thicken the segmented objects, I do not know how to do it. Hope this will help!

  • @zhiyudeng1377
    @zhiyudeng1377 11 days ago

    Thank you so much.

  • @maryambello4545
    @maryambello4545 21 days ago

    Thank you for the awesome videos, very articulate and clear. I found them really helpful.

  • @dongxind7646
    @dongxind7646 1 month ago

    Amazing! Your videos are very helpful! However, the muscle fiber fluorescence image used in this video cannot be opened from the website. Can you provide a new download address?😄

    • @thierrypecot8747
      @thierrypecot8747 1 month ago

      Hi! Glad to see it's helpful. Unfortunately, this image is not available, sorry.

  • @lealea3271
    @lealea3271 1 month ago

    Thanks for all the videos.

  • @magdalesinski3840
    @magdalesinski3840 2 months ago

    Thank you so much for this tutorial! I am new to coding overall and my lab has been manually measuring fiber size, so I'm trying to optimize this. However, this tutorial doesn't exactly match the updated scripts, so I'm getting errors that I'm not sure how to address. This is the error message, referring to line 6 in this tutorial and line 8 in the updated script from your GitHub:

        ERROR: Cannot invoke "qupath.lib.projects.Project.getPath()" because the return value of "qupath.lib.scripting.QP.getProject()" is null
        in QuPathScript at line number 8
        qupath.ext.biop.cellpose.CellposeBuilder.build(CellposeBuilder.java:801)

    Any help would be appreciated.

    • @thierrypecot8747
      @thierrypecot8747 2 months ago

      Hi @magdalesinski3840! It's a bit hard to answer remotely but according to the error message, it seems that you did not create a project to start with. QuPath is designed to be used with projects, you can see how to create one here: th-cam.com/video/vr9w_LYtSso/w-d-xo.html. Hopefully this will resolve your problem!

  • @katiehanna8528
    @katiehanna8528 5 months ago

    Hi @thierrypecot8747 thank you so much for your very helpful videos. I would not have been able to get to the point that I have without your help. I think my problem is downloading the pretrained models. I am unsure what path I need to input into the anaconda prompt. Thank you

    • @thierrypecot8747
      @thierrypecot8747 5 months ago

      Hi! Normally, when you run Cellpose for the first time, it will automatically download the pre-trained model you want to use. If that does not work, open an Anaconda prompt, activate the virtual environment, and then copy the command line shown just above "This command should run directly if copy-pasted into your shell" (from the script editor in QuPath, under the script, or from the console in Fiji) into the Anaconda prompt. That should run fine and download the pre-trained models if the download failed previously. I hope this will help!

  • @ramishjadoon6383
    @ramishjadoon6383 7 months ago

    @Thierry Pecot, I didn't find the option DAB:Nucleus:Mean in my Measurements options. What should I do about that?

    • @thierrypecot8747
      @thierrypecot8747 7 months ago

      Hi Ramish! It is probably the case because only nuclei were segmented and no extension was processed for a fake cytoplasm. In this case, replace DAB:Nucleus:Mean with DAB:Mean and that should be fine.

    • @ramishjadoon6383
      @ramishjadoon6383 7 months ago

      @@thierrypecot8747 Thank you for your reply. There are no such options in my case; I mean, it shows me very few options, like area, length, and ones related to blue, red and green. Other than these few options, none of the options shown in the video appear in my measurements tab.

    • @thierrypecot8747
      @thierrypecot8747 7 months ago

      It means that the image is not defined as H-DAB. When you drag the image into QuPath to open it, make sure you select "Bright field (H-DAB)" in the "Set image type" drop-down menu. Hope this will help!

  • @Julie-ww2ki
    @Julie-ww2ki 8 months ago

    Hello, thank you for this very clear information! Is there an alternative for computing this epithelium/stroma ratio on whole H&E slides?

    • @thierrypecot8747
      @thierrypecot8747 8 months ago

      Glad to see it's useful! There is no simple method for this kind of analysis. It would be possible to train a deep learning model to do it, but that would be heavier. This method is probably the most accessible way to do it.

  • @jackt9535
    @jackt9535 9 months ago

    It's an excellent idea to put ToDelete in the code lines. I have some questions: 1/ In an H-DAB image with a membrane stain, when I use StarDist, nuclei in intense DAB regions are often ignored for an unknown reason. 2/ I also see that intense "mean DAB" non-nuclear detections often have a higher "mean Hematoxylin" than some actual nuclei. This puzzles me a lot.

    • @thierrypecot8747
      @thierrypecot8747 9 months ago

      Glad to see it's helpful! It's a bit difficult to answer without your images, but I can try to make some assumptions: 1/ nuclei showing high DAB intensity might be ignored if the intensity is saturating over the whole nucleus; that would correspond to a flat region and might be ignored since the images used for training do not show saturation; 2/ if you have high intensity for both DAB and hematoxylin and you do not expect that, it is a staining problem, and you should talk with the people at the shared facility who stain your samples. Additionally, in a normal setting, StarDist is used to segment nuclei, so you should not obtain non-nuclear detections. If these are artefacts coming from the staining and you want to disregard them, you could train an object classifier to separate "correct" nuclei from "false" nuclei. I hope this will help!

  • @scds7
    @scds7 11 months ago

    Thanks...this is extremely helpful. Will you be able to show how to get the Cellpose plugin into CellProfiler?

    • @thierrypecot8747
      @thierrypecot8747 11 months ago

      Hi! Glad to see it's helpful. I actually don't use CellProfiler, so I don't plan to do it in the near future. You might have seen it already: there's a tutorial available at forum.image.sc/t/new-cellprofiler-4-plugin-runcellpose/56858. I hope this will be helpful!

  • @thierrypecot8747
    @thierrypecot8747 1 year ago

    !!!!QuPath update!!!! QuPath Cellpose extension versions >= 0.7.0 have a different preferences window than the one shown in the video. Now, when you open the Edit -> Preferences... window, in the Cellpose/Omnipose section, you need to define the Cellpose 'python.exe' location. If you installed the virtual environment at location <cellpose-virtual-environment-location>, you need to set "Cellpose 'python.exe' location" in the Edit -> Preferences... window to the following path: <cellpose-virtual-environment-location>\Scripts\python.exe

  • @AlessioTorcinaro
    @AlessioTorcinaro 1 year ago

    Hello! How do you "re-call" your CellPoseGPUenvironment/folder the second or the N-th time you want to use it? I mean, which command lines are needed for that purpose, or can you just run QuPath and the CellPoseGPUenvironment is ready for use forever? Thank you for your tutorials!

    • @thierrypecot8747
      @thierrypecot8747 1 year ago

      Hi! Once the environment is installed, you can run QuPath forever. However, whenever you change QuPath version, you'll have to reinstall the Cellpose extension and define the environment path in QuPath again. The latest version changed and the env installation is not available any more; I'm going to have a look at it to see if it's still possible to use the same virtual environment.

    • @AlessioTorcinaro
      @AlessioTorcinaro 1 year ago

      @@thierrypecot8747 Thank you for your reply! Yes, I have noticed that the latest version of the Cellpose extension has something different in QuPath Preferences.

    • @AlessioTorcinaro
      @AlessioTorcinaro 1 year ago

      @@thierrypecot8747 Hi, Thierry! Are you going to make an updated tutorial with the latest versions of QuPath and the Cellpose extension?

    • @thierrypecot8747
      @thierrypecot8747 1 year ago

      Hi @@AlessioTorcinaro! I won't make a new one in the near future, but I think the change is pretty straightforward. If you're stuck, don't hesitate to send me a message and I'll help you then.

  • @fidelsaenz4893
    @fidelsaenz4893 1 year ago

    How do you tell QuPath to exclude (not count) some cells within the selected annotation? Is there a way to do this?

    • @thierrypecot8747
      @thierrypecot8747 1 year ago

      Hi! The best way is probably to define an annotation that only includes the regions of the slide that you want to consider for nuclei segmentation. It is also possible to change the annotation by removing the unwanted regions with the Alt key. The cells are still segmented, but if you look at the number of cells in the annotation, it should have changed, as the unwanted cells are no longer part of it. Finally, it is possible to remove cells by selecting them (double right click) and pressing the Delete key. Hope this will be useful!
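      The last option (select, then delete) can also be scripted. A minimal Groovy sketch of the QuPath scripting API — an illustration, not taken from the video — assuming the cells to exclude have already been selected in the viewer:

```groovy
// Sketch (illustration, not from the video): delete the currently selected
// detections so they are no longer counted inside the annotation.
def selected = getSelectedObjects().findAll { it.isDetection() }
removeObjects(selected, true)
fireHierarchyUpdate()
```

      This does the same as pressing Delete after selecting cells, but from the script editor.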

  • @moyofeyide7490
    @moyofeyide7490 1 year ago

    Thanks for the video, it's been very helpful. If there's an instance where you've loaded a classifier onto an image and some parts are not annotated correctly, is it possible to correct this after loading the classifier? And would the classifier "learn" from the adjustment when loading it onto subsequent images?

    • @thierrypecot8747
      @thierrypecot8747 1 year ago

      Hi, glad to see it's helpful! If you want to modify the classifier with a different image, it is possible to do it if you use the same project. When you go to Classify -> Object classification -> Train object classification, you open the window for the object classifier. In this window, you can click on Load training, which lets you choose which image(s) in your project you want to load your object classifier with. In that case, all the cells that were annotated in the image(s) you chose will be taken into account in your classifier, and you can add new annotations to keep training your classifier.

    • @moyofeyide7490
      @moyofeyide7490 1 year ago

      @@thierrypecot8747 Thanks for the quick reply. Can this also be applied to pixel classification?

    • @thierrypecot8747
      @thierrypecot8747 1 year ago

      @@moyofeyide7490 Yes, same thing

  • @Mainline100
    @Mainline100 1 year ago

    Does this workflow work for other extensions like stardist?

    • @thierrypecot8747
      @thierrypecot8747 1 year ago

      Hi! This workflow is designed to use cellpose. If you want to use Stardist, it's totally doable, you can check this video that shows how to use Stardist to segment cells with QuPath: th-cam.com/video/GBFBVT2stMQ/w-d-xo.html

  • @asmaelaouina7047
    @asmaelaouina7047 1 year ago

    Your videos are gold! Thank you very much for the hard work!

  • @koungngeun8055
    @koungngeun8055 1 year ago

    Does it work on H&E stains? I followed your tutorial to use it in ImageJ/Fiji, but no result image came out.

    • @thierrypecot8747
      @thierrypecot8747 1 year ago

      Hi! There is no pre-trained cellpose model for H&E staining. You could use the nuclei pre-trained model but you'll have to select one channel. The best way to do it with QuPath is to use the "Estimate stain vectors" functionality to separate hematoxylin from eosin and then apply the cellpose model on the estimated hematoxylin channel. Hope this will help!

    • @pawesuliga8815
      @pawesuliga8815 1 year ago

      Hi - very interesting video, thank you. I tried to build a model on H&E stained slides, but it wasn't perfect, although it was good enough for my analysis. Estimate stain vectors sounds like a good option - I will give it a try @@thierrypecot8747

  • @KeesieOilcorner
    @KeesieOilcorner 1 year ago

    Thanks!

  • @surgdoc-chen324
    @surgdoc-chen324 1 year ago

    Very useful video, Thank you!

  • @olehkashyn9699
    @olehkashyn9699 1 year ago

    Thank you so much! You gave me super important information and now I can really understand all steps. You are awesome! Thank you!

  • @AlessioTorcinaro
    @AlessioTorcinaro 1 year ago

    Thank you so much, Thierry! I was struggling with running Cellpose in QuPath. I have also tried to uninstall and reinstall Cellpose (using Anaconda). I will take a look at your tutorial! Fingers crossed.

  • @thierrypecot8747
    @thierrypecot8747 1 year ago

    0:37 Visualization
    5:00 Script for segmenting muscle fibers with Cellpose
    8:57 Tissue segmentation with thresholder
    10:59 Muscle fiber segmentation with Cellpose
    16:51 Image duplication for each marker
    18:31 Positive and negative annotations for first marker
    21:44 Object classifier training for first marker
    28:30 Saving object classifier for first marker
    29:16 Applying all object classifiers to identify marker(s) associated with each muscle fiber

  • @thierrypecot8747
    @thierrypecot8747 1 year ago

    0:31 Anaconda installation
    2:13 Virtual environment creation and activation
    8:15 Cellpose installation
    9:46 Cellpose extension for QuPath
    12:17 Cellpose path definition in QuPath
    14:38 Cellpose processing from QuPath
    21:25 BIOP plugin installation in Fiji
    24:12 Cellpose path definition in Fiji
    25:07 Cellpose processing from Fiji

  • @thierrypecot8747
    @thierrypecot8747 1 year ago

    0:31 Anaconda installation
    2:24 Cuda toolkit installation
    5:05 Cudnn installation
    9:11 Virtual environment creation and activation
    12:30 PyTorch installation
    14:10 Cellpose installation
    15:03 Cellpose extension for QuPath
    18:53 Cellpose path definition in QuPath
    20:02 Cellpose processing from QuPath
    25:46 BIOP plugin installation in Fiji
    26:30 Cellpose path definition in Fiji
    27:21 Cellpose processing from Fiji

  • @phezzanfnord1089
    @phezzanfnord1089 1 year ago

    The sound is all echoes; I can't understand it.

    • @thierrypecot8747
      @thierrypecot8747 1 year ago

      Hi! Sorry if the sound quality is not good enough, I'm working on improving it for the next videos. Best

  • @AlessioTorcinaro
    @AlessioTorcinaro 1 year ago

    How do you "undo" single dot annotations made with the "points" annotation tool? Ctrl + z does not work.

    • @thierrypecot8747
      @thierrypecot8747 1 year ago

      Hi again Alessio! Ctrl + z works for me, but I'm still using version 0.3.0 (I'm waiting for the Stardist extension to work before moving to 0.4.0). However, I never found a way to remove some of the selected dots; maybe in the new version :)

    • @AlessioTorcinaro
      @AlessioTorcinaro 1 year ago

      @@thierrypecot8747 Yeah, same problem ;) I hope that Stardist and Cellpose, as well as trained models, will work soon on v0.4.0. Thank you again.

  • @AlessioTorcinaro
    @AlessioTorcinaro 1 year ago

    Hi, Thierry! How do you automate "auto" brightness/contrast? What would a general script for doing that look like, for both brightfield and IF images? Thank you so much for your tutorials :)

    • @thierrypecot8747
      @thierrypecot8747 1 year ago

      Hi Alessio! Glad to see these videos are useful. I don't know how to script auto brightness/contrast, but if you check the "Keep settings" box in the "Brightness & contrast" window, below the list of channels, and then apply auto contrast to a channel, it should propagate to the other images in your project. Hope this will help!

  • @CarlosMartinez-jn2cf
    @CarlosMartinez-jn2cf 1 year ago

    Hello Thierry, nice video, thanks a lot! I have a video that I took myself and can't analyze it using TrackMate. I was wondering if you can tell me what video format, or file extension, is compatible with TrackMate, so I can convert my current video? Thanks!

    • @thierrypecot8747
      @thierrypecot8747 1 year ago

      Hi Carlos! You can use a tif video (image stack), that will work for sure. Hope this will help!

  • @univweb1385
    @univweb1385 1 year ago

    Hello sir, I am working on nuclei segmentation and I need some datasets to work with. Do you have any idea how I can get segmentation datasets such as TCGA ...?

    • @thierrypecot8747
      @thierrypecot8747 1 year ago

      Hi! You can use the dataset I put on the github (github.com/tpecot/NucleiSegmentationAndMarkerIDentification/tree/master/MaskRCNN/datasets/nucleiSegmentation_E2Fs). There is a huge dataset for fluorescence (TissueNet) available at datasets.deepcell.org/ and a large dataset for H&E images at nucleisegmentationbenchmark.weebly.com/dataset.html. There are others you can look for on the Internet. Hope this will help!

    • @univweb1385
      @univweb1385 1 year ago

      @@thierrypecot8747 thank you

  • @vicknabalarajah118
    @vicknabalarajah118 2 years ago

    Very good explanation and easy to follow. Really helped with my analysis

  • @rkman9963
    @rkman9963 2 years ago

    Hi Thierry, when I run Coloc2, I cannot get the scatter plot. There is just a line and a black background. Can you figure it out? Your help would be greatly appreciated. Thank you!

    • @thierrypecot8747
      @thierrypecot8747 2 years ago

      Hi! It's hard to tell like this; maybe you can try on another computer. If it works there, it means there is a problem with the installation of Fiji on your computer, and I'm not sure I would have a solution then. Good luck!

    • @DK-1474
      @DK-1474 3 months ago

      This is extremely late, but in case it helps anyone else: this is an issue with newer versions of Fiji! Either use an older version, or when you get the results, click the "log" checkbox in the bottom right corner.

  • @johnyang5440
    @johnyang5440 2 years ago

    Hi, thanks for sharing the video. I want to know where I can find the Cellpose ipynb for Jupyter Notebook?

    • @thierrypecot8747
      @thierrypecot8747 2 years ago

      Hi John! The training and running Jupyter notebooks shown in this video for Cellpose are available at github.com/tpecot/NucleiSegmentationAndMarkerIDentification/tree/master/Cellpose. Hope this will help!

  • @gorantomic6049
    @gorantomic6049 2 years ago

    Many thanks for the video, really helpful! Is it possible to manually correct misidentified annotations afterwards? If, for example, there is one sample that is clearly different from the rest of the batch in H&E intensity and the classifier performs poorly.

    • @thierrypecot8747
      @thierrypecot8747 2 years ago

      Hi Goran, glad to see this video is helpful! It is totally possible to correct annotations. If the results are not too far from what you would expect, you can just modify the annotated regions; otherwise, you can remove the estimated annotations and define new ones. If you have several slides for which the staining makes the classifier fail, you can also train another classifier just for that batch. In any case, it's extremely important that you describe in the "Materials and Methods" section of the publication what you do (manual correction or training of several classifiers) and why you do it, for instance because staining differed between batches of slides. Hope this will help!

    • @gorantomic6049
      @gorantomic6049 2 years ago

      @@thierrypecot8747 Great, thanks! Actually, training a different classifier is easier and more reliable than manually correcting everything. The tutorial really helped me out!

    • @thierrypecot8747
      @thierrypecot8747 2 years ago

      @@gorantomic6049 That's great, good luck for your research!

  • @clementtetaud4374
    @clementtetaud4374 2 years ago

    Hello, I want to quantify a ratio of positively stained area. I outline my region of interest, then outline positive and negative areas, and when I click on live prediction in the pixel classifier I very often get "NaN" for the positive and negative percentages of my ROI, whereas I do get them for my other annotations. Is there something I am forgetting to do?

    • @thierrypecot8747
      @thierrypecot8747 2 years ago

      Hello Clément, it is possible that the live mode is not sufficient. When quantifying areas, the best approach is to train a classifier, save it, and then apply it to the regions of interest. In that case, the measurements should be computed normally. I hope this will be useful.

    • @clementtetaud4374
      @clementtetaud4374 2 years ago

      @@thierrypecot8747 Hello, thank you for this answer. I have only been using this software for a week, and since I only have your videos to understand how it works, I do not yet fully understand how to use it. When you say "train a classifier", doesn't that mean selecting different areas to get positive and negative annotations? Can we obtain the percentages without using live prediction? Would it be possible to talk with you by email, phone or Teams? I would have several questions to ask you about the software. Thanks in advance.

    • @thierrypecot8747
      @thierrypecot8747 2 years ago

      @@clementtetaud4374 Hello Clément, I am on vacation; I am sorry I could not answer sooner. Training a classifier means annotating regions for the different classes to estimate, in this case positive and negative regions. This is done in QuPath with "Train pixel classifier"; the annotated regions are then used to train a classifier. The "live prediction" lets you see the influence of adding new annotations. Once the classifier is trained, it can be saved and then applied to other images, which provides the desired measurements, in this case the percentages of positive and negative regions. I created these videos as part of a course I teach on the campus in Rennes, so that attendees can review the different steps covered during the course. Unfortunately, I do not have time for consulting outside the collaborations that are part of my position at Rennes 1, so I will not be able to discuss by email, phone or Teams. The best option is to look for image analysis experts around you; if you are at a university, there most likely are some. Otherwise, other videos are available, and the QuPath documentation (qupath.readthedocs.io/en/stable/) is also very well done. I hope this will help you in your research.

  • @jackt9535
    @jackt9535 2 years ago

    Hello! Thank you for these very instructive courses!! I tried to imagine writing a script to automate all of this... But there are several obstacles:
    - the regions have to be defined manually, which shortens the list of tasks that can be automated in a script before having to take over by hand;
    - the staining intensities calibrated in a pixel classifier suffer from poor reproducibility because of the variable quality from one slide to another;
    - the CPU is used instead of a GPU, hence the processing time;
    - in the end, the resources (electrical energy) needed to "facilitate" an analysis remain greater than the actual benefits.

    • @thierrypecot8747
      @thierrypecot8747 2 years ago

      Hello Jack! I will try to answer these four points quickly:
      - once the classifier is trained, you just apply it to new images to directly extract the regions of interest; it is possible to write a script that applies a trained classifier to a collection of images defined in the same project;
      - if the staining differs too much between images, that can obviously be a problem; in that case I would advise discussing with the histology facility to see whether the staining protocols can be standardized to minimize differences between slides;
      - it is true that only the CPU is used; however, once a classifier is trained, the processing time per slide is not significantly longer than the acquisition of a slide, so the pipeline remains entirely feasible and clearly faster than by eye, especially since machines can also work at night and on weekends;
      - the resources/benefits ratio obviously depends on the application.
      I hope these few remarks will be useful.

  • @amitnadig2884
    @amitnadig2884 2 years ago

    Hi, could you please tell me the difference between the spots and tracks options at 20:28? I couldn't understand it. I have a bead that is moving and I am using TrackMate to get the trajectory of the bead, so should I choose spots or tracks? Thank you very much for the video.

    • @thierrypecot8747
      @thierrypecot8747 2 years ago

      Hi Amit. Exporting the results associated with spots, you obtain a file with one line per detected object; each line holds the measurements associated with that object, such as coordinates, track id... Exporting the results associated with tracks, you obtain a file with one line per track; for each track, you also have a number of measurements, such as the number of spots in the track, its duration... If you want to get the trajectory of a given object, one possibility is to export the results associated with spots, then sort the results by track id (if you have several tracks) and then by frame; you should end up with a file that gives you, for a given track, the successive coordinates of the tracked object. Hope it'll help.

    • @amitnadig2884
      @amitnadig2884 2 years ago

      @@thierrypecot8747 thank you very very much for the clear cut explanation 🙏🏻🙏🏻 it will definitely help a lot

  • @medanatomie7390
    @medanatomie7390 2 years ago

    Hello, would you know how to "export" a pixel annotation classifier, so as to use it in another project? For example, in your video, how could one export the prostate classifier to another QuPath project, for instance on intestinal epithelia?

    • @thierrypecot8747
      @thierrypecot8747 2 years ago

      Hello! Of course, you could add the new images to the same project to apply the classifier. It is also possible to reuse the classifier in another project. In that case, go to the directory of the project in which the classifier was created, navigate to "classifiers/pixel_classifiers/", copy the JSON file bearing the classifier's name (epistroma.json if the classifier was saved as epistroma), and paste it into the "classifiers/pixel_classifiers/" directory of the new project. If that directory does not exist, create it; you should then be able to use "load pixel classifier", see the classifier, and apply it to the new images. Hope it helps!
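The copy described above can be sketched as a small Python helper. This is a hypothetical illustration, not part of QuPath: the project paths and the classifier name ("epistroma") are only examples, and the "classifiers/pixel_classifiers/" layout is the one mentioned in the reply.

```python
import shutil
from pathlib import Path

def copy_pixel_classifier(src_project: str, dst_project: str, name: str) -> Path:
    """Copy a saved pixel classifier JSON from one QuPath project to another."""
    rel = Path("classifiers") / "pixel_classifiers"
    src = Path(src_project) / rel / f"{name}.json"
    dst_dir = Path(dst_project) / rel
    # Create the destination folder if it does not exist yet, as noted above.
    dst_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(src, dst_dir / src.name))
```

After the copy, "load pixel classifier" in the destination project should list the classifier by name.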

    • @chloelahaie7203
      @chloelahaie7203 2 years ago

      @@thierrypecot8747 Thank you!

  • @erinsimonds1721
    @erinsimonds1721 2 years ago

    Thanks for this video, Thierry -- it was very clear and exactly what I needed.

  • @pampadey7208
    @pampadey7208 2 years ago

    Hi @Thierry, thank you so much for your video on tracking particles. I also need information about the speed components, like vx and vy, rather than the speed magnitude. How can I extract those? Could you please let me know?

    • @thierrypecot8747
      @thierrypecot8747 2 years ago

      Hi! Glad to see this video is helpful. Speed information is available in the "Edges" tab of the track table. It gives you the instantaneous speed of a given tracked object between two frames. Hope it'll be helpful.
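Since the tables report the speed magnitude, the velocity components vx and vy can be derived from the successive coordinates exported for one track. A minimal sketch, assuming the positions of one object are given in temporal order as (t, x, y) triplets (this helper is an illustration, not TrackMate code):

```python
def velocity_components(positions):
    """Return [(vx, vy), ...] between consecutive time points.

    positions: list of (t, x, y) triplets in temporal order.
    """
    out = []
    for (t0, x0, y0), (t1, x1, y1) in zip(positions, positions[1:]):
        dt = t1 - t0
        # Finite-difference estimate of the instantaneous velocity.
        out.append(((x1 - x0) / dt, (y1 - y0) / dt))
    return out
```

The magnitude sqrt(vx**2 + vy**2) of each pair should match the instantaneous speed reported in the "Edges" tab.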

  • @thierrypecot8747
    @thierrypecot8747 2 years ago

    2:20 Duplicate image for each channel
    4:04 Create annotations for the first (nuclear) marker
    5:55 Create object classifier for the first marker
    13:12 Saving the object classifier for the first marker
    13:53 Create annotations for the second (membrane) marker
    15:27 Create object classifier for the second marker
    16:31 Apply all object classifiers to identify the markers associated with each cell
    18:30 Load pixel classifier for epithelium/stroma segmentation
    19:53 Measurements

  • @thierrypecot8747
    @thierrypecot8747 2 years ago

    0:23 Opening the script
    1:28 Script processing (parameter tuning)
    6:28 Script processing (whole image)

  • @thierrypecot8747
    @thierrypecot8747 2 years ago

    1:50 First set of annotations
    2:53 Pixel classifier creation
    4:17 Pixel classifier with more annotations
    9:13 Saving the pixel classifier

  • @thierrypecot8747
    @thierrypecot8747 2 years ago

    0:36 Install the StarDist extension for QuPath
    1:44 Move the script folder to the right place (so the script knows where the model is)
    3:25 Open the script
    7:44 Running the script (parameter tuning)
    17:00 Running the script (whole slide)
    18:50 Identification of DAB-positive cells (thresholding)
    26:31 Measurements

  • @thierrypecot8747
    @thierrypecot8747 2 years ago

    0:25 Load the image into the project
    1:14 Stain deconvolution
    2:10 Positive cell detection (parameter tuning)
    18:12 Positive cell detection (processing the whole slide)
    20:43 Measurements

  • @thierrypecot8747
    @thierrypecot8747 2 years ago

    2:40 Annotations for creating a training image
    6:02 Training image creation
    8:30 Classes and first set of annotations for the pixel classifier
    11:30 Pixel classifier creation
    20:03 Classification after more annotations were defined
    25:44 Saving the pixel classifier
    26:40 Applying the pixel classifier to one image
    29:57 Obtaining the measurements
    30:47 Applying the pixel classifier to several images (batch processing)
    35:12 Exporting measurements for several images

  • @thierrypecot8747
    @thierrypecot8747 2 years ago

    0:00 QuPath installation
    0:41 QuPath documentation
    1:31 Image.sc forum
    4:17 Script and model downloading
    5:07 Image downloading

  • @karinaratautaite7050
    @karinaratautaite7050 2 years ago

    Hi. If I have 3 channels (red, green, and blue) and want to calculate the Pearson coefficient with JACoP, should I calculate red+green, red+blue, and green+blue and then take the average, or which coefficient is the correct one?

  • @raziasultana8103
    @raziasultana8103 3 years ago

    @Thierry Pécot Thank you for the detailed video. I am trying to study ER-mitochondria contact sites. I have used the respective tracker dyes and am trying to measure colocalization with the JaCoP plugin in Fiji. The issue is that I get similar Manders and Pearson coefficient values for all the ROIs in an image, and even without any ROI I get the same values (I am trying to set the same threshold value for every ROI in one image). Another issue is that the threshold changes for every ROI in the same image each time I launch the plugin. Isn't the threshold supposed to stay the same? Your help would be greatly appreciated. I have been trying to figure this out for the past 3 days. Thank you!

    • @thierrypecot8747
      @thierrypecot8747 3 years ago

      Hi Razia! It's not surprising to get different values across the ROIs in your images; there is always variability when observing biological processes. However, if you have a sufficiently high number of observations, you should be able to draw conclusions. The threshold values are used to segment your objects, and it is acceptable for them to differ depending on the intensity variations you observe locally. You can also investigate other ways to segment your objects; for instance, it might be a good idea to normalize the intensities so you can use the same threshold everywhere. At the end of the day, as with any image analysis task, there are many ways to do it, but you need to be careful, make sure your analysis makes sense, and describe it well in the methods section of the articles you write. It is a bit difficult for me to offer more help; if possible, I would suggest finding image analysis experts around you so that you can discuss it in more detail. Hope it'll help.
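The normalization idea mentioned above can be sketched as a simple min-max rescaling, so that one threshold can be applied across ROIs. A minimal pure-Python illustration (in practice this would be done on image arrays, e.g. with NumPy or directly in Fiji):

```python
def normalize_intensities(values):
    """Rescale a list of intensity values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # Flat region: nothing to stretch, map everything to 0.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```

With every ROI rescaled this way, a single threshold (say 0.5) selects comparable fractions of the dynamic range in each region.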

    • @raziasultana8103
      @raziasultana8103 3 years ago

      @@thierrypecot8747 Thank you for your reply. My problem is that the JaCoP plugin is not handling the ROIs: I get the same values for all the ROIs. In other words, the ROIs are not being selected. Is there a way I can use JaCoP on images with ROIs (all ROIs were added to the ROI Manager)? I don't know if I'm missing a step. Thanks in advance

    • @thierrypecot8747
      @thierrypecot8747 3 years ago

      @@raziasultana8103 Hi Razia! You're right, JaCoP does not handle ROIs, so the whole image is considered. If you want to compute the Manders coefficients in only some parts of your image, you'll have to crop the image before using JaCoP; that's the only way, unfortunately. Hope it'll help.

    • @raziasultana8103
      @raziasultana8103 3 years ago

      @@thierrypecot8747 I got it. Thank you so much for your prompt replies. I appreciate it

    • @thierrypecot8747
      @thierrypecot8747 3 years ago

      @@raziasultana8103 Glad to be helpful, good luck with your analysis!!!