NeuroTech
Joined Sep 26, 2012
As a neuroscience researcher, I provide detailed tutorials on various techniques, including animal behavior analysis, microscopy image analysis, and cellular physiology. If you're looking to learn and use emerging AI technologies (like deep learning) for these purposes, then please subscribe to my channel and support my efforts to bring you step-by-step tutorials.
Quantify Repetitive Jumping Behavior (A Simple Post-DLC Python Project)
In this video, I guide you through a project from start to finish. You'll see every step, every mistake, and the final product.
Jumping behaviors are common in many animal model species. Although these behaviors have been closely associated with OCD, autism, and habitual behaviors, their exact pathobiology remains unknown. In this video, I use the jumping behaviors of these animals as an example of repetitive behaviors: fixed, rhythmic movements lacking any apparent goal.
Github Repo for this project: "Coming Soon!"
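For readers who want a head start before the repo goes up, here is a minimal sketch (not the project's actual code) of counting jump events from a DeepLabCut output CSV by peak-finding on a body part's vertical position. The file name, body-part name ("spine1"), frame rate, and thresholds are all assumptions to be tuned per video.

import pandas as pd
from scipy.signal import find_peaks

# DLC CSVs have three header rows (scorer, bodyparts, coords);
# header=[1, 2] builds a (bodypart, coord) column MultiIndex.
df = pd.read_csv("jumping_videoDLC.csv", header=[1, 2], index_col=0)
y = df[("spine1", "y")].to_numpy()  # pixel y grows downward in image coordinates

fps = 30
peaks, _ = find_peaks(
    -y,                       # flip so jumps (upward motion) become peaks
    prominence=20,            # minimum jump height in pixels (tune per setup)
    distance=int(0.2 * fps),  # require at least 0.2 s between jumps
)
print(f"Detected {len(peaks)} jumps over {len(y) / fps:.1f} s of video")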
Views: 29
Videos
Kinematics and Postural Statistics from DLC Data Using BehaviorDEPOT: (Part 2)
79 views · 1 month ago
BehaviorDEPOT is flexible software designed for decoding animal behaviors from positional tracking. In this video, I walk you through using BehaviorDEPOT to detect behaviors from video time series and analyze the results of experimental assays. I demonstrate its application using a demo video from the BehaviorDEPOT GitHub repo, but the software is versatile enough to be used fo...
VAME - Training, Model Evaluation, and Behavior Segmentation
121 views · 2 months ago
This video is a follow-up to the quick-start tutorial I posted earlier (th-cam.com/video/k8mZhNLPE60/w-d-xo.htmlsi=YW4Lwfn9meQBLVSS). In that video, I rushed through the last part and did not demonstrate some of VAME's capabilities. Here, I review more code examples and demonstrate VAME's ability to segment various types of animal behaviors from pose estimates....
Variational Animal Motion Embedding (VAME): Quick Start Tutorial
134 views · 2 months ago
In this video, I guide you through setting up the VAME (Variational Animal Motion Embedding) pipeline. This is a quick-start tutorial to get you up and running. VAME performs unsupervised learning, which means it can identify meaningful patterns (motifs) without relying on labeled datasets. This approach enables the analysis of high-dimensional behavioral time series data, mu...
Chat With Research Documents Privately
42 views · 2 months ago
I assume many of you watching this video would agree that AI tools offer tremendous productivity benefits. But every time I type a query into a GPT interface, I get a sinking feeling in my stomach. That feeling stems from the privacy terms in the age of cloud computing, where AI service providers potentially store every query we make. Using large language...
Using A-SOiD and DeepLabCut for Behavior Classifications: Installation & Basic Tutorial
235 views · 3 months ago
In this video, I guide you through A-SOiD, a platform for creating efficient behavior classifiers from DeepLabCut and SLEAP (sleap.ai) outputs. I start by explaining the prerequisites for getting started with A-SOiD: an A-SOiD installation, CSV outputs from DeepLabCut, labeled MP4 files, and ground-truth behavior annotations to train the classif...
Example of Novel Object Interaction_w/YOLOv8
49 views · 4 months ago
Hi everyone! In this video, I demonstrate the use of YOLOv8 in a novel object recognition test with mice. I use T-maze videos to show you how to set up your project and train your models to recognize a mouse and any object. While this video is only an example, it is a powerful reminder that many of the techniques I showcase on this channel can be adapted for other animal behavior resea...
Kinematics and Postural Statistics from DLC Data Using BehaviorDEPOT: (Part 1)
125 views · 4 months ago
BehaviorDEPOT is flexible software designed for decoding animal behaviors from positional tracking. In this video, I walk you through using BehaviorDEPOT to detect behaviors from video time series and analyze the results of experimental assays. I demonstrate its application using a T-maze example, but the software is versatile enough to be used for open field, elevated plus...
Sleap.ai (Installation and Training Tutorial)
445 views · 4 months ago
This is a video tutorial on how to install SLEAP and begin training a model for pose estimation. Installation instructions can be found here: sleap.ai/installation.html. The tutorial is based on: SLEAP 1.2.0, TensorFlow 2.7.1, NumPy 1.21.5, Python 3.7.11, OS: Windows 10.
3D Mitochondria Analysis Using ImageJ: (Part 1)
440 views · 4 months ago
Join me in this tutorial as I dive into the fascinating world of mitochondria and explore the power of ImageJ, an open-source image processing program designed for scientific multi-dimensional images. In this video, I will guide you through analyzing various aspects of 3D mitochondrial images. You will learn how to obtain information such as mitochondria distribution, proximity to each other, a...
Bonsai + DLC (Part 3): Extracting Quantitative Behavioral Data
109 views · 5 months ago
Bonsai and DLC integrate well, but how do you turn the positional data from pose estimation into behavioral information? This tutorial follows up on two previous ones, in which I showed how to set up Bonsai with DLC (th-cam.com/video/ZfKEbzZHupk/w-d-xo.html) and how to draw multiple regions of interest (ROIs) within Bonsai (th-cam.com/video/2S-XsBhmFOs/w-d-xo.html). In thi...
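Since the description is cut off, here is a rough sketch of the kind of post-hoc calculation it refers to: once per-frame x/y positions are saved out (for example, as a CSV from Bonsai), time-in-ROI reduces to a boolean mask. The column names, ROI bounds, and frame rate below are assumptions.

import numpy as np
import pandas as pd

pos = pd.read_csv("bonsai_positions.csv")        # assumed columns "X" and "Y"
x, y = pos["X"].to_numpy(), pos["Y"].to_numpy()

x0, y0, x1, y1 = 100, 50, 300, 250               # example rectangular ROI (pixels)
inside = (x >= x0) & (x <= x1) & (y >= y0) & (y <= y1)

fps = 30
print(f"Time in ROI: {inside.sum() / fps:.1f} s of {len(inside) / fps:.1f} s")
entries = np.count_nonzero(~inside[:-1] & inside[1:])   # False -> True transitions
print(f"Entries into ROI: {entries}")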
Single Animal Behavior Analysis (Yolov8)
89 views · 5 months ago
Code available on the GitHub page: github.com/farhanaugustine/BehaviorAnalysis_YOLOv8
Zenodo page: zenodo.org/doi/10.5281/zenodo.11264288
1. Create a folder for images - create a folder and name it ‘images’.
2. Run FFmpeg - run the following command in your terminal: `` ffmpeg -i "video path/name" -vf "fps=1/2" -frames:v 20 output%d.png `` - this command will extract frames from the video file ...
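After the extracted frames are labeled, training and running a YOLOv8 model takes only a few lines with the ultralytics package. This is a generic sketch, not the repo's exact settings; "data.yaml" (dataset paths and class names), the epoch count, and the video name are assumptions.

from ultralytics import YOLO

model = YOLO("yolov8n.pt")                       # start from a pretrained nano model
model.train(data="data.yaml", epochs=100, imgsz=640)

# Track the animal in a behavior video; each result holds per-frame boxes
results = model.track(source="behavior_video.mp4", save=True)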
T-Maze (Real-time) Quantification w/YoloV8
74 views · 6 months ago
This is a small demonstration of the concept. Please let me know if you want a full-scale demonstration that includes the Python script. Here, I showcase real-time quantification of entries, exits, speed, and time in ROI (number of frames) using YOLOv8. The model is trained to segment the mouse from the background and calculate its center of mass for accurate tracking. Positional data is then used ...
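As a rough illustration of the idea (not the script from the video, which uses a segmentation model's center of mass), here is a sketch that tracks the mouse with YOLOv8, takes the bounding-box center as a simple stand-in for the center of mass, and counts frames spent inside an ROI. The weights file and ROI bounds are assumptions.

from ultralytics import YOLO

model = YOLO("tmaze_best.pt")                    # assumed: your trained weights
x0, y0, x1, y1 = 200, 100, 400, 300              # example ROI bounds (pixels)
frames_in_roi = 0

for r in model.track(source="tmaze.mp4", stream=True):
    if len(r.boxes) == 0:                        # no detection this frame
        continue
    bx0, by0, bx1, by1 = r.boxes.xyxy[0].tolist()
    cx, cy = (bx0 + bx1) / 2, (by0 + by1) / 2    # box center
    if x0 <= cx <= x1 and y0 <= cy <= y1:
        frames_in_roi += 1

print(f"Frames in ROI: {frames_in_roi}")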
Animal Detection & Tracking with YOLO
453 views · 6 months ago
DeepLabCut- From Creating a Project to Exporting a Model
681 views · 7 months ago
DeepLabCut_Multi_ROI_Analysis (T-Maze Behaviors)
329 views · 7 months ago
(Easy!) Analyze DeepLabCut CSV Like This...
579 views · 8 months ago
Installing DeepLabCut + CUDA (RTX-3060 Laptop)
626 views · 9 months ago
BONSAI + DLC (Part 1): How much time does an animal spend in a single ROI?
270 views · 9 months ago
BONSAI + DLC (Part 2): Create Multiple ROIs for Animal Behavior Analysis in Bonsai
150 views · 9 months ago
Easy Tracking of Mitochondrial Fission-Fusion Events With ImageJ
836 views · 2 years ago
Mitochondrial Dynamics In Olfactory Supporting Cells (Part 1)
36 views · 2 years ago
Mitochondrial Dynamics-Part 2 (Enhancing Time-lapse Video via Deconvolution)
132 views · 2 years ago
@NeuroGuides Hello! It's been a while since the training was completed, so I don't remember the exact settings. Here's the export command I used:
sleap-export --model "D:\Sleap\0727\models\240727_223137.centroid.n=55" --model "D:\Sleap\0727\models\240727_223713.centered_instance.n=55"
I hope this helps! (Sorry, I'm not sure why I can't reply to comments; I can only start a new comment here.)
great
So, for behavior annotation, you created folders for each designated behavior, went through 1000 frames (in this case), and sorted each frame into the appropriate behavior folder?
Great clarifying question, thank you for asking! Yes, behaviors spanned multiple frames, so each frame that showed the mouse performing the behavior of interest was pulled into a separate labeled folder. For example, frames showing rearing behaviors were copy-pasted into a folder called "rears."
@@NeuroGuides Thanks for clarifying! I have a few more questions: i) What should I enter for "sample rate of your annotation files"? If for one-hot encoding I split a 5-minute video into 1000 frames (each frame 300/1000 = 0.3 s apart), would it be 0.3 (or 1/0.3)? ii) Can I upload multiple DLC pose estimation .csv files and the annotation files corresponding to their videos to get better results? iii) In the Predict tab, is there an option to export the output? Where do the 'Create labeled videos' outputs get saved? In the same output directory?
@@atanu_giri i) Sample rate of annotation files: if you split a 5-minute video into 1000 frames, each frame is 0.3 seconds apart (300 seconds / 1000 frames = 0.3 seconds per frame). For one-hot encoding, you should therefore enter 0.3 as the sample rate. ii) Yes, you can indeed upload multiple DLC pose estimation .csv files and their corresponding annotation files. This can help improve the model's performance by providing more comprehensive training data. iii) There is an option to export the output in the Predict tab. The 'Create labeled videos' option will save the labeled videos in the same output directory where your input files are located.
@@NeuroGuides Thank you!
SLEAP doesn't have zhq options
Thank you very much, this video is helpful. Could you please show the process of detecting freezing behavior in fear conditioning?
Thank you for sharing. When will Part 2 of the tutorial be available?
Hi @ZhudiTang, I was planning to make the second part, but my MATLAB license expired. I will make the Part 2 video once I am able to renew my license. Meanwhile, if you are looking for anything specific about BehaviorDEPOT, you can reach out via GitHub and I will try to help you more.
I keep receiving an error at In[11]. Please help! I am not a programmer; I work in a biology lab and was tasked with analyzing the data we have collected. This is for a novel object behavior test looking at how long a mouse spends with a known object vs. a new object. I am at a complete standstill. Do you believe this will work for this? I also received errors in Bonsai, so so far I can't get any results. Here is my error:

KeyError Traceback (most recent call last)
File C:\ProgramData\anaconda3\envs\DEEPLABCUT\lib\site-packages\pandas\core\indexes\multi.py:3053, in MultiIndex.get_loc(self, key)
KeyError: 69

The above exception was the direct cause of the following exception:

KeyError Traceback (most recent call last)
Cell In[11], line 10
      5 unique_body_parts.remove('bodyparts')
      7 for part in unique_body_parts:
      8     # Use the MultiIndex to access the data
      9     body_part_data[part] = {
---> 10         "x": df.loc[:, (part, 'x')].to_numpy(),
     11         "y": df.loc[:, (part, 'y')].to_numpy(),
     12         "likelihood": df.loc[:, (part, 'likelihood')].to_numpy()
     13     }
     15 for part in body_part_data.keys():
     16     print(f"body parts: {part}")

[... pandas _LocationIndexer / MultiIndex.get_loc internal frames omitted ...]

File C:\ProgramData\anaconda3\envs\DEEPLABCUT\lib\site-packages\pandas\core\indexes\multi.py:3055, in MultiIndex.get_loc(self, key)
KeyError: ('Unnamed: 1_level_0', 'x')
@schaefermuellermusic Hi, sorry for the late reply. Please tag me in your comments so YouTube will notify me. The error message suggests that a column is missing from a MultiIndex DataFrame. Are you running a multi-animal model?
Hey @@schaefermuellermusic, please reach out to me via GitHub if you can. I would be happy to help, and it will be easier to track the issues you are facing with the code. If you can, please open a new discussion in the GitHub Discussions section (github.com/farhanaugustine/DeepLabCut-Analysis-Jupyter-Scripts/discussions). Please describe in detail at which step of your analysis you are getting stuck and, if possible, share as much detail as you can so I can understand where the problem might be. The Jupyter script should work for the novel object test.
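For anyone else hitting the same KeyError: a column MultiIndex containing 'Unnamed: ...' levels usually means the CSV was read with the wrong header rows. A quick sketch of a safer load (the file name is a placeholder):

import pandas as pd

# DLC CSVs have three header rows (scorer, bodyparts, coords);
# rows 1 and 2 give a clean (bodypart, coord) column MultiIndex.
df = pd.read_csv("videoDLC_resnet50.csv", header=[1, 2], index_col=0)
print(df.columns.get_level_values(0).unique())   # should list your body parts
x = df[("nose", "x")].to_numpy()                 # ("bodypart", "coord") access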
Thank you very much! This video is very helpful for my analysis!!
Thank you so much for letting me know this. It really means a lot. I’m glad this video helped you with your analysis. I have a few more videos planned around 3D mitochondrial morphology analysis. Stay tuned for those in the future.
This tutorial is incredibly detailed and has been extremely helpful for learning how to combine Bonsai with DLC. I have a few questions: when running the program, system performance is limited by the use of DLC, resulting in lower frame rates during video analysis. Is there any solution to this issue? I also noticed in your video that you manually drag the video progress bar. Does this affect the frame count during analysis?
Hi @samyou6552, the frame-drop issue is with the "DetectPose" node. The Bonsai-DLC community is aware of this issue, and DLC seems to be the bottleneck; unfortunately, there is no real fix for now. Here is what I do, which seems to work on my hardware. It may or may not work for you, but it is worth trying. I use the scale factor in the Bonsai interface (see timestamp 2:55) and downsample my videos. I personally downsample from 60 fps (1080p) to 20 fps (720p, and even 360p for long videos) so that my computer can run DLC without bottlenecking frames. Downsampling tends to halt the frame loss in my analysis. I also scale the incoming frames for Bonsai-rx. In the default settings of your DLC pose_cfg.yaml, you can see the lowest and highest global scale and image input size that your model can use for inference. The default is 0.5 to 1.25 for input size and 1 for global scale, meaning you can scale your input images down to 0.5 times the original size for faster prediction with a minor-to-mild decrease in precision. I know this might not be the answer you were hoping for, but it's what works for me, and I will let you know if I figure out a better workaround in the future. As for your last question: lol, please don't drag the progress bar manually, it will cause a massive loss of frames. I did it in my video because I wanted to finish the analysis quickly. 😅
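For reference, a downsampling command along the lines this reply describes might look like the following; the file names and exact rates are placeholders, and the fps and scale filters are standard FFmpeg: `` ffmpeg -i input_60fps_1080p.mp4 -vf "scale=-2:720,fps=20" output_20fps_720p.mp4 ``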
Thanks!😊😊 I am also wondering whether Bonsai can use the DLC nodes to analyze 3D models or multi-animal models? Have you ever tried? If so, I look forward to seeing more tutorial videos 😊
Thank you for your question! I will look into it. I tried using ma_dlc models with Bonsai-rx before (in 2022), but at that time the results were not satisfactory; the key points generated by the models often didn't make sense. Officially, ma_dlc models can work with Bonsai-rx multi-animal tracking. However, with the release of DLC 3.0+, which supports PyTorch, I anticipate that integrating these models with Bonsai-rx will need to be reworked entirely. Given these considerations, I wouldn't recommend using ma_dlc models in Bonsai-rx at the moment unless it's vital. Once I have some time, I can revisit this and see if there have been improvements.
@@NeuroGuides Thank you very much for your patient and detailed answer. I appreciate it.❤️
@@NeuroGuides Hello, I apologize for bothering you again. I would like to ask you more about the issue of implementing multi-animal tracking in Bonsai. I'm using the bonsai.sleap module for multi-animal tracking and can track multiple animal body parts in real time, but I'm encountering some issues when exporting the data for each mouse ID. I was wondering if you have looked into the application of the SLEAP open-source package in Bonsai's real-time prediction models? If you have some time, could you please try it out and perhaps create a tutorial video on it? I would greatly appreciate it!
Helped me a lot, appreciate it XD
😄 glad it helped!
I will watch your video later, because I'm trying to apply this technique in my lab.
Hi @cesarcaballero4780, I'm just following up to see if you were successful in applying YOLO to your behavior analysis. I hope it is going well.
@@NeuroGuides Thank you so much for your interest. Unfortunately, my lab and I were in the final part of the semester and needed to train a new generation of bachelor students to do lab work during the summer holidays. Now we are on holiday, and I hope to apply YOLO. It's a bit difficult for me because I learned with DLC. If I have an issue, I will write to this channel.
9:19 13:27 To export the model, run these lines in the Anaconda Prompt:
activate DEEPLABCUT
ipython
import deeplabcut
deeplabcut.export_model(r"PATH")
YOLO is what I recommend if you don't need specific body-part information. YOLO is easier to train and can be very efficient. In addition, with Python, many calculations can be done simultaneously during inference, which can ultimately save time.
For the open field test and the novel object recognition test, do you recommend DeepLabCut or YOLO?
I personally use YOLO for open field tests. I also find YOLO easier and faster to train. In addition, the YOLO workflow is easier to expand upon using Colab and Python.
13:20 What do you mean by generate the labels? Analyze videos?
Yes, I meant analyze the videos. Thank you for bringing it to my attention.
Do you know if the exported model can be converted to a TensorFlow Lite model? I want to use the exported model on an edge device like a Raspberry Pi 4.
You should be able to convert the DLC exported model. However, you should try DLC-Live with Autopilot first. You can always reach out to the DLC-Live team on their GitHub page. There are various ways to get DLC working with Arduino and Raspberry Pi controllers. Kane et al., 2020 is a nice paper highlighting real-time, closed-loop feedback for markerless posture tracking. Autopilot integrates DLC-Live, and you can use your pretrained models via a Raspberry Pi.
Autopilot website: docs.auto-pi-lot.com/en/latest/guide/quickstart.html
DLC-Live SDK: github.com/DeepLabCut/DeepLabCut-live
DLC-Live team's GitHub page: github.com/DeepLabCut/DeepLabCut-live/issues/50
Thanks for the reply! I was able to get all the libraries installed on a Raspberry Pi 4, and when running inference with a normal model export I was getting less than 1 fps and 100% CPU usage lol. I ordered a Jetson Nano, which is a bit beefier. I will try again on that device @@NeuroGuides
@@thegtlab I would love to know if the Jetson Nano works for you. Are you using a MobileNet or ResNet model?
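An untested sketch of the TFLite route discussed in this thread: deeplabcut.export_model writes a frozen .pb graph, and TensorFlow's compat converter accepts frozen graphs. The file name, tensor names, and input shape below are all assumptions; inspect your own exported graph (e.g., with Netron) to find the real input/output names.

import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    "snapshot-100000.pb",            # exported DLC graph (placeholder name)
    input_arrays=["Placeholder"],    # assumed input tensor name
    output_arrays=["concat_1"],      # assumed output tensor name
    input_shapes={"Placeholder": [1, 480, 640, 3]},  # assumed input size
)
tflite_model = converter.convert()
with open("dlc_model.tflite", "wb") as f:
    f.write(tflite_model)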
Thank you for your videos, they have been really good. I am using DLC in Google Colab because I don't have a GPU. Could you make some examples on this platform?
@cesarcaballero4780 I am glad that you find these videos useful, and I appreciate your question. A lot of people use Google Colab, including myself. I will try to incorporate it into some of my other videos.
Hi NeuroTech, thank you so much for the video. I was not able to generate the .pb file. I was using this code:

import yaml
import deeplabcut

# Load the configuration from the YAML file
config_path = "C:/Users/jdeora/Desktop/DLC-jay-2024-03-06/dlc-models/iteration-0/DLCMar6-trainset95shuffle1/train/pose_cfg.yaml"
with open(config_path, "r") as config_file:
    cfg = yaml.safe_load(config_file)

# Specify the snapshot prefix (e.g., "snapshot-1000")
snapshot_prefix = "path/to/snapshot-somenumber"  # Replace with your actual snapshot prefix

# Export the model
deeplabcut.export_model(cfg, snapshot_prefix, TFGPUinference=True)

Please help me with the above issue. Thank you for your time.
Hi @sheshendra7, I will need more information about any errors you are receiving. If you are not receiving any errors and the code executes without issues, then I suspect that the paths in your pose_cfg.yaml or config.yaml file have been altered in some way. I recently uploaded a video where I walk you through how to initiate a DLC project, extract and label frames, train your model, and export your model for use with Bonsai. Please use the information provided in that video to help you out: th-cam.com/video/pTFPo14dIYQ/w-d-xo.html
Hey, super video! How do you get the ".pb" from DeepLabCut? When I train a network, I don't get any .pb. Can you help me, please? Thanks in advance!
I use the model export function. It is part of HelperFunctions.md and simple to execute. Are you using Colab or a local Anaconda DLC installation to export your trained model? You can execute the following command from a terminal (either Anaconda or Docker):
deeplabcut.export_model(cfg_path)
You can find more information about it here: github.com/DeepLabCut/DeepLabCut/blob/main/docs/HelperFunctions.md
When I executed the above command, this is the error I encountered in both the terminal and Jupyter Notebook:

deeplabcut.export_model("C:/Users/jdeora/Desktop/DLCdlc-models/iteration-0/DLCMar6-trainset95shuffle1/train/pose_cfg.yaml")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\ProgramData\anaconda3\envs\DEEPLABCUT\lib\site-packages\deeplabcut\pose_estimation_tensorflow\export.py", line 312, in export_model
    sess, input, output, dlc_cfg = load_model(
  File "C:\ProgramData\anaconda3\envs\DEEPLABCUT\lib\site-packages\deeplabcut\pose_estimation_tensorflow\export.py", line 113, in load_model
    train_fraction = cfg["TrainingFraction"][trainingsetindex]
TypeError: 'NoneType' object is not subscriptable

@@NeuroGuides, kindly help me with this.
@@sheshendra7 The error "TypeError: 'NoneType' object is not subscriptable" is raised when you try to index into a None object. In your case, cfg["TrainingFraction"] is None. This could be due to several reasons:
1. The configuration file pose_cfg.yaml might not be correctly formatted, might be missing some key-value pairs, or could have become corrupted. Please check the file and ensure it contains TrainingFraction.
2. The pose_cfg.yaml file might not be correctly loaded, resulting in cfg being None. You can open the pose config file and check that the file path is correct and accessible.
Also, according to GitHub issues related to DeepLabCut, this type of error can occur if an image defined in your training dataset is not found at its original location (most likely because it got deleted, or because the project_path in your config.yaml file is incorrect). This happened to me once when I downloaded a project from my Google Drive after training the model in Colab; I was able to fix it by correcting project_path in config.yaml and the paths defined in the pose config file. The issue can also come up when DeepLabCut fails to update project paths after additional training frames are added. Since the failing line is train_fraction = cfg["TrainingFraction"][trainingsetindex], I would first look at pose_cfg.yaml and config.yaml to make sure all the listed paths are correct.
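One more thing worth double-checking: the DLC docs pass the project-level config.yaml to export_model, whereas the command above points at train/pose_cfg.yaml. A quick sanity check along the lines of this advice is to load both YAML files and confirm the keys the export needs are present; the paths below are placeholders.

import yaml

for path in [r"C:\path\to\config.yaml", r"C:\path\to\train\pose_cfg.yaml"]:
    with open(path) as f:
        cfg = yaml.safe_load(f)
    print(path)
    print("  TrainingFraction:", cfg.get("TrainingFraction"))  # None reproduces the error
    print("  project_path:", cfg.get("project_path"))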
Hi @farhanaugustine21619, have you ever used the MiNA (Mitochondrial Network Analysis) plugin in ImageJ?
Hi @meenakshimaurya8233, my apologies for the late reply; I did not have my notifications turned on. Regarding your question: I have used MiNA in the past. Personally, I find it very useful for 2D images, but I have not tried it on 3D or 4D microscopy data.