Thank you for the explanation, this is really helping me with my final year project, sir. I hope you get rich and can buy a Lamborghini.
Man, awesome tutorial. I was watching the Cyberpunk documentary and suddenly started thinking about the Kinect, The Matrix, Altered Carbon, 3D, and here I am. Thanks a lot.
I don't think I've ever enjoyed a coding video more than this one! You are so inspiring!!
Mapping the min/max threshold with the mouse was a brilliant idea! Nice video. Keep 'em up!
lmao 😂😂
You are the person who inspired thousands not to be afraid to start coding. Thanks for that.
I watched all the tutorials, and whenever there are pixel-related loops my Processing render lags, and I don't know the main reason, because it's a 2015 MacBook Pro, which should handle the job quite fast. Maybe you can tell me what you use, and whether there could be other reasons for this happening.
Sorry for a question that isn't 100 percent related to Processing. I would like to hear more about algorithms that optimize the computation so everything runs faster.
Thanks again,
You are the best,
Keep Rockin.
Thanks a lot, you are one of the greatest teachers I've ever met!!
+Fiatty Panich Thanks for watching!
Thanks a lot for your very precise and fun tutorials! You made my efforts to tame the Kinect as a graphical tool so much easier. This deserves so much attention. Thank you for taking the time.
+Kong Kongterton you're welcome, thanks for the nice feedback!
I watch your videos and when I finish them I think I suddenly know how to do the work you just presented, and then reality sets in and I take another sip of beer and go to sleep.
Your enthusiasm is infectious. Thank you so much
I'm so grateful we have enthusiastic teachers like yourself helping people like me get excited about programming!! :D May I ask how you are pulling off your magic with having the computer screen show behind you?
I'm using a green screen and Wirecast software.
This video should be used in every computer vision class to teach students how to reverse the camera projection using depth information and focal distance (instead of learning it the hard way without any experimentation).
+davide sito thanks, I'm glad to hear it's useful!
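For anyone curious what that reverse projection looks like in code, here is a rough sketch of the idea (my summary, not the exact example; the intrinsic values are approximate Kinect v2 numbers, and in the video's sketch they live in the CameraParams tab):

// Recover a 3D point from a depth pixel: undo the perspective projection using the
// depth camera's focal lengths (fx, fy) and principal point (cx, cy).
// These constants are approximate Kinect v2 intrinsics.
float fx = 365.456, fy = 365.456, cx = 254.878, cy = 205.395;

PVector depthToWorld(int x, int y, float d) {
  PVector p = new PVector();
  p.z = d;                    // raw depth, in millimeters on the Kinect v2
  p.x = (x - cx) * p.z / fx;  // shift to the optical center, then scale by depth / focal length
  p.y = (y - cy) * p.z / fy;
  return p;
}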
This is awesome, thank you! The possibilities with this are limitless; can't wait to play around with the Kinect myself.
I used this technique combined with blob detection for an artistic installation where the Kinect was filming people touching a wall, from the top. And I used the same "calibration technique" =) Happy to see I wasn't alone "hacking" the Kinect this way!
You are very enjoyable to watch. I think I can get the idea in my head working thanks to your videos- so thanks =]
I am very in love with your tutorials. Can always come back to them :)
I do have a suggestion for determining the threshold. You can combine your code with a library like DeepVision that will detect where your hand is on screen. Then, you can use a mathematical formula (which I'll leave as a reply) that gets the distance between the hand and the camera. Using that distance in millimeters, and the pixel position where the DeepVision library detects your hand on screen, you can make a threshold that is not a constant but instead changes based on where your hand is. Therefore, you won't have to worry about standing a specific distance from the camera; the camera will just know where your hand is and base the thresholds on that.
Distance to Object(mm) = ( f(mm) * real height(mm) * image height(px) ) / ( object height(px) * sensor height(mm) )
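Plugging that formula into a small helper, just to show the units working out (all the numbers in the usage comment are made-up example values):

float distanceToObjectMM(float focalMM, float realHeightMM, float imageHeightPX,
                         float objectHeightPX, float sensorHeightMM) {
  // Distance(mm) = ( f(mm) * real height(mm) * image height(px) ) / ( object height(px) * sensor height(mm) )
  return (focalMM * realHeightMM * imageHeightPX) / (objectHeightPX * sensorHeightMM);
}

// e.g. a 180 mm tall hand spanning 120 px in a 480 px tall image, with a 3.3 mm focal
// length and a 2.0 mm sensor height: distanceToObjectMM(3.3, 180, 480, 120, 2.0) ≈ 1188 mm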
This is one of the most useful videos I've found - thanks for sharing!!!
I'm having issues with this, could you copy and paste the code to me as I feel there is somewhere I'm going wrong!
I love how you explain your ideas
keep making these awesome videos
Using this for a design project, thank you !!
So dope... btw, you can see the screen through your body lol, the green bar on your shirt.
How can I move the rotation point/axis to the center of the scene? Right now the scene rotates out of the screen. Tried to find an answer - did not find anything. Please help. Thanks for the great tutorial!
I kept getting an error saying "depthWidth cannot be resolved or is not a field", is there any way to go around this? Thank you!
That was the most helpful guide ever
Ahhh what a missed opportunity - instead of clipping the wall, you should have used the fact that it's already a green screen and "just" keyed it out.
Keep up the awesome stuff!
More than interesting. If I wanted to represent 3D points with an XYZ camera, would that formula work? Because you don't use the camera's z value.
Thank you very much from Guadalajara, Jalisco. If you come some day, don't think twice about calling; you have a home and friends here. Thanks for your videos.
Love your tutorials! I managed to get the first one to run; anything past that is a no. Now I keep receiving: the function kinect.initIR() does not exist. Working on OS X with a Kinect v1?? Thanks again!
Is there a way to record this data and enter it into a program where all the dots move and you can orbit, pan, zoom in, and observe it after capturing? Please, thanks.
Hi there, How can you set the min/max threshold with the data point visualization? I'm using KinectV2 on a mac. Huge Thanks!
Man your video is AMAZING!!!! Love your enthusiasm XDD
Thank you!
Awesome! I'm trying to map RPlidar data and a stepper motor step/angle to create real time room mapping
This is gold. Thank you.
you're welcome!
The Coding Train Seriously thank you. Just starting out with coding, and it's a bit intimidating to me, but your vids make it a little less so.
Hi, how can the Kinect measure a 3D object and reproduce it on a 3D printer? Thanks.
How can I save the point cloud coordinates so that I can use them for 3d reconstruction or correspondence matching? Also, can I use the PCL library in processing ide?
where is the playlist for this video series?
The distance you are using is in terms of what, cm or inches? Nice video by the way!
Hi Shiffman. I am thinking about making an interactive project using the Kinect. Where can I get those Kinects now? Or can I make one with my own camera and computer vision libraries such as OpenCV?
I have the Intel RealSense D415 depth camera and I would really like to do this with that camera, but since I am quite new to Processing I don't know where to start. I have added the RealSense library to processing and the examples there work great, but I would like to visualize a real time point cloud like in this video. Any help would be much appreciated!
Hi Daniel. Thank you for a fantastic tutorial series. I am an architect and I have been looking into exporting the point cloud generated by a Kinect in Processing into an .obj file. I am using Kinect v1. Exporting the points in real time would be perfect, but I am happy to only export a few static frames for now; the Kinect would essentially act as a scanning tool. I found a few old tutorials on the superCAD library, but the library seems to be outdated and no longer exists. Do you have any suggestions? Just to give you an idea, the goal is then to import the .obj files (.ply or .stl would work too) into 3D modeling software such as Rhinoceros, MeshLab, Blender, or Maya. Thank you very much and I really appreciate your generosity in sharing all this knowledge! Z
+Z Krtm take a look at this library (not sure if it's been kept up to date). github.com/nervoussystem/OBJExport I would ask on forum.processing.org, there must be a library that does this!
If you've had success with this, please let me know. I will need to do something similar shortly.
@@madmaxkal did you get close?
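In case it helps anyone landing here: one rough way to do this without the OBJExport library is to write a vertices-only .obj yourself with Processing's createWriter(); MeshLab and Blender can open it as a point cloud. This is only a sketch, assuming a Kinect v1 object named kinect initialized as in the other examples, and it writes raw pixel/depth coordinates (run them through the unprojection function if you want metric units):

void saveFrameAsOBJ(String filename) {
  int[] depth = kinect.getRawDepth();       // Kinect v1 raw depth, values roughly 0-2047
  PrintWriter out = createWriter(filename);
  for (int y = 0; y < kinect.height; y++) {
    for (int x = 0; x < kinect.width; x++) {
      int d = depth[x + y * kinect.width];
      if (d > 0 && d < 2000) {                      // skip pixels with no useful reading
        out.println("v " + x + " " + y + " " + d);  // one "v x y z" vertex per depth pixel
      }
    }
  }
  out.flush();
  out.close();
}

Call it from keyPressed(), e.g. if (key == 's') saveFrameAsOBJ("scan.obj");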
Dear Dan,
Thanks so much again for your perfect epic tutorials. I have one question. It should be possible somehow to compare a static background image with the recent incoming body movements, am I right? (In case I took a background picture without myself in the photo.) Would it be difficult to realize something like that? I'm trying to track an average x, y, z body position in a space, but my problem is some obstacles right next to me that also get tracked...
+Chris Los Yes, you'll want to make a copy of depth data in a separate array or image and then compare that to the current depth map to see which pixels are different. If you work on windows with the MS SDK it will do almost all of this for you also! github.com/ThomasLengeling/KinectPV2 Going to make some video tutorials about this soon.
+Daniel Shiffman
Thanks so much for your quick response. Makes sense to me. Unfortunately I have to work on OSX with a Kinect v1 because lots of my students are working with this configuration. I'll give it a try today. Hopefully I get the machine to surprise me with a valid xyz position. Thanks again. Best, Christian
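A minimal sketch of the comparison Daniel describes above, for the Kinect v2 setup used in this video (the variable names and the 50 mm tolerance are mine; for a v1, swap Kinect2/kinect2 for Kinect/kinect and remember the raw values are not millimeters):

import org.openkinect.processing.*;

Kinect2 kinect2;
int[] bgDepth;          // depth snapshot of the empty scene
int tolerance = 50;     // how many mm a pixel must change to count as "different"

void setup() {
  size(512, 424);
  kinect2 = new Kinect2(this);
  kinect2.initDepth();
  kinect2.initDevice();
}

void keyPressed() {
  if (key == 'b') bgDepth = kinect2.getRawDepth().clone();   // press 'b' with nobody in frame
}

void draw() {
  background(0);
  if (bgDepth == null) return;     // wait until the background has been captured
  int[] depth = kinect2.getRawDepth();
  loadPixels();
  for (int i = 0; i < depth.length; i++) {
    // a pixel is "new" if it has a reading and differs enough from the empty-room snapshot
    boolean changed = depth[i] > 0 && abs(depth[i] - bgDepth[i]) > tolerance;
    pixels[i] = changed ? color(255) : color(0);
  }
  updatePixels();
}

Averaging the x, y (and depth) of the white pixels would give the kind of average body position asked about above.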
Hi again! So in V1 it's kinect.width instead of Kinect.depthWidth. It works well now! Thanks
+Zaina Squid indeed that's right, sorry to be slow in the reply. I need to add that as an annotation!
+Zaina Squid Thanks for the tip! Do you have another one about "initDevice"?
+Gabriel Netto I believe initDevice is not needed for the v1. My apologies for this; I need to make v1 versions of all the examples and will get to that soon. Keep reminding me!
+Daniel Shiffman Thank you! I was wondering about forcing an initDevice because I'm writing a Kinect-to-Syphon sketch (using your code) for 2 devices (v1 1473), and only the first Kinect shows up when rendered on the canvas, but "println" shows them both as recognized. I'm new to Processing... Maybe something is wrong with sending "createGraphics" instead of PGraphics?
+Gabriel Netto Your example "MultipleServers" works great, but I couldn't send PGraphics to Syphon's canvas. It's my fault! But I would very much appreciate some guidance... ;)
Can I use the raw depth data of a cloud and combine it with the Game of Life parameters? Maybe it's easier if I take the RGB (mostly greys) values and apply them to the Game of Life rules? Thanks!
Hi Daniel, great tutorials. I was wondering if you can help me with some sources for particle effects with Kinect + Processing. I would like to capture live video and turn the movement into a particle effect; not the background, just the individual.
Hi, where is the OpenKinect API reference documentation? I found that Kinectv1.getDepthHeight() does not work for v1 and you have to just use Kinect.height(), because I tried it in an earlier example...
Hi! Thanks for the tutorials, they're great. In this sketch you use the CameraParams for the V2 Kinect: do you have the CameraParams for V1, or somewhere I can find them?
+Moisés H The CameraParams don't exist for V1 unfortunately. But if you look in the library example files there is a version of this example for Kinect V1s.
@@TheCodingTrain Amazing, thank you!
Hello
I have been following your tutorials for quite a while and these are really cool and awesome. This might be a little bit off topic, but is it possible to save a still point cloud in P3D in file formats like ptx, pts, xyz, txt, etc. that 3D rendering software like MeshLab, 123D, RealityCapture, ContextCapture, etc. can import? Can you help me out here? I am actually doing a project on a room-scanning robot, and I did manage to scan my room and display a 3D point cloud of it in Processing.
Did you figure out a solution? I am looking to do something similar.
Hello. I know this video is rather old, but are these examples compatible with Processing 4? I'm very new to programming and I got an error that some dependent libraries are missing.
UnsatisfiedLinkError: ..\Documents\Processing\libraries\openkinect_processing\library\v2\msvc\libusb-1.0.dll: Can't find dependent libraries
java.lang.UnsatisfiedLinkError: ..\Documents\Processing\libraries\openkinect_processing\library\v2\msvc\libusb-1.0.dll: Can't find dependent libraries
A library used by this sketch relies on native code that is not available.
UnsatisfiedLinkError: ..\Documents\Processing\libraries\openkinect_processing\library\v2\msvc\libusb-1.0.dll: Can't find dependent libraries
at processing.opengl.PSurfaceJOGL.lambda$initAnimator$2(PSurfaceJOGL.java:426)
at java.base/java.lang.Thread.run(Thread.java:833)
UnsatisfiedLinkError: ..\Documents\Processing\libraries\openkinect_processing\library\v2\msvc\libusb-1.0.dll: Can't find dependent libraries
Great videos, thanks so much! How would one create multiple depth thresholds in the same sketch?
Does anyone know why these examples run so slowly? I'm using a 2015 MacBook Pro.
Hi! Is it possible to create an interactive projection using depth maps and the Processing language??
Hello, when I tried running this it gives me the error "depthWidth" cannot be resolved or is not a field. do you know if it is because I'm using a V1 Kinect?
+Zaina Squid I'm using kinect v1 as well. I found that you can just change kinect2.depthWidth to kinect.width
How can I get the raw depth values in Python????
Hi! I'm trying to use this line of code for Kinect v1: int[] depth = kinect.getRawDepth(); but it doesn't seem to work. Does anyone know why?
Thanks for the tutorial! I was wondering if there is a way to create an outline of the user's body?
Search for "edge detection" algorithms. Also, I would suggest asking this at forum.processing.org? It's easier for me and others to help that way (you can share code there easily!).
Hello, I'm having trouble getting the raw depth to control the tint of a .mov file. Does anyone know how to make a video interactive with the Kinect?
Hey Daniel,
How can I get the raw RGB & infrared data in the face area? Then I need to save it to .csv or .txt.
I found many ways, but I still can't get a solution. Please help me, if possible.
Best regards,
Buzz
Hello, first I'd like to say that your videos are amazing and very informative (and you are great as well). I need your advice, please.
Can I map the room and draw only in pixels where the Kinect detected a change? Can you help me with that, please?
Basically I want to remove the background including the floor
Try out a threshold.
int minDepth = 0;
int maxDepth = 1200; -> everything between the closest detectable point and 1.2 meters.
When you draw the point:
if (depth[offset] >= minDepth && depth[offset] <= maxDepth) { /* draw the point */ }
Hey Daniel, how would you do the depth threshold when you're using the point cloud?
I've done it this way, using Daniel's point cloud code:
First declare the min and max thresholds at the beginning of the code. Then go to the loop where point() is drawn and wrap it in a conditional. If d (the depth variable) is > minThresh and < maxThresh, draw a point(0,0); else, set the point's color to black. This way only the pixels inside your desired area will be colored white, and the ones outside it will be black.
Try it out. If you can't get it working, ask me. I'm not a pro though.
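In code, the test around point(0,0) in the video's point cloud loop could look roughly like this (minThresh/maxThresh are my names; d, the translate, and the pushMatrix/popMatrix come from the example sketch):

int minThresh = 300;    // mm, nearest depth you care about
int maxThresh = 1200;   // mm, farthest depth you care about

// ... inside the nested x/y loop of the point cloud example:
if (d > minThresh && d < maxThresh) {
  stroke(255);   // inside the slice: draw white
} else {
  stroke(0);     // outside the slice: draw black so it vanishes into the background
}
point(0, 0);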
can we do this with a webcam?
Hi, thanks a lot for all your videos! They are great. I am trying to learn Processing for doing projection mapping; I wanted to ask you if there is an example of this for Kinect v1? Thanks a lot again!
Love these videos and thank you for sharing your knowledge!! How do I obtain the code for a simple point cloud feed? And how do I plug it into Processing? I literally just started coding yesterday, just for the Kinect.
Thank you so much!! Helps a lot.
I'm having an issue when it comes to the PVector depthToPointCloudPos. It's throwing me an error that cameraParams.cx & CameraParams.cy do not exist. Has something changed? Where can I view that in the library?
+Michelle Sherman Make sure you have the most recent Processing (3.0.2) and version of the library. I think this issue has been resolved. If not you can post here: github.com/shiffman/OpenKinect-for-Processing/issues/
Hey, thanks for the tutorials, I've been learning a lot from them.
I'm trying to use Fisica + OpenKinect and I'm having some problems. Do you think you could give me a hand? I would really appreciate it. The code is simple, but I don't know how to add the depth values to an FBlob in Fisica, to make them move with the data taken from the Kinect.
Hi!
I'm trying to select a portion of the Kinect's depth range to set a custom threshold that separates a body from the background.
The problem is that I'm using a Kinect v1, so when I tweak the code (changing kinect2 to kinect in the PointCloud2 and other examples) Processing reports that some functions and variables don't exist, like
"initDevice", "depthHeight", "depthWidth", and the class "KinectTracker" in this example. I've tried your "Kinect Processing Demo" example and it works like a charm... any clues?
(I've posted a similar question at github too)
Thanks!
+Gabriel Netto My question is partly answered by Zaina Squid below, the functions part (depthHeight is just height on Kinect v1, and so on).
Where can I find the correct syntax for Kinect v1? As I've only started coding in Processing recently, I just can't figure some things out... and I'm a little lost. I try to understand the reference provided (shiffman.net/p5/kinect/reference/org/openkinect/processing/Kinect.html) but I don't understand it as much as I would like... any hints? Thanks!
+Gabriel Netto I've now got on my list to make v1 versions of these examples, stay tuned!
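Until those exist, here is a minimal v1 raw-depth sketch for anyone stuck on the renames (my sketch, not one of the library's examples; on v1 it is kinect.width / kinect.height, there is no initDevice(), and getRawDepth() returns raw sensor values of roughly 0-2047 rather than millimeters; method names may differ slightly between library versions):

import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initDepth();
}

void draw() {
  background(0);
  int[] depth = kinect.getRawDepth();
  loadPixels();
  for (int i = 0; i < depth.length; i++) {
    // v1 raw depth is an 11-bit value (bigger = farther, 2047 = no reading), not millimeters;
    // the 700 cutoff here is arbitrary, so print a few values and tune it for your space
    pixels[i] = (depth[i] > 0 && depth[i] < 700) ? color(255) : color(0);
  }
  updatePixels();
}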
When I try using the point cloud example that came with the library, a message appears in the console saying "isochronous transfer error 1".
+Allan Hagelstrom Does the example run OK? I think you can ignore the error.
Where can I get the data to practice the tutorial?
Hi Daniel! How can I export the points of the cloud into Excel or any other program? I need the coordinates to make a 3D model from them. Thanks!
I'm in the same boat. Let me know if you figure it out, please.
@@madmaxkal did you find an answer?
I watched most of your videos but I can't find my goal. In my project I have to divide what the Kinect image sees at 640x480 into 100 frames, where each frame has a reading from 0 to 10, where 0 is 0 meters and 10 is 2 meters. Then each frame has to send that information to move a servo assigned to it; for example, if it sees a 5, move 45°. Is it possible? I don't know how. If necessary I'll subscribe for a year! But tell me if it's possible. I tried with AI but got the same result. Thanks, regards!
How can we record depth in WebM format?
Hi Daniel,
Very exciting stuff. I have the point cloud rendering based on your tutorials but am now at a loss regarding:
1. recording the point cloud data
2. exporting it as a CSV file.
I would like to import the CSV into Cinema 4D, and I have seen some Python scripts online that I may be able to use.
Any way you may be able to advise me on this would be great.
Alternatively, if anyone here knows how to make this into some sort of production-usable pipeline, I would be happy to compensate/hire them for something usable.
Thank you for your efforts here Daniel - I know they are appreciated by many of us!
Respectfully,
Jordan
Hi!
Have you found any solution?
I will need to do the same things.
Since the OP has not answered and I need the same thing, have you found any solution?
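For the CSV half of this, one rough approach is to dump the current frame's unprojected points with Processing's built-in Table class (a sketch only; it assumes kinect2 and depthToPointCloudPos() from the video's point cloud example, and "points.csv" is an arbitrary name):

void saveCSV() {
  Table table = new Table();   // Table and TableRow are built into Processing
  table.addColumn("x");
  table.addColumn("y");
  table.addColumn("z");
  int[] depth = kinect2.getRawDepth();
  for (int x = 0; x < kinect2.depthWidth; x++) {
    for (int y = 0; y < kinect2.depthHeight; y++) {
      int d = depth[x + y * kinect2.depthWidth];
      if (d == 0) continue;                       // skip pixels with no reading
      PVector p = depthToPointCloudPos(x, y, d);
      TableRow row = table.addRow();
      row.setFloat("x", p.x);
      row.setFloat("y", p.y);
      row.setFloat("z", p.z);
    }
  }
  saveTable(table, "points.csv");
}

void keyPressed() {
  if (key == 's') saveCSV();   // press 's' for each frame you want to capture
}

Recording a sequence would just mean calling it every frame with a frame number in the filename.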
Is the sample accessible? This looks amazing.
code is all here: github.com/CodingRainbow/Rainbow-Code
Dude, you are just bloody amazing.
I gave a thumbs up as soon as you hugged yourself... hahaha. Thanks for the great video!
+Jeffrey Cordova hah, thank you!
I was hoping you would set the colour of the visible values to the depth value :p Also I feel like keeping track of only the pixels that change might be useful for tracking movement since you could just keep track of your hands position and just check against the changing values to see where and how much your hands moved. I need a Kinect :(
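That change-tracking idea in a rough sketch: compare each depth frame with the previous one and average the positions of the pixels that moved, which gives a crude motion centroid (my code, for the Kinect v2 setup from the video; the 30 mm and 50-pixel cutoffs are arbitrary noise filters):

import org.openkinect.processing.*;

Kinect2 kinect2;
int[] previous;

void setup() {
  size(512, 424);
  kinect2 = new Kinect2(this);
  kinect2.initDepth();
  kinect2.initDevice();
}

void draw() {
  background(0);
  image(kinect2.getDepthImage(), 0, 0);
  int[] depth = kinect2.getRawDepth();
  if (previous != null) {
    float sumX = 0, sumY = 0;
    int count = 0;
    for (int x = 0; x < kinect2.depthWidth; x++) {
      for (int y = 0; y < kinect2.depthHeight; y++) {
        int i = x + y * kinect2.depthWidth;
        // count a pixel as "moved" if both frames have a reading and it changed by > 30 mm
        if (depth[i] > 0 && previous[i] > 0 && abs(depth[i] - previous[i]) > 30) {
          sumX += x;
          sumY += y;
          count++;
        }
      }
    }
    if (count > 50) {                                // ignore sensor noise
      fill(255, 0, 0);
      noStroke();
      ellipse(sumX / count, sumY / count, 32, 32);   // crude centroid of the motion
    }
  }
  previous = depth.clone();
}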
Hi, does the link that you have included in the description have the actual code that is used at about 6 minutes, and if so, which files on GitHub are they? :) Thanks
+Samuel coldicutt That one is here: github.com/shiffman/Video-Lesson-Materials/tree/master/code_kinect/PointCloud2
github.com/CodingTrain/Rainbow-Code/tree/master/Tutorials/Processing/12_kinect/sketch_12_3_PointCloud2
My Processing said "The function kinect.getRawDepth(); does not exist." Can you help me?
+John Smith Could you post your full code and ask at forum.processing.org? Feel free to link from here.
Hi Daniel, when I tried to run the example code, the console says "kinect.depthWidth cannot be resolved or is not a field." How do I resolve this issue? I am using the Kinect v1. Thank you very much!
+fable59 It's kinect.width for v1. Will be updating the examples that go along with this tutorial soon!
Hello, I like your tutorials and explanations of how to use the Kinect. Is it possible to get in contact by email or video call? I'm working on a project to get the depth image from the Kinect to use as a sign language interpreter. Thanks.
Hey, I love your video, but the script downloaded from GitHub shows:
No Device Connected
Cannot Find Devices
Is it because I was running on Windows? I look forward to your answer. Thanks!
Use Zadig to install all libusbK drivers. Remember to go to option -> List all devices to install all drivers needed. You may want to use the Zadig 2.0.1 version if you're using Kinect v1.
On Kinect v1, kinect.depthWidth doesn't work. What should I do?
+Sankalp porwal it's just kinect.width for v1, sorry!
Hi! Well done. I'm new to 3D imaging and I'd like to know where I can get a script like the one you're showing, but that saves a file from the Kinect image, something like an .stl. Thanks a million.
awesome stuff
Pride run ☺️❤️
When I run this it says I'm mixing static and active modes with [stroke(255);].
Help!!
did you solve this issue?
You are great!
Does anybody have a step-by-step tutorial on how to get the Kinect to work with Processing on Windows 10?
Why does it not just use a 2D array?
How could I only find this now?
How do I remove the isochronous transfer error?
+Sankalp porwal I wish I knew! Do the examples work ok for you?
+Daniel Shiffman It was because my USB hub was not working properly.
By the way, I am wondering if there is a tutorial for hand tracking.
I actually want to control LEDs through Kinect hand tracking and an Arduino.
+Sankalp porwal The next videos show a way of doing hand tracking. I would also investigate github.com/ThomasLengeling/KinectPV2. I will be making more tutorials with this library soon.
That's so cool, man. Now I was wondering, what should the code look like to send the frames to a Syphon server? Could you help me?
Um, this is really old, but we can calculate the plane of the wall and then filter just in front of the wall.
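A sketch of that idea: unproject three pixels you know land on the wall (here, arbitrarily, near three corners of the frame while the scene is empty), build the wall plane from them, and then keep only points that sit some distance off that plane. It assumes kinect2 and depthToPointCloudPos() from the video's point cloud example; the sample pixels and the 50 mm margin are arbitrary:

PVector planePoint, planeNormal;

void calibrateWall(int[] depth) {
  int w = kinect2.depthWidth, h = kinect2.depthHeight;
  PVector a = depthToPointCloudPos(20, 20, depth[20 + 20 * w]);
  PVector b = depthToPointCloudPos(w - 20, 20, depth[(w - 20) + 20 * w]);
  PVector c = depthToPointCloudPos(20, h - 20, depth[20 + (h - 20) * w]);
  planePoint = a;
  planeNormal = PVector.sub(b, a).cross(PVector.sub(c, a));   // normal of the wall plane
  planeNormal.normalize();
}

boolean offTheWall(PVector p) {
  // distance from the wall plane; abs() so the normal's direction doesn't matter
  float dist = abs(PVector.sub(p, planePoint).dot(planeNormal));
  return dist > 50;   // keep points more than 50 mm away from the wall
}

Call calibrateWall() once with an empty scene, then only draw the points for which offTheWall() is true.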
Sir, how can I use .png files of RGB and depth images and convert them to a point cloud?
Thanks to xkcd, every time I see a Moiré pattern (such as at 0:49) I get Dean Martin stuck in my head.
Which language are you using...
This video uses Processing (which is built on top of the Java programming language). For more info, visit processing.org and also this video might help th-cam.com/video/AmlAiKsiy0o/w-d-xo.html.
Hey! You are fucking amazing! Thank you very much for this awesome tutorial! I study graphic design in the Netherlands and I started learning Processing because of one of my classes. I got in contact with the Kinect because I needed to do an installation, and then my teacher recommended your videos. I can't express in words how much this video helped me! I did the tutorial, plus I added 6 more layers side by side with different colours. Now I'm in the second stage of my installation and I would like to ask you one question: I want to use those different layers with the tint command, but it only works with images. Lastly, in the background I'm running music; do you think it's possible to control the speed of the music with my hands? For instance, if I move my right hand towards the right the music would go faster, with the left hand the music would go slower, and if you draw a circle in the air you would make a loop. I was trying to show you the video I made, but I can't post videos here. I was so happy because, after several weeks of trying to get those codes running, I managed to do it with your videos!! Thank you very, very much for all these classes! It's because of the effort of people like you that more and more people can get access to top-level knowledge! Spread this magic again and again! Cheers!
5:14 song starts playing: "My Girls by Animal Collective"
peacebone
lol i like this dude
I am talking about the point cloud sketch, sorry.
How can I detect an object? Let's say I want to find a toy.
Just a tip: next time you record a video, don't wear anything with green. It looks like there is a hole through you.