I know this is fairly different in format and in content from what I usually make, but I hope you enjoy it anyway! The Go game development stuff is more what I do as a hobby. Video processing in Python is what I do as a job :)
nice
@@johannesritter2053 That's awesome! I'm glad the videos were able to help
I loved the Go game development series. Good to see you're still around
This was a fantastic overview. Well done, to the point, very informative. And the brief overview of video compression CODECs and containers was a nice bonus. Thanks for making this!
Posting this in case anybody else ran into the same error as I did:
If, when trying to install GStreamer, the plugins, and PyGObject on Windows with the command provided in the text version of the tutorial, you get an error like this:
warning: database file for 'ucrt64' does not exist (use '-Sy' to download)
error: failed to prepare transaction (could not find database)
use -Sy instead of the -S from the text version. This worked for me:
pacman -Sy mingw-w64-x86_64-gstreamer mingw-w64-x86_64-gst-devtools mingw-w64-x86_64-gst-plugins-{base,good,bad,ugly} mingw-w64-x86_64-python3 mingw-w64-x86_64-python3-gobject
And thanks for the nice tutorial! I really appreciate the text version you included.
Thank you for the great video! It's so helpful for me. I searched for more than 5 hours about GStreamer in Python, but I think your tutorial is the best document for beginners.
You are absolutely the best! Helped me a lot; I was stuck on configuration and the basics.
One of the best clean, clear, short, and sweet tech videos I've ever seen. Keep it up!
Thank you so much for putting this together! Really helped me get up and running with GStreamer.
Thanks for this GStreamer tutorial. Please post more tutorials about GStreamer customisation.
This video had tons of great content and was super valuable for me. Thank you!
Thank you for making this! Great stuff!
Thank you a lot. I wish the video never ended!
thank you so much for showing how to get started, this helped a ton!
Wow. Thanks for this amazing video.
Excellent video!
Amazing video. Informative and entertaining. Thank you for your work!
Great! I can't wait to try this!
Hello, I'm having issues with importing gi. I keep getting a "No module named 'gi'" error after running "sudo apt install ..." on a Raspberry Pi 3. I was hoping you might have some advice? Also, what would you recommend for real-time streaming on a Pi from a stereo camera, if you've ever dabbled with something like that? 👀
same
Thank you man! This tutorial was fantastic, you should produce more quality content like this :)
I'm glad you enjoyed it 😁
Excellent Video. I highly appreciate your efforts. Thanks a ton
Thanks Tyler, this tutorial is very helpful.
Hello, how did you import the library into the IDE? I have the same code as in the video but I get an ImportError.
Hey! I am getting a NumPy array from a special camera using their API. How can I encode that NumPy array into .png using GStreamer to display it in a GUI? Currently I am using OpenCV (imencode and resize), and it is really slow. I am doing this on a Jetson Xavier.
thanks for this overview😀
Can we pass those samples to a CNN algorithm, like object detection and so on? Thanks!
amazing video. Thanks so much
Hey, I have to apply some detections to the image frames and show the detections in m3u8 format. Can you help me?
Nice video. Thanks for sharing.
A Like from me for the Kami House!
In VS Code, the *`import gi`* line is underlined in blue, and when I hover over it, it says *`Import "gi" could not be resolved`*. Do you happen to have any insight as to why?
If running the code works, my guess would be that VS Code is using the wrong Python interpreter. Try selecting the interpreter in VS Code's settings and changing it to the one installed by MSYS2.
@@Velovix Thank you for replying. Unfortunately that didn't fix my issue, because I only have one version of Python installed. I have also tried reinstalling the library using the pacman command you gave, but still to no avail.
Thanks a lot man! It was absolutely beginner friendly and interesting. I can watch these videos for my whole life. This is so cool.
Can you also tell me how I can send my video to a Janus WebRTC server while taking it from an RTSP server as a live stream?
Thanks. Generally would you recommend using the launch string over the python bindings?
It really depends on what you're doing. Getting Python involved is often necessary if you're trying to integrate video streaming with a larger application. That said, if you can accomplish what you want with gst-launch-1.0, do that! It's much simpler since it does so much for you.
Thank you for a nice introduction to GStreamer! May I ask, suppose I want to extract coordinates per frame and store them in arrays for further data analysis, how do you think GStreamer can help me in achieving that goal?
In what way are you trying to extract these coordinates? If you have your own algorithm, you would probably use an appsink to pull the frames out of GStreamer after they're decoded and process them like normal.
Hey, thanks a lot for this content. I would love to see more videos on GStreamer; there aren't many great videos on this topic. Also, can you share the resources you learned this from? Thanks again.
Thank you for this useful tutorial. I have one question: how can I send frames of a video, which I extract and process with OpenCV, as a video stream to the web (as I understand it, I should convert to HTTP streaming formats like HLS, OGG, or WebRTC)?
I'm sorry to say I don't really have experience with browser compatible streaming protocols. Good luck though!
Awesome video dude
Why do we create a main loop in a different thread? What is its purpose?
The main loop is in charge of doing the work that we ask GStreamer to do. When we start a pipeline to decode a video, all that work is managed by the main loop. The main loop needs an entire thread to itself in order to do that work. We need to run the main loop in a separate thread so we can use our main thread to run the code we write.
Brilliant, thanks!
This is the error that I am getting after running the second line in MSYS2:
error: failed to commit transaction (invalid or corrupted package)
Errors occurred, no packages were upgraded.
Great video, very well explained. May I ask how you would introduce a specific video latency between the recorded image and the displayed image? Would simply changing the value in time.sleep(0.1) be enough?
That sleep is done outside of GStreamer's streaming threads, so it won't have any effect on streaming performance. You should be able to create an artificial delay by adding a "queue" element to your pipeline. The queue element is a buffer that collects video frames, so if you make the queue's buffer large enough, I think it will create a delay. You can use the queue element's "min-threshold-time" parameter to control how large the buffer should be, in nanoseconds.
Thank you for the great video!! I could finally get the overview of gstreamer.
By the way, I was also impressed by the background songs in the video. Could you tell me how I can listen to them as songs? What kind of words do I need to type into Google to find them?
Believe it or not, they're just free-to-use songs from the TH-cam audio library! I don't know where the project files are for this video though, so I don't know their names :(
@@Velovix oh I didn’t know that! Thanks for the information!
Can I use this with the AWS Kinesis Video Streams producer library to trigger it on Linux?
Very good video for studying GStreamer. Thank you for the video.
Do you have a teaching series on using GStreamer for streaming video (e.g., 4K/8K video, adding AI features to detect objects in the pipeline, etc.)?
It would be very cool to make a video about integrating AI with video streaming! It's something I've worked professionally on in the past. I don't have any videos currently though.
The processing of video over 1080p is conceptually the same as 4K and 8K, but hardware accelerated decoding becomes a lot more important. Unfortunately hardware decoding is handled very differently between operating systems and even hardware configurations. That makes it difficult for me to produce a video that is useful to a wide audience.
Sorry, switched some words around. I meant to say that processing 4K and 8K video is conceptually similar to 1080p.
thank you
Cool tutorial, thanks
Can we use a Jetson Nano as a source and see the live video on the screen?
I don't get why you'd write a script instead of using a classic GStreamer command line.
Can you do a video on smart recording with "splitmuxsink", using tee to save video as well as stream over RTSP?
Absolutely understandable explanation. I have a question: how do I pipeline a video stream hosted online using GStreamer, as you mentioned at 5:21?
There are a few ways to host video streams online, but one common way is to use the gst-rtsp-server library. They have a few helpful examples in the repository.
great! Thx
I want to access the webcam when someone opens our website, so that the video stream starts. Please help me figure out how I can do it.
Just draw the rest of the owl
Hi, nice tutorial, by the way. I have a hard time understanding this library. I also have a question. My scenario is that I have a video processing class using cv2 in Python, which receives a frame or video link as input and outputs a processed frame or processed video. I wonder how I can implement this in a GStreamer pipeline. I appreciate any help. Thanks.
I am trying to do a very similar thing, but in Rust. They have a nice compositor implementation example in the gstreamer-rs repo; maybe it will be helpful.
@@kresb I appreciate the suggestion, but unfortunately my program is required to use Python, because I need to utilize a deep learning image processing library which doesn't support Rust yet. Thanks by the way, man.
@@khoai1788 You can just look at the examples and do it in Python. I often reference C/C++ and Python when I write Rust.
Missing the most important part: streaming the video from one system to another over the network. The tutorial should have been about that.
The same script is starting the pipeline as well as gathering the images.
Hello Velovix! Great tutorial! I just have one question: what should I do if I want to use GStreamer to stream video files from a server to a client, with the video's audio as well?
Good question! This is a bit of an involved process and it's more than I can fit into one comment, so I'll give you some hints. Take a look at the gst-rtsp-server library. You give the library a pipeline and it sends the data to clients as RTP packets. Then, on the client side, you'll use the rtspsrc element to receive the stream.
@@Velovix will try that. thank you very much!!!!
How can I change the pipeline to stream a video on my computer by giving a path?
The element you're looking for is `filesrc`. It takes a path as a parameter and provides the pipeline with raw data that can be decoded.
So you mean I should do pipeline = Gst.parse_launch('filesrc ./test.mp4 ! decodebin ...') or something else? Because this code does not work for me.
When you're providing parameters to an element, it's always in the syntax "parameter_name=my_value". So, in this case, your use of filesrc should look like "filesrc location=./test.mp4 ! decodebin ..."
Can you please show us how to build a simple Python RTSP server?
Please do more game development with Go tutorials. There are no other tutorials on Go and SDL2 other than your videos here.
Hi Velovix, I'm working on a Jetson TX1 and I want to stream video from the Jetson TX1 to another PC. How can I use GStreamer for this?
Did you solve this problem?
I am also using a Jetson Nano and I want to stream video from the Jetson Nano to another PC.
If you have solved this problem, kindly guide me.
Hi, does it work with Python 2.7?
It looks like PyGObject does not support Python 2.7. I would definitely recommend finding a way to move to a newer version, since 2.7 is now unsupported by the Python developers.
I am getting an error after I type in the MSYS2 console. Can anyone point out the reason?
Can you post the error that you're getting?
What do you think I should do?
Well, you can do the same with gst-launch; you don't have to code.
That's true, but I wanted to focus on how it's done in Python to give people an on-ramp to automate the process and pull frames out of the pipeline
@@Velovix I see. You've used a busy-loop for fetching frames. Though I've never touched the API, I believe there may be a callback-based API. Correct me if I'm wrong.
Yes, there is! You can attach callbacks to a pipeline to get information on when it changes states. That's what I've done with production code in the past.
My god, the background image is superb! Does someone have a link to download it?
It's cool, right? I found a copy of it here: wallpaperaccess.com/full/4545965.png
Can you teach me how to change the file location?
Cut the file, and paste it somewhere else. The file location is now changed.
If you're asking how to develop outside of MSYS2's directory, it's relatively easy. MSYS2 puts your regular Windows file system under the /c/ directory. So, if you want to develop in a directory called "my_project" under your "My Documents" folder, you can just run the following command in MSYS2 to go to that directory:
cd "/c/Users/{your username}/My Documents/my_project"
In GStreamer, I set the file location where files are created. I want to change the save file every day.
Great video!