I'm not sure if anybody ever mentioned this, but it's really nice that you use full commands with expanded arguments when you could have just prepped some aliases and used abbreviated arguments. This shows your commitment to teaching, to didactics, instead of trying to be more efficient. Kudos to you for that! 🙂
I do my best to always use full commands (e.g., `kubectl` instead of `k`), full arguments (e.g., `--namespace` instead of `-n`), and to avoid any tools that are not strictly necessary for the subject at hand. As a matter of fact, many of the things you see in the videos are not things I normally use. For example, videos are recorded in a standalone terminal with Bash, while I normally use the terminal built into VS Code with Zsh.
Thank you for noticing that. You are the first one who made such a comment.
I feel those are the things people do not notice directly but that, hopefully, give them a better and easier experience without knowing why. It's a kind of improvement without a reward :)
@@DevOpsToolkit here's some more detailed feedback: you have a great style of presenting your ideas, and it contributes both to your audience's learning and to your own success. Some very good points in the way you present your videos are that you're patient, calm, and friendly. That sets up an "environment" that's very welcoming to the audience without being overwhelming (of course, if people do like me and keep jumping between parts of the video, they might get overwhelmed sometimes... hahaha). Regarding bad points, I honestly don't have any. I like both the form and the content of what you publish here, and I think you usually gravitate toward very good DevOps tools (I'm a big fan of the Argo tools myself; I used Workflows to implement an ETL at a past job, and it worked amazingly well).
Anyway, I hope you get tons of sponsors here to keep producing great content 🙂
The best thing about your videos is your passionate way of explaining. A joy to watch!
Hey Viktor, I am using Tekton Triggers and was not aware that Argo Events cover a similar area. Your video and explanation are very clear and easy to understand. We are waiting for your Argo Workflows video (+1). Thanks for all your support; it's always good to follow you and Darin.
Tekton Triggers are mostly designed around the goal of triggering Tekton pipelines from webhooks. Argo Events, on the other hand, are meant to connect many different event sources with quite a few triggers. They are more generic, with a much wider scope. Which one makes more sense depends on the use case. If all you need is to trigger a Tekton pipeline from a Git webhook, Tekton Triggers are the better option. On the other hand, if you might want to go beyond that, Argo Events are probably the better choice. For example, you might want to create an event when a Kubernetes Deployment is created, updated, or deleted, and use a trigger that sends a notification to Slack and, also, updates a database, pushes a change to Git, and so on.
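As a rough sketch of that last scenario, an Argo Events `resource` EventSource can watch Deployments, and a Sensor can react with a Slack trigger. The names, namespace, and Secret reference below are hypothetical; check the Argo Events docs for the exact schema of your version:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: deployments
spec:
  resource:
    deployment-changes:
      # Watch Deployments in a (hypothetical) production namespace
      namespace: production
      group: apps
      version: v1
      resource: deployments
      eventTypes:
        - ADD
        - UPDATE
        - DELETE
---
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: notify-slack
spec:
  dependencies:
    - name: deployment-dep
      eventSourceName: deployments
      eventName: deployment-changes
  triggers:
    - template:
        name: slack-trigger
        slack:
          channel: ops
          # Secret "slack-secret" with key "token" is assumed to exist
          slackToken:
            name: slack-secret
            key: token
          message: A Deployment changed in the production namespace.
```

A second trigger template under `triggers` (e.g., an HTTP trigger calling a service that updates a database or pushes to Git) is how the same event would fan out to multiple actions.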
OMG, this is so helpful. What a great find your channel is. Please keep pushing more content. Thanks!
I'm convinced this decoupled, event-based architecture is the only way to build. Mimic biology. I'd love to hear a podcast or some theory on the topic! Keep up the good work.
We recorded a podcast episode on that topic. I think it will be released next week or the week after on www.devopsparadox.com/
Thank you so much for making these videos. Very clear and easy to follow. Your teaching style is excellent. No nonsense.
how about submitting a PR to add this video to the Argo Events readme?
I'm ahead of you :). I already created the PR (github.com/argoproj/argo-events/pull/1044) a few hours ago, and it was merged. I guess it will appear in the "Community Blogs and Presentations" (argoproj.github.io/argo-events/#community-blogs-and-presentations) section soon.
Is that a good place? Anywhere else I could add it?
@@DevOpsToolkit do you have any idea how I could send a file in my request that would be used as an input to the first task of the workflow?
@@lafiadavid7400 I never tried that. Is there a specific reason why you'd like to "attach" a file to the request instead of storing that file somewhere from where it could be retrieved by the pipeline?
@@DevOpsToolkit nope, I simply didn't want to bother retrieving it inside, actually.
Thanks for the answer
Is this the first video I must watch to learn ArgoCD? Thank you for doing this.
It's not :( Watch th-cam.com/video/vpWQeoaiRM4/w-d-xo.html first and, from there on, watch them in the order they were released. This one assumes some prerequisite knowledge of Argo CD.
@@DevOpsToolkit Thank you!
Thanks for the clear explanation! Helps a lot!
Thanks a lot for the video, it's suuuuuper interesting, and I'd appreciate it very much if you did a video about Workflows. Keep it up!
You're reading my mind :) Argo Workflows video is coming next week (probably on Thursday).
Excellent video Viktor, thanks a lot 🙂
Nice and concise explanation. Thx
Very Nice video. Thank you so much for this :)
Amazing demo
You don't need to type `clear` to clear the terminal -- just hit CTRL-L. You can also do this while typing a command, and it will clear the screen and leave what you've already typed on the command line.
I corrected that a while ago. It's been Cmd+K for a while now :)
Even after 3 years, this video is still valid and well explained. Thank you for your effort in making it. Your enthusiasm for technology is inspiring.
One query from this video: I don't see the event bus used here. Is it optional?
Event bus is mandatory. You might be able to replace it with a different bus, but I haven't tried that.
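For reference, the default event bus in Argo Events is NATS-backed, and EventSources and Sensors connect to the bus named `default` in their namespace unless configured otherwise. A minimal manifest, roughly following the upstream example, looks like this:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventBus
metadata:
  # EventSources and Sensors look for the bus named "default"
  name: default
spec:
  nats:
    native:
      replicas: 3  # minimal HA setup
      auth: token  # token-based auth between clients and the bus
```

Persistence, anti-affinity, and resource limits can be layered on top; the exact options depend on the Argo Events version.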
Please make more Argo Workflows CI/CD machine-learning pipeline videos 🙏🏾🔥
One thing I do not get: with the base installation of Argo Events, you are already deploying the event bus. At 7:30, why do you need to install the event bus again?
To be honest, I recorded that video a while ago and forgot the details. Let me check it out and get back to you.
@@DevOpsToolkit really appreciate you looking into this. Your videos have been a treasure trove of information for me recently.
This is the first video on this subject I've seen that goes into a good amount of detail about the subject matter. Thanks so much for clarifying these points. I wanted to ask what the observability story is for Events. If an event is captured by an EventSource, how do we know that it has been relayed to the EventBus, and from there that a Sensor captures it and successfully invokes a trigger? I guess the pod logs would be of help here, but what should we be looking for? Is there a 'friendlier' abstraction or UI that can help?
Ideally, Argo Events would produce Kubernetes events that we could collect like we do from most other Kubernetes resources. That is not the case, and that is one of the most disappointing parts of Argo Events. On the bright side, logs are available and can be collected.
As a side note, I'm working on a video that will use, among other tools, argo events. It should go live in a month (more or less).
Looking forward to seeing it! Thanks for the insight 🚀
Thanks a lot for your videos! Very helpful! Any chance of showing how to trigger Argo Workflows when files are uploaded to MinIO?
Adding it to my TODO list... :)
Great explanation and great job! One concern about the claim that this is 100% decoupled: that is not really true, because if the structure of the JSON changes, it will break your consumer or, worse, show wrong data.
Great job! I'd like to see you at conferences. I like watching your videos.
Sometimes you are drawing on your screen. Which app are you using for that?
A few months ago, I switched to presentify.compzets.com/ for drawing on the screen, using a cheap but great Huion tablet.
What about performance? I mean, it's really useful as a decoupled, asynchronous, event-triggered orchestration. However, let's say I create events whenever someone hits a GET operation on a related endpoint; if we start triggering workflows that take some time, we may get stuck with performance problems. It's really good for short-lived tasks, though.
It all depends on how you define events. It can be an issue if you trigger events for everything or if you are monitoring everything. The Kubernetes API is already producing events, so it mostly depends on what you subscribe to.
So yes. If you trigger an event for every GET, it will not end up well.
Thank you, Viktor, for your teaching! I have taken your DevOps Udemy course and learned a lot from it. I have a question regarding this video; I hope you can advise. Following this video, I tried to deploy the Sensor and EventSource in another namespace instead of argo-events. First, I deployed another event bus and service account in that namespace, and the curl was successful (I also saw in the Argo UI that an event source and sensor connection were created). However, no Kubernetes pod was created (everything works if I deploy in the argo-events namespace). Do you have any advice on what is missing to deploy in another namespace, or on how I should troubleshoot? Thanks in advance!
Can you send me a gist to reproduce it and take a look?
Thanks!
Thanks a ton, Cristiano.
@@DevOpsToolkit keep going you're a great instructor
👏
I want to allow my customers to self-onboard and create their own production environments via a web interface. I'm looking to automate the spin-up of a new environment triggered by a user action. This is perfect; however, I'd need to pass many business-logic-related arguments to the environment, maybe as environment variables (maybe as a payload?). I can achieve that simply by creating a new Git repo with the new environment's manifests, which Argo will monitor and then apply in the cluster. I can also use Argo Events as you showcased. Do you think Events are more suitable here, or is there a better way? Thanks
I don't think there is a need for events in that case. As long as you push changes to manifests to a Git repo, Argo CD should do the rest.
Do you have a video on Argo Workflows?
I don't have it on argo workflows specifically. I do have one that combines all argo projects, including workflows.
th-cam.com/video/XNXJtxkUKeY/w-d-xo.html
Hi Viktor! Thanks a lot for your videos! Could you give some insight into how to trigger Argo Workflows when files are uploaded to S3 in IBM Cloud?
I haven't been using IBM Cloud much, so I cannot answer that specific question. What I can say is that Tekton alone does not have much support for triggers and events. I would probably go with Argo Events for that part.
I want to send a message in slack when we update the revision version on argocd, can this be done by using argo events?
Is github.com/argoproj-labs/argocd-notifications what you're looking for?
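As a rough sketch of how argocd-notifications wires a trigger to Slack, configuration lives in the `argocd-notifications-cm` ConfigMap. The trigger and template names below are hypothetical; see the project's docs for the exact keys supported by your version:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  # Slack integration; $slack-token refers to a key in the notifications secret
  service.slack: |
    token: $slack-token
  # Fire when an Application sync succeeds
  trigger.on-deployed: |
    - when: app.status.operationState.phase in ['Succeeded']
      send: [app-deployed]
  # Message template for the trigger above
  template.app-deployed: |
    message: Application {{.app.metadata.name}} synced to revision {{.app.status.sync.revision}}.
```

An Application then subscribes via an annotation along the lines of `notifications.argoproj.io/subscribe.on-deployed.slack: <channel>`.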
What are you using to write on the board?
I record the talking head, screen, and audio separately and overlay them during editing.
Actually, what I just said is how I do it now. Older videos, like this one, were done through OBS, and I was writing on the screen with a tablet.
@@DevOpsToolkit Thanks, man!
1:07 You're a subscriber! (You missed a promotion op.)
That's a good one :)
Yeah
Let me come straight to the point: I need to take a date input during an Argo Workflow so I can pass that date into a program, and a report will be generated based on it. Is it possible to send a date as user input during the flow?
Yes it is.
argoproj.github.io/argo-workflows/workflow-inputs/
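As a minimal sketch of that page's approach, a Workflow can declare a `date` parameter that is overridden at submit time. The image and template names here are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: report-
spec:
  entrypoint: report
  arguments:
    parameters:
      - name: date
        value: "2023-01-01"  # default; override when submitting
  templates:
    - name: report
      inputs:
        parameters:
          - name: date  # bound from spec.arguments
      container:
        image: alpine:3.18
        command: [sh, -c]
        args: ["echo generating report for {{inputs.parameters.date}}"]
```

Submitting with `argo submit workflow.yaml --parameter date=2024-06-01` replaces the default, so the report program receives the user-supplied date.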
Can we pass a file in the request that will be used as an input in the workflow?
I would rather opt for storing a file in some storage and reading it from there.
@@DevOpsToolkit have you ever tried implementing a generic EventSource server with the gRPC protocol?
@@lafiadavid7400 I haven't done that myself (others I worked with did) :(
Ok thanks
@@DevOpsToolkit could I have an example or a link to a Git repo showing how they did it, or do you intend to make a video about it?
Has anyone managed to integrate Argo Events in a real cluster without the need for port forwarding? (I am talking about triggering an event through an ingress external IP.)
There is an example of using Ingress with events in th-cam.com/video/XNXJtxkUKeY/w-d-xo.html.
@@DevOpsToolkit Thanks! I watched that video at least 3 times in the last month, but it seems I didn't have the knowledge to understand it. Anyway, my problem was as follows: I was using GKE, and it seems that, when deploying an EventSource, in the end, GKE adds a few more labels. My problem was that I was not adding all the labels to my Service, so there were no deployments for that Service => the Load Balancer returned 502.
I'll make sure to watch the video again, I'm sure I'll learn a lot more now that I did my own research :)
@@bogdan_angh Do you mean something like github.com/vfarcic/argo-combined-demo/blob/master/argo-events/base/event-sources.yaml? That one creates an Ingress for the EventSource. The service it references is a combination of the EventSource name (`github`) and the suffix EventSource adds to the service it creates automatically (`eventsource-svc`).
Is that what you were looking for?
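A sketch of such an Ingress, following the naming convention described above (EventSource name plus the `-eventsource-svc` suffix); the host and port are hypothetical and must match your EventSource's webhook configuration:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: github-webhook
  namespace: argo-events
spec:
  rules:
    - host: events.example.com  # hypothetical external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                # <eventsource-name>-eventsource-svc, created automatically
                name: github-eventsource-svc
                port:
                  number: 12000  # must match the webhook port in the EventSource
```

With this in place, the webhook provider can call the Ingress host directly, and no port forwarding is needed.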
@@DevOpsToolkit yes, something like that. In my work I removed "spec.service" from EventSource and created a separate NodePort service which points to the EventSource deployment through selectors. But I'll try your method as well, thanks a lot!