OMG; everyone else drones on, and doesn't really do anything. Your presentation is top-notch and pragmatic. I appreciate you!
Really good. Simplest theoretical explanation and then actual implementation.
Solid knowledge and excellent teaching skills. Kudos to you for a crisp and clear explanation 😉
Thanks, Alekh!! Glad that you liked it. Keep learning and keep sharing 🙂
Outstanding!!!! Thank you.
Swarm looks quite simple to me now, thank you Akash 😄
Where do you get such machines, with the IPs that you are accessing in the terminal?
Simply super, bro. It's easy to understand, and it's helpful from an interview point of view.
Thanks, Satheesh!! This kind of review pushes me to work more and make more such content. Thanks again :)
Very nicely explained. Thank you, Akash.
I have no words... simply awesome... excellent.
Such a good explanation, but I have a couple of questions about the beginning. What are the nodes that you started the video with? One "master" and two "workers". What are those containers? Jenkins? Something else?
Thanks for this video. It was easy to understand 🙂
Thanks. Glad you liked it.
Hi. My very first question is: where is the environment you log in to at the beginning, at 3:35? How are the machines built? How do you set the IPs? I know I have little info on it, but there's a lack of explanation on that part.
Thanks for the question, but this is beyond the scope of this video. Setting up the Linux boxes and handling networking is a prerequisite for this, and I may cover that in the future once I get time.
superb explanation!!!!
How are volumes / storage managed? What if there was data on the node that was drained? Would that be replicated as well?
Swarm itself does not do anything special with volumes; it applies whatever volume mount you specify on the node where the container is running. If your volume mount is local to that node, then your data will be saved locally on that node. There is no built-in functionality to replicate data between nodes automatically.
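For example, here is a rough sketch (the service name, volume name, and image are placeholders I picked, not anything from the video) of a service created with a local named volume:

docker service create \
  --name jenkins \
  --mount type=volume,source=jenkins_home,target=/var/jenkins_home \
  jenkins/jenkins:lts
# The "jenkins_home" volume is created independently on whichever node runs the task.
# If Swarm reschedules the task to another node, a fresh, empty "jenkins_home" volume
# is created there; the old data stays behind on the original node.

If you need the data to follow the container, you would typically use shared storage (for example an NFS mount available on every node) or a volume plugin, which is outside what Swarm does by itself.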
Hi, nice teaching.
I have installed Elasticsearch and Kibana on the master and the slave.
How do we set this up with HA? Can you please guide me?
How can we deploy code changes to Docker Swarm manually?
But right now, you are inside a master node and creating all the applications. If the Docker Swarm cluster is somewhere else, will I be able to use it from my laptop?
i.e., for Kubernetes we use kubectl to log in, right?
Hi Akash, I have seen your video and it was a clear-cut explanation, but my doubt is: how do I set up PostgreSQL streaming replication across different servers with HA?
Great content! Keep it up Akash
Thanks for your video.
How can we add a deleted (worker) node back to the Docker Swarm node list? Let's say we forgot to save the Docker Swarm join token required to add nodes to the list.
Thanks
Just run docker swarm join-token worker on the Docker Swarm master machine. This command will print out the join command with the generated token to run on the new node or the removed node. You don't need to remember any token ID as such, since you can retrieve it from the manager whenever you need to add a node.
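A quick sketch of the flow (the token and IP below are placeholders, not real values):

# On the manager node:
docker swarm join-token worker
# It prints something like:
#   docker swarm join --token SWMTKN-1-<token> <manager-ip>:2377

# On the node you want to (re)add, run the printed command:
docker swarm join --token SWMTKN-1-<token> <manager-ip>:2377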
Hi, I need help with creating an Elasticsearch cluster in Docker Swarm across 4 hosts. Can you please guide me with an example?
Hi Akash, what if my manager node goes down? Will my worker node be promoted to a manager? If yes, then which IP should I use to access my application? Will there be a virtual IP? If not, how will it help me achieve HA?
Very good video. Can Docker Swarm manage nodes for load balancing? I mean by setting a priority, so that when one node is drained/down it will be handled by a specific node. I really want to know the answer. Thank you.
The load balancer will be able to manage the service with the nodes that are available, even if any of the nodes are drained.
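As a rough example of how that plays out (the node and service names here are placeholders):

docker node update --availability drain <node-name>    # stop scheduling tasks on this node and move its running tasks away
docker service ps <service-name>                        # the tasks now show as running on the remaining active nodes
docker node update --availability active <node-name>   # bring the node back into scheduling when it is healthy again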
Great demo Akash :)
Do I have to know any programming language for devops???
@wintherace108 Hi Himadri,
A combination of Python, Golang, YAML, jq/yq, and shell is recommended; these are the must-have scripting languages currently. You can manage most things with Python, but having knowledge of Golang will keep you ahead in the market.
Hi Akash, I am using 1 master and 2 worker nodes. When I drain the master, I am not able to access the Jenkins server using the public IP of the master, even though containers are running on the worker nodes. Can you please help me out with this ASAP?
That's a good question, Sree Kanth:
If the Jenkins server is running as a service in the Docker Swarm, you can list the service's tasks to find the node running the Jenkins container, and then look up that node's IP. You can use the following command to do this:
docker service ps <jenkins-service-name>
Replace <jenkins-service-name> with the name of your Jenkins service. This command lists the tasks of the service, including the node where the Jenkins container is running. You can then use the following command to find the IP of that node:
docker node inspect --format '{{ .Status.Addr }}' <node-name>
Replace <node-name> with the node name you obtained from the previous command. This will give you the IP address of the node running the Jenkins container, which you can use to access the Jenkins server.
Hope this helps. Let me know if the issue still exists. Happy Hacking and have a nice weekend.
@TechShareChannel Thanks for the reply, I will try it today. All the best, Akash.
If possible, please make an in-depth tutorial on HAProxy on an Ubuntu machine. Thanks.
Noted!
Do I have to know any programming language for devops???
Shell scripting is a MUST, but in certain cases shell alone won't be able to handle everything, hence an additional scripting language like Python is also required.
Easy to understand.
Awesome explanation.