It's really a great presentation. Everything suddenly became crystal clear.
Loving the Bristol accent. Cheers drive, lush networking stack mind!
Great presentation, thank you! I also like your humbleness and approach from 0:20 - 0:35.
Kris has to be the best person to learn anything cloud-related from. I had the privilege of learning as part of his team at Oracle.
Someone give this guy an award!
Excellent!!! Amazed at how you can explain this complicated stuff simply. Thanks!
What a genius way of explaining the topic! Thanks
I have a question: do you really need a bridge IP? I mean, can you just forward a packet using iptables to the corresponding veth interface of the pod? Another thing: let's say a pod sends a packet and it is received by the bridge. What happens first in the bridge, ARP resolution or iptables filtering?
Seeing this for free is a blessing. Thanks!
Great video! The demo is very practical and illustrative for network newbies like me!
Great presentation, explains the missing chapters in many kubernetes guides
Thank you. You did an awesome job and helped me understand how to set this up on bare metal. Hats off to you sir.
Great presentation.... it’s the missing chapter in many Kubernetes books
Thank you, I cleared up a lot of stuff here.
Amazing presentation! Thanks a lot!
Thanks, very helpful talk Kristen
Great packaging overview.
Great presentation to understand overlay network
It was amazing. I was really struggling to understand this stuff on my own and couldn't link things together. Now I can. Thanks!
Wonderful Presentation.
Very insightful presentation! Thanks for all the hard work.
excellent explanation, well structured
Clear to the goal, thanks Kristen
Great presentation, perfect demos ! Kudos
Excellent!!! Very detailed presentation
Great, thanks! Awesome presentation!
Very well presented. Thank you.
Amazing Stuff !!
Great, very well explained, thank you.
Can someone help me understand his answer to the UDP question around 23:00? I don't understand where the reliability comes from.
Answering my own question:
So let's say your container is talking to another container over a TCP connection, and the traffic has to go through the TUN device to get there. The connection's reliability is guaranteed by the upper-layer protocol. Suppose our TUN device uses a UDP tunnel and you load a website: your browser uses TCP to connect to port 80 of the server hosting the website. The browser has no idea the connection is going through a UDP tunnel, and neither does the TCP stack. The TCP connection gets established and you get the full advantage of all the features it has to offer. If the lower layer (the UDP stream) drops a packet, the TCP connection established inside it notices and re-requests it.
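To make the point concrete, here is a toy sketch (not real TCP, and entirely invented names) of the same idea: an unreliable lower layer that silently drops packets, with a stop-and-wait upper layer on top that retransmits until each segment is acknowledged. The data arrives intact no matter how lossy the lower layer is.

```python
import random

def lossy_send(deliver, loss_rate, rng):
    """Lower layer (like the UDP tunnel): delivers a packet or silently drops it."""
    def send(pkt):
        if rng.random() >= loss_rate:
            deliver(pkt)
    return send

def reliable_transfer(data, loss_rate=0.3, seed=42):
    """Upper layer (like TCP, vastly simplified): retransmit until each segment is acked."""
    rng = random.Random(seed)
    received = []
    acked = set()

    def deliver(pkt):
        seq, payload = pkt
        if seq == len(received):      # in-order segment: accept it
            received.append(payload)
        acked.add(seq)                # (re)acknowledge

    send = lossy_send(deliver, loss_rate, rng)
    for seq, payload in enumerate(data):
        while seq not in acked:       # keep retransmitting until the ack arrives
            send((seq, payload))
    return received

print(reliable_transfer(["GET", "/index.html", "HTTP/1.1"]))
# → ['GET', '/index.html', 'HTTP/1.1'] — complete despite the 30% drop rate
```

The simplification here is that acknowledgements never get lost; the point is only that reliability lives in the upper layer's retransmission loop, not in the transport underneath it.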
In your 4th and final scenario, the containers are in different IP subnets. That does not play well with the idea of an overlay, which is that containers can move between hosts and retain their IP addresses.
Clear !
Thanks!
Awesome stuff!
Very clear explanation! Can I find the scripts used in the video on GitHub?
AWESOME
Really very nice, neat and informative presentation. I tried to follow a similar approach for one of my sessions but got stuck at a few points, as mentioned below:
What is the 9000 port at 25:23?
I can't see any rule that would route traffic to the tun0 interface at 25:23, but you mention one at 21:17. Am I missing something?
socat adds the route automatically. Any packet destined for 172.16.0.0/16 will be directed to tun0.
@bandisandeep I'm not really sure it adds that route automatically. In my case, I had to add the route explicitly to make it work.
Great talk! When you say multiple nodes (Case 3), do you mean multiple servers, e.g. Cisco UCS? Thanks!
Yes, we can set up multiple such nodes.
Great talk
Excellent
Why no NAT?
Awesome!
Immense thanks for this
Question!
Scenario 2:
You've shown veth and bridge, and mentioned that a veth forwards traffic to another pod's veth by means of the bridge in between.
I understand a Linux bridge operates at layer 2 of the TCP/IP stack, which transports data as frames (as opposed to packets at layer 3), and knows which destination to send a frame to by means of a MAC address (as opposed to the IP address used at layer 3) stored in a database in the bridge. I also understand veth interfaces have MAC addresses. So in this case, when traffic flows from one pod to another, there's no need for a destination IP address. Is my assumption correct? Someone somewhere mentioned that ARP comes in between here (ARP resolves an IP address to a MAC address). I'm not sure whether ARP is actually used here. Could someone clarify this, please?
Refer: wiki.openvz.org/Virtual_Ethernet_device
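The two mechanisms in the question can be sketched side by side. This is a toy model (all addresses, port names, and pod IPs are made up): the sending pod uses ARP once to resolve the destination IP to a MAC, and from then on the bridge forwards purely by destination MAC using its forwarding database, never looking at the IP header.

```python
arp_cache = {}                        # what the sending pod learns via ARP (IP -> MAC)
fdb = {}                              # what the bridge learns from source MACs (MAC -> port)

# hypothetical pods: IP -> (MAC of its veth, bridge port the veth is attached to)
pods = {
    "10.0.0.2": ("aa:bb:cc:00:00:02", "veth1"),
    "10.0.0.3": ("aa:bb:cc:00:00:03", "veth2"),
}

def arp_resolve(dst_ip):
    """ARP: broadcast 'who has dst_ip?', cache the answering MAC."""
    if dst_ip not in arp_cache:
        mac, _port = pods[dst_ip]     # in reality the owner of dst_ip replies
        arp_cache[dst_ip] = mac
    return arp_cache[dst_ip]

def bridge_forward(src_mac, src_port, dst_mac):
    """Bridge: learn the source port, then forward by destination MAC only."""
    fdb[src_mac] = src_port           # learning: remember where src_mac lives
    for _ip, (mac, port) in pods.items():
        if mac == dst_mac:
            return port               # known MAC: send out of exactly one port
    return "flood"                    # unknown MAC: flood to all ports

# Pod 10.0.0.2 sends to 10.0.0.3: ARP happens first, then pure L2 forwarding.
dst_mac = arp_resolve("10.0.0.3")
out_port = bridge_forward("aa:bb:cc:00:00:02", "veth1", dst_mac)
print(out_port)   # → veth2
```

So both statements are true at once: the destination IP is still needed (it is what ARP resolves, and it is in the packet the whole way), but the bridge itself only consults MAC addresses.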
For routing from one pod to another we communicate using IP addresses directly, because from pod to pod there is no network address translation needed.
Here the packet transfer between pods is plain TCP/IP, and the overlay tunnel itself runs over UDP.
Only when a packet is destined for a pod on a different node does it need to go through the tunnel, which is handled by the host's routing or the external network.
I think you're right. Somebody correct me if I'm wrong, but from my understanding, when you have a layer 2 switch (which the Linux virtual bridge acts like), it forwards data based solely on the Ethernet frame. Meaning the bridge won't unwrap the frame any further to look for an IP header or anything. The bridge would deliver the frames to the container correctly, but beyond that you'd have to figure out how to get the data to the process running in the container. The Linux kernel has code that associates TCP/UDP ports with different processes (layer 4). I'm not sure what mechanism there is to associate raw layer 2 data with a process. Maybe nftables.
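That last hop — getting data from the network stack to a process — can be sketched as a lookup table. This is only a conceptual toy (the names and return values are invented, not the kernel's real API): a process binds a port, and incoming segments are demultiplexed by destination port to whichever handler bound it.

```python
listeners = {}                       # port -> handler (stand-in for a bound socket/process)

def bind(port, handler):
    """A 'process' claims a port, like bind()+listen() on a socket."""
    listeners[port] = handler

def demux(dst_port, payload):
    """Layer-4 demultiplexing: deliver a segment to whoever bound its port."""
    handler = listeners.get(dst_port)
    if handler is None:
        return "no listener: RST / ICMP port unreachable"
    return handler(payload)

bind(80, lambda data: f"web server got: {data}")
print(demux(80, "GET /"))            # → web server got: GET /
print(demux(9000, "hello"))          # → no listener: RST / ICMP port unreachable
```

Raw layer-2 frames bypass this table entirely, which is why delivering them to an ordinary process needs something extra (e.g. a packet socket) rather than the port-based demux above.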
Route not Root!!!!!!!
Awesome!