Thank You Christian!
Man, it's my first comment on YouTube. I really love your videos. I'm a beginner, and whenever I have a problem, your channel is my first choice. Keep going!
Thank you so much! I'm happy that you enjoy the channel. 🤗
One thing should be mentioned in any case: if I store the secret as an environment variable in the deployment, I can access that value in the running container via the terminal with printenv or env. There, too, the values are in plain text. So if a potential attacker gets access to the container, they can easily read the password for the database 🙂
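This is easy to demonstrate even without a cluster, since an environment variable is always plain text to any process that can read its environment. Inside a pod you'd see the same thing with `kubectl exec` (pod and variable names here are hypothetical):

```shell
# Locally: env vars are visible in plain text to the process.
export DB_PASSWORD='s3cr3t-password'
printenv DB_PASSWORD        # prints: s3cr3t-password

# In a cluster (hypothetical pod/variable names):
#   kubectl exec my-app-pod -- printenv DB_PASSWORD
```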
AMAZING VIDEO!
Glad you think so!
Looking forward to the reverse proxy and ingress bits :)
Thanks! I hope you'll like it ;)
Thanks a lot , very informative
You're welcome 😀
You are awesome.
You are!
absolute champion ❤ BTW that's not how you say opaque, but it was just hilarious 😂
Haha! Thanks mate :D
In case you missed 18:47 ... Base64 is not encryption, it is only an encoding. It does nothing to protect the password!
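To make that point concrete, here's the round trip in plain shell: anyone who can read a Secret's base64 value can recover the original in one command.

```shell
# Base64 is a reversible encoding, not encryption.
encoded=$(printf 'supersecret' | base64)
echo "$encoded"                       # c3VwZXJzZWNyZXQ=
printf '%s' "$encoded" | base64 -d    # decodes right back to: supersecret
```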
Thanks for sharing
B-E-A-Utiful!
I configured my ConfigMaps, and they work perfectly with my env values from Vue. But I'm trying to get these values in the frontend pod... I'm not able to do it....
Is there any extra config?
Thanks a lot for your videos.
Do you have any tutorial on creating a high-availability Kubernetes cluster?
Opaque is said like "Oh-payk" :)
Yeah I realized it when looking it up after the recording 😄
Hi Chris, I have a question about the Kubernetes ClusterIP service as a single network point that other pods can reach internally. Where does its IP exist if I define one on my cluster, and how does a request travel from an external pod to the service to retrieve data or whatever? I think the virtual IP address for the service exists on the master and not the worker nodes, since a worker node can go down and the service is still maintained: the request from the pod goes to the master, which determines the service endpoint and routes the request to that IP. I'm just saying what I think would logically happen; any clarification or correction would be really appreciated. Thanks for the content.
The network layer is handled on each node by the kube-proxy service, so the virtual IP doesn't live on the master. In the legacy user-space mode, kube-proxy installs iptables rules that capture traffic to the Service's ClusterIP and redirect it to a proxy port, which then proxies to a backend Pod; in the default iptables mode, the rules rewrite the traffic directly to a backend Pod. Hope that makes sense.
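For reference, a minimal ClusterIP Service sketch (the app name and ports are made up for illustration); kube-proxy on every node programs the rules that send traffic for the virtual IP to a ready backend Pod:

```yaml
# Hypothetical names/ports; 'type: ClusterIP' is also the default.
apiVersion: v1
kind: Service
metadata:
  name: my-backend
spec:
  type: ClusterIP
  selector:
    app: my-backend        # endpoints = ready Pods matching this label
  ports:
    - port: 80             # port on the virtual ClusterIP
      targetPort: 8080     # container port on the Pods
```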
Awesome thanks so much🙂🙏
How do you mount a .crt file as a secret? Can you please show?
You need to import it into a secret: "kubectl create secret generic my-secret --from-file=config"
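For the mounting part of the question, a sketch of exposing that Secret as a file inside a container (pod name, image, and mount path are all hypothetical):

```yaml
# Mount the Secret created above as files in the container.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: certs
          mountPath: /etc/certs   # each Secret key appears as a file here
          readOnly: true
  volumes:
    - name: certs
      secret:
        secretName: my-secret
```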
You can also pre-base64-encode the secret string and put that in the secret.yaml file. That way the secret is not stored in plain text in the YAML file itself.
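A sketch of what that looks like (the name, key, and value are made up); note that `data:` expects base64-encoded values, and the encoding is still reversible by anyone who can read the manifest:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  DB_PASSWORD: c3VwZXJzZWNyZXQ=   # base64 of "supersecret" (encoded, not encrypted)
```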
opaque == "Oh" - "Pake!" (rhymes with "Cake").