Short and concise. An unexpectedly good video
Exactly what i was looking for. Thank you so much!
why don't you use start_http_server from the prometheus python client docs?
@@zuowang5185 i wanted to serve the metrics on the same port that the server was listening on
start_http_server requires a new port
Great tutorial! How can I aggregate all the metrics if I have more than one worker? Each worker has its own instance of metrics, which leads me to get metrics per worker instead of aggregating all my metrics together.
@@hassanarjmandnia each worker should be a target in the prometheus scrape_config, is the issue that metrics are being overwritten by each worker colliding with the other?
if you need to aggregate the metrics of multiple Prometheus services together, i have a tutorial on "prometheus federation" on this channel
@@evanugarte thank you sir, can you give me a hint on how i can set each worker as a target for prometheus?
i don't know what i should put as targets!
i only have one /metrics route!
and if i set each worker as a target, won't that lead me to have separate metrics for each worker?
the metrics don't get overwritten, but with the setup i have now i only get metrics from the worker that handles the request to /metrics
@@hassanarjmandnia each worker should have a /metrics endpoint
if more than 1 worker is running on a single machine, they should be exposing those metrics on different ports
to have prometheus pull each worker's metrics, list the workers like this
- job_name: 'my-workers'
  static_configs:
    - targets: ['worker1:5000', 'worker2:5001']
an example of the above is here github.com/evanugarte/prometheus-monitoring-tutorial/blob/6a2d98b52c43ea27e23c4e39ecd14a9f56c24d73/prometheus/prometheus.yml#L5
@@evanugarte thanks for the reply, you are very nice sir
but i can not have a port for each worker, because the whole fastapi app we have here is an api for a dashboard!
it is like this
dashboard -> our fastapi app -> db
so i tried to add the worker id as a label in my metrics
but it did not work and solved nothing
Each worker has its own set of metrics, and when Prometheus scrapes the /metrics endpoint, it only sees the metrics from the last worker scraped, not the combined data from all workers.
@@hassanarjmandnia can you send what your prometheus.yml looks like? specifically the scrape_configs section
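For the single-port, multi-worker situation described in this thread, one option documented by the prometheus_client library itself is "multiprocess mode": every worker writes its samples to files in a shared directory, and whichever worker answers the /metrics scrape aggregates all of them. A minimal sketch (the metric name and paths are illustrative, and PROMETHEUS_MULTIPROC_DIR must be set before prometheus_client is imported):

```python
import os
import tempfile

# must be set in the environment before prometheus_client is imported;
# in production this would point at a directory shared by all workers
os.environ["PROMETHEUS_MULTIPROC_DIR"] = tempfile.mkdtemp()

from prometheus_client import CollectorRegistry, Counter, generate_latest
from prometheus_client import multiprocess

# example metric (name is illustrative)
REQUESTS = Counter("dashboard_requests_total", "Requests handled", ["path"])
REQUESTS.labels(path="/api/data").inc()

def metrics() -> bytes:
    # build a fresh registry that merges the sample files written by every
    # worker, so any single worker can serve the combined /metrics payload
    registry = CollectorRegistry()
    multiprocess.MultiProcessCollector(registry)
    return generate_latest(registry)
```

With this in place, Prometheus can keep scraping one /metrics route on one port, and the numbers reflect all workers rather than just the one that happened to handle the scrape.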
thanks a lot man, keep it up
Hello! I have some questions and would like to get in touch by email. Can we maintain this contact?
Hi there, great video. Any ideas on how i can create a distributed system that monitors and optimizes resource utilization across multiple nodes in a network?
i would use cadvisor on each "node", and then run a single prometheus container with a yaml file that lists each cadvisor instance as a target to scrape metrics from
there is a different video on this channel on how to use cadvisor with prometheus
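The setup described above (one Prometheus instance scraping a cadvisor on every node) might look like this in prometheus.yml; the hostnames are placeholders, and port 8080 is cadvisor's default:

```yaml
scrape_configs:
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['node1:8080', 'node2:8080', 'node3:8080']
```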
@@evanugarte thanks for your reply, how can i reach you aside the comment section?
Any chance?
on my github is my email, it can be found with the below guide. we can discuss through email if that's ok
www.nymeria.io/blog/how-to-manually-find-email-addresses-for-github-users
hi, if you have his email can you let him know i need his help?
Thank you, this example is better than the documentation
Github link?
better than docs damn.
thank you so much