Hi Vikas, this looks great 🙌
I had a few questions:
1) What framework are you using to launch the browser instance?
2) Where do you store the auth state, and how do you share it between different Docker containers?
3) If you are taking the median of five reports to generate Core Web Vitals, how do you decide which of those 5 HTML reports to publish to S3?
Thanks!
1. Using Puppeteer to launch the browser and log in before auditing the given URL; documentation here - github.com/GoogleChrome/lighthouse/blob/main/docs/puppeteer.md
2. In the config file, each URL has its own credentials, so Lighthouse can audit each URL independently. But how do we decide which URLs get audited in which container? Let's assume we have 10 URLs in the config and we want to audit 5 in parallel; we then pass a shard index to Docker when running it (`docker run lighthouse-ci --shard=1/5`), so shard 1 picks the first 2 URLs, shard 2 the next 2, and so on.
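The shard-selection logic described above can be sketched roughly like this (a minimal illustration, not the actual implementation; `pickUrlsForShard` is a hypothetical name, and it assumes `--shard=<index>/<total>` is 1-based):

```javascript
// Given the full URL list from the config and a shard argument like "1/5",
// return the slice of URLs this container should audit.
function pickUrlsForShard(urls, shardArg) {
  const [index, total] = shardArg.split('/').map(Number);
  const perShard = Math.ceil(urls.length / total); // URLs per container
  const start = (index - 1) * perShard;
  return urls.slice(start, start + perShard);
}

// With 10 URLs and 5 shards, shard 1 gets the first 2 URLs:
const urls = ['u1', 'u2', 'u3', 'u4', 'u5', 'u6', 'u7', 'u8', 'u9', 'u10'];
console.log(pickUrlsForShard(urls, '1/5')); // ['u1', 'u2']
```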
3. Nice question! We don't pick each web vital based on its own median value; instead, we pick the complete JSON/HTML report whose `interactive` (Time to Interactive) value is the median across the 5 runs.
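Picking the median report might look something like this (a simplified sketch, assuming 5 runs and reading the `interactive` audit's `numericValue` from each Lighthouse result; `pickMedianRun` is a hypothetical helper, not a Lighthouse API):

```javascript
// From N Lighthouse run results, return the one whose `interactive`
// value is the median; that whole report is what gets published to S3.
function pickMedianRun(runs) {
  const sorted = [...runs].sort(
    (a, b) => a.audits.interactive.numericValue - b.audits.interactive.numericValue
  );
  return sorted[Math.floor(sorted.length / 2)]; // middle element for odd N
}

// Example with 5 fake runs (TTI in milliseconds):
const runs = [4200, 3900, 4100, 4500, 4000].map((tti, i) => ({
  id: i,
  audits: { interactive: { numericValue: tti } },
}));
console.log(pickMedianRun(runs).audits.interactive.numericValue); // 4100
```

Sorting and taking the middle element keeps all the web vitals in the published report internally consistent, since they all come from the same run.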