Coming back to this video in 2023 and just being glad that there's 95%+ support for WebP and Safari supports it too
This is incredibly informative! I knew quite a lot about image compression and graphical formats, but knowing the mechanics of what is behind it will really help me to plan ahead and get the most out of the images I edit and graphics I create. Cheers!
Been missing you two, glad to see you well.
I've just seen them for the first time. They work really well together. I hope they do more around image compression going forward. Considering the impact images have on page load times, and therefore user experience, this type of content is very important. Surma and Jake are ideal evangelists. Google, we need more of this.
One caveat about the advice of "don't zoom in": if you're not on a retina display, it may still be worth zooming in 2-3 times as much as you want it displayed on the site, to ensure it will still look crisp on high-DPI devices.
Yeah, things get tricky if you don't have a representative device to test on. But I'd still say that 'zoomed out to display size' is more representative than zoomed in. By zooming in you're making the artefacts physically larger than they'll actually appear to the user, which makes them easier to see.
Fallen in love with squooshing - thanks to the creators for sharing the intent, how it was conceived, and how to get the best out of it. It's getting 80% of the job done with 20% of the effort, without becoming a coding or Photoshop expert. Also had fun watching this quirky webinar.
Finally a video that goes in depth! Thank you!
Huge thanks from a zoomer! Understood, I'll stop zooming in on my images now...
I've been campaigning for years for developers to reduce their image sizes, and this video actually has all you need to understand how compression works :D
Image compression is important for UX because it reduces loading time, so users are happier. But it also reduces the bandwidth consumed, which means less electricity used to carry the data, less server CPU load, so less server cooling; less disk space used, so less need to build hundreds of datacenters just to hold useless data, and so forth.
Of course, a single simple website isn't going to bring about major change at that scale.
But who knows? What if every frontend developer were really picky about image file size? Would it reduce the average bandwidth consumed daily at world scale?
Maybe not, if users keep consuming this many videos - which leads me to think that all the effort I've put into image compression 'til now has been wasted by me just watching a 1080p 60fps video about image compression. I would love the same kind of video for video compression, by the way.
Also that AVIF format is reeeaaaaally interesting.
Highly informative and obviously professionally researched. I can imagine the amount of work that went to this.
This was a fantastic video. Clear, and love the examples! Thanks!
Squoosh could export data to train an AI; I do believe someone has built it by now.
One of the most interesting videos in a while. Definitely the best of 2020 so far. Awesome work, Jake!
Loved the blogpost as well. Kudos!
15:10 "Don't put your noze to the screen" 👀😅 how did they know?
because they are GOOGLE 😂
Am I seeing HTTP 203??
Not quite :P
that was my first thought
Squoosh is so good. How do you batch-process several images in Squoosh?
Amazing video guys as always. HAPPY THIS TIIIIME!!!
Missed you Jake
Many revelations, thanx. WebP, Safari, Photoshop. I was thinking I'm good at image compression :)
This is great! Very spacial.
My old boss would complain about the slightest compression artifact, insisted that whatever encoder Photoshop uses is the best... and finally wanted transparency on photos for the web... so all one could do was a PNG-24 save from Photoshop with many, many colours. (Some of this was before AVIF and other formats that support transparency with lossy compression.)
Great talk, impressive results, thank you very much.
But how do we get rid of JPEG/GIF etc. in the near future? All the frontend folks are used to doing it the way they've done it for decades, because it works for them. So why would they change?
Making all these image compression decisions by hand is painful if you have to decide for every single image. I think the only way to change this "I've done it this way for years" behaviour is to fully automate it - so it is actually a benefit to use it. This could be implemented server-side via a CDN, CMS or HTTP server module, or as a Photoshop plugin (WebPShop), etc.
Are there any developments/experiments in this direction - so that once implemented there are fewer steps to think of? Just like: "upload that high-res RAW image, the server will serve what's best for the client".
(It's like getting every HTTP connection TLS-secured. Unthinkable a few years ago, until the LetsEncrypt folks automated it (and you browser guys kinda forced it).)
Cloudflare can convert images to WebP.
Indeed, we do this in the EWWW Image Optimizer plugin for WordPress sites. Automation really makes WebP accessible for folks who don't have the time to fiddle around with every single image!
8:20 I noticed the noisy circle first though xD
anyways great talk
Thank you for the video... learnt so much from you guys and have really improved my skills and understanding. Thank you!
I don't know why, but I was able to see the noise first, before the circle in the sky LOL
I think it's more obvious when it's scaled down vs full size
Jake Archibald I was watching it at full size when I saw that, though.
@@TsunaSawada26 hah you've just got good eyes then! It was tricky trying to come up with the right image for that bit - trying to find something that would work on most people but still be kinda obvious when it's pointed out. I tested it on about 10 people before I used it, and yeah, one of them also saw the noise first.
Jake Archibald nice nice, I'm not trying to argue, but the hilarious part is that I am wearing eyeglasses 🤣🤣🤣 Maybe because when I looked at the picture, I focused on the weird part, which is that noise 🤔🤔
Happy next time
Hi, when scaling down images, sharpness is lost.
A large image with a 25% quality setting is sharper than a small image with a 75% quality setting.
However, I'm afraid that my Google PageSpeed score and SEO rating will go down (as PageSpeed prefers correctly sized images).
Safari now seems to support WebP. JPEG XL, based on FLIF and FUIF, is something I'm hoping for. It could be the best of both lossy and lossless.
Do more. I like the topics you discuss
Really great, thanks!
This is the Google Chrome dev dream team :->
Love your discussion 😃
Why don't Google employees use Chromebooks? Everyone uses a MacBook, even at the summits. And they tell others to use Chromebooks. A dagger in the heart of marketing.
I use a Chromebook for general browsing, and to watch things like Netflix on the go. However, I use a Mac as my development machine, as you can see in this video. I don't think that 'stabs' any particular marketing strategy.
@@jakearchibald I can't believe you replied. I'm shocked. Anyways, recently I'm taking great advantage of your work. Thank you man.
Super cool info. Thanks a lot for Squoosh. It does a lot better than Photoshop.
Amusing... out of all the non-203 videos, I accidentally picked the most 203-est video ever.
@Jake, What’s your take on animated WebP?
Personally, I haven't really found the need for it, or animated PNG. If I need animation I use a video.
Great talk as always. The Zoomer joke kinda threw me off cuz it was so unexpected. Unrelated question tho, are you planning on updating PROXX in the future? Cuz there are some improvements that can be made to make the game more enjoyable (shameless self-plug: I submitted a PR with a couple of changes that I thought would be useful lmao, so check that out if you want to)
31:02 yeah definitely!
Guys, what are your thoughts on Cloudinary and similar services in terms of image compression? The odd couple videos rock.
Is there a way to automatically do this, such as for user-uploaded content?
We have an eCommerce site with hundreds of product photos. Obviously, we can't hand-tweak each photo. Currently, we upload them as PNG files. We have a class that generates JPEG files in various sizes for the site to use (and caches them). What is a better solution for this situation?
You can use an open-source lib for WebP, or a caching service like Cloudflare, etc. Also, I would convert them to JPEG first, then send them off to the server for thumbnails, etc.
@@RickBeacham we're using GD from PHP. We're serving WebP when the browser accepts it, then falling back to JPEGs.
@@billtuttle7855 How are you falling back? Using HTML?
@@RickBeacham We are looking at the Accept header sent by the client's browser, and only sending WebP if it specifically lists image/webp.
@@billtuttle7855 Thanks for the info! 100%
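For anyone wondering what that Accept-header fallback looks like in practice, here's a minimal sketch (Node/Express rather than the PHP/GD setup described above; the route and the pre-generated .webp/.jpg file pairs are assumptions for illustration):

```ts
// Minimal sketch of Accept-header content negotiation for images.
// Assumes each image exists as a pre-generated .webp/.jpg pair on disk.
import express from "express";
import { existsSync } from "fs";
import { join } from "path";

const app = express();
const IMAGE_DIR = join(__dirname, "images");

app.get("/img/:name", (req, res) => {
  const base = req.params.name.replace(/\.[^.]+$/, ""); // strip extension
  const wantsWebP = req.headers.accept?.includes("image/webp") ?? false;
  const webpPath = join(IMAGE_DIR, `${base}.webp`);

  // Vary on Accept so shared caches don't hand WebP to browsers that can't decode it.
  res.setHeader("Vary", "Accept");

  if (wantsWebP && existsSync(webpPath)) {
    res.type("image/webp").sendFile(webpPath);
  } else {
    res.type("image/jpeg").sendFile(join(IMAGE_DIR, `${base}.jpg`));
  }
});

app.listen(3000);
```

The `Vary: Accept` header matters here: without it, a CDN or proxy could cache the WebP response and serve it to a browser that only asked for JPEG.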
Excited about WebP - a year ago I tested it, but browser support was too varied.
That's like singular value decomposition?
How do we do the compression in real-world applications? Is there an open-source algorithm I can add to my script, where I just pass my image as the argument to a function and a WebP image is returned? Or do I have to use some library?
Squoosh uses mozjpeg, cwebp, and oxipng. You can get command line versions of all of these.
@@jakearchibald Awesome. I'm sold on webp watching all these videos and after testing it myself. Was wondering if there was a simple way to include it client-side in my svelte project.
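As a rough illustration of the command-line route Jake mentions, here's a hedged sketch of shelling out to cwebp from a Node script. It assumes cwebp (from libwebp) is installed and on your PATH; the file names are just examples:

```ts
// Sketch: convert an image to WebP by invoking the cwebp CLI.
import { execFile } from "child_process";
import { promisify } from "util";

const execFileAsync = promisify(execFile);

// Wraps `cwebp -q <quality> <input> -o <output>`.
async function toWebP(input: string, output: string, quality = 75): Promise<void> {
  await execFileAsync("cwebp", ["-q", String(quality), input, "-o", output]);
}

toWebP("photo.jpg", "photo.webp").catch(console.error);
```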
Brilliant!
Real users are going to zoom in like that. This is the digital world, where people rely on eCommerce for personal and professional procurement. WebP looks absolutely horrible and destroys brand perception for eCommerce. One would think that Google would come up with something better/other than this, and not degrade one's web page score because they don't use your "make-my-picture-95+%-worse-looking" applications. Appreciated effort. Unappreciated name calling and total misuse of brand identity. Fail, Google.
Some users will zoom in, but does that mean you should serve all images at 4000x4000 just in case, making things slow for everyone? I don't think so. We have the tools to detect when users pinch-zoom (the visual viewport API), and that seems like a better cue to load in higher quality, higher resolution imagery. I don't agree with your statements around WebP, but as I've pointed out in another comment, we have AVIF now, which performs even better.
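A minimal sketch of that pinch-zoom cue using the Visual Viewport API - the element ID and the hi-res URL are hypothetical:

```ts
// When the user pinch-zooms (visual viewport scale > 1),
// swap in a higher-resolution version of the image, once.
const img = document.querySelector<HTMLImageElement>("#product-photo");

function maybeUpgrade(): void {
  if (!img || !window.visualViewport) return;
  if (window.visualViewport.scale > 1 && !img.dataset.upgraded) {
    img.src = "/images/product-4000.jpg"; // hypothetical hi-res asset
    img.dataset.upgraded = "true";
  }
}

// The visual viewport fires 'resize' as the user pinch-zooms.
window.visualViewport?.addEventListener("resize", maybeUpgrade);
```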
Is this open-source? I'd love to add something like this to my market company site. It's a bit of a hassle, and some people always seem to forget the compression step. I understand the compression aspect no problem, but the application of this on the front end is crazy. I'd almost have to guess that it is hitting a backend API on the preview side in order for it to compress and show you the data. Otherwise, I'd really love to know how the data was calculated. I've struggled to find a way to do that on the front end. Is mozJPEG an API compression service? Is that how this works? I have so many questions. Somebody help lol
Yep, it's open source. There's a GitHub link on the site.
@@jakearchibald oh wow, thanks man! This is really cool. I'm excited to see how y'all did this, so I'll have to check it out. At first I'd have guessed maybe it was serverless, but I fell into a WebAssembly rabbit hole after watching this yesterday, and this seems like the perfect application for it. I'm hyped to find out. Thanks again for this, man. Great work from you and your team.
I like how Surma is much easier to compress than Jake
Lots of zeros compress well
OMG Jake it was so awesome! :)
Does WebP apply all this image processing to every frame of an animation?
I think it also uses previous and following frames to compress individual frames even further.
"WebP supports both lossy and lossless compression; in fact, a single animation can combine lossy and lossless frames. "
From the section *"How do I use libwebp with C#?"*
scroll down below the code for the headline "Why should I use animated WebP?
Advantages of animated WebP compared to animated GIF?"
developers.google.com/speed/webp/faq
Near-lossless WebP coding can be a more powerful and generic option than going for a 256-color palette. One needs to run it with a value of 40-80 (actually there are only three settings in that range: 40, 60 and 80). Values below 40 don't really work in a way that makes sense.
I definitely encourage folks to give it a try, but I'm not often happy with the results. E.g. try it with the Squoosh logo on squoosh.app - it creates 'halos' around some of the edges which are pretty noticeable. It might work for some images though.
I gave it a spin with Steve's 'team' image, and with slight loss at max the output is 24.5k. The visual looks fine, but still much larger than the palette reduced version. But yeah, it might work for other images.
@@jakearchibald In general the 'slight loss' is less aggressive at normal settings than palette loss. However, at maximal settings I suspect that "slight loss" is overly aggressive. Do you know how the slight loss maps to cwebp's -near_lossless values?
@@jyz slight loss = 100 - near_lossless
@@jakearchibald Then run at slight loss = 40 or slight loss = 60. But yes, palette reduction will give more compression, though with less predictable quality.
Is that a Joycon presenter I spotted? I must have details
Yes
Has anyone made the mistake of downloading a .webp image and thinking it was an HTML render and not the real image?
This might be the only reason I haven't made the switch yet - it just doesn't feel like a file ending for an image. Hilarious how resilient the mind can be to change sometimes! 😂
13:40 When he didn't even consider IE as a browser😅
"modern browser"
The fact that the Squoosh stuff is possible is nice, but it's not practical. What we need is an npm package and/or webpack plugin that automates this process. Manually doing each image is not practical.
Right, but you can use Squoosh to find the optimal settings for the type of image you're expecting. Having said that, a lot of images, especially those related to site design, don't often change, so manual tweaking there is fine.
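One possible shape for that automation, sketched with the third-party sharp npm package (not an official Squoosh tool; the directories and quality value are examples - the idea is to find settings in Squoosh once, then bake them into the build):

```ts
// Build-step sketch: batch-convert a folder of PNGs/JPEGs to WebP.
import sharp from "sharp";
import { promises as fs } from "fs";
import { join, parse } from "path";

async function compressAll(srcDir: string, outDir: string): Promise<void> {
  await fs.mkdir(outDir, { recursive: true });
  for (const file of await fs.readdir(srcDir)) {
    if (!/\.(png|jpe?g)$/i.test(file)) continue;
    const out = join(outDir, `${parse(file).name}.webp`);
    // Quality chosen by hand in Squoosh for this class of image, then reused.
    await sharp(join(srcDir, file)).webp({ quality: 75 }).toFile(out);
  }
}

compressAll("src/images", "dist/images").catch(console.error);
```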
Send JPEG images with half the size of your WebP images; maybe then they will switch.
"Okay Zoomer" triggered Google Assistant lol
Guys - can Squoosh have some form of help or explanation? I don't need the science - I've Googled for that. Just a few lines explaining the sliders and tick-boxes.
We're planning to add that
nice to see that DK26 made it into HTTP203.. kind of xD
As a graphic designer, WebP is horrible. Now clients send their logos as WebP for print haha
me: googles how to spell spatial
Just as Chrome adds AVIF support you promote WebP...
We're planning to add AVIF to Squoosh, along with WebPv2. We'll probably do a video about that too.
@@jakearchibald can we get a link to more info about WebPv2? It's difficult to find just searching for that.
@@cjjuszczak there isn't really anything public yet
@@jakearchibald ah, that explains it, can you tell us when it will be public ?
@@cjjuszczak there isn't a date set, sorry
This for video formats
😭 pretty plz
17:08 😂
Using the same WebP container for both lossy and lossless content is in my opinion a terrible design decision. It's obfuscating information that's very important to a lot of people for no good reason, making it very cumbersome to find what you're looking for when you're interested in just one of the two.
That slight loss option makes it even worse. Imagine looking for lossless images and finding a WebP and figuring out that it's with the lossless codec, but then it's actually not lossless at all and you have no way of knowing until you start working with it and zoom in or whatever (which may be very important for the use case).
Doesn't PNG have the same problem? If you find a PNG, you don't know if the palette was reduced before it was encoded. I also kinda wish that WebP and WebP-lossless were kept separate, but then there's a long history of this with audio and video codecs. "wav", "mp4", "mkv", "avi" - all of these container formats can contain various different codecs.
Yes, of course you can never prevent lossy preprocessing, so the responsibility to keep things clear and separated falls on the encoders. If you select that you want lossless as a base, then there really shouldn't be any secondary configuration that makes your lossless compression... well, not lossless anymore.
The only time I could see it actually being relevant is if you for some reason only can decode the lossless codec and still want lossy compression. That seems far-fetched though.
they are fun haha
7:00 8:50 10:25 that was cool
BRING BACK HTTP203!
It's back!