"Waifu" is pronounced like "wife-ooh". Because this Japanese anime term does actually originally derive from the English word "wife". Basically, some more obsessive anime fans - or "weebs", if we were being more derisive about them - like to declare sexy anime babes to be their fictional wives, or at the very least, are declaring their favourite sexy anime babes to be of what they consider "wife material". Yes, someone is definitely having some silly fun naming their AI software "waifu" like this. But I guess we shouldn't be surprised, as open source loves a ridiculous name. Like "GIMP" or using recursive acronyms. So, yes, why not name an AI upscaler to be your fictional anime "wife"? It's an open source project. Such things are strangely "normal" in that realm. But, yeah, thought I'd give you the pronunciation and also the explanation of what it is, that might also avoid you having "waifu" typed into your Google search history for others to see and wonder what on Earth you've been getting up to. Well, unless you're into that sort of thing. No judgements.
Thanks Chris. Had been looking to install some AI video upscaling software and hadn't heard of the free Waifu2x-Extension-GUI before. Will give it a go. Hopefully it runs on Win 7. Topaz AI requires Win 10/11, but AVCLabs Video Enhancer AI will run on Win 7 apparently. PS It's a Peacock butterfly, not a Red Admiral!
The ultimate test would be to take some of the digitised Standard 8 cine footage (originally shot by my grandfather) and compare the faces of my relations to those shot on 6cm x 6cm 120 roll film of the same age and source, to see if the faces are still recognisable. But I'd need a version that runs on Linux Mint with an AMD graphics card, targeting 4K resolution. Medium format film is still the best quality if you are constrained by not having a high five-figure or low six-figure budget for the sensor, especially with a slow film in the 25-40 ASA range. It is why it was always used for outdoor advertising photographs (studio photographs used plate or sheet film that was even larger) intended for enlarging to fill posters the size of a house, which came on several rolls to be pasted up, and why the still camera chosen to go to the Moon was a Hasselblad - or rather two of them for each landing mission (and they are still there - free to the first person who can collect them). Any decent Linux equivalents?
The Waifu2x looked the best to me, although it is quite slow to process. Do these have server-side versions that can be used sort of like a render farm?
Thank you Chris for the real-time estimation at the end. I have debated whether AI upscaling is even worth it long term, given the time to encode and the overall benefit beyond some added clarity. I can only think of a few reasons why I would upscale video with this type of software. #1 would be if the video quality was very poor, not interlaced, and there were no sources for higher-quality versions; I have done that with some old westerns, and with the right settings (a full day of encoding or more...) the picture quality is remarkable, although faces still look a bit warped, similar to DALL-E's early versions. #2 would be a business-related video that needed to be enhanced. AI upscaling can tell the difference between a face and a rock and whatnot. Don't expect miracles: if you convert 240p to 8K (I know it's silly, but go with it), it will not look normal whatsoever. But do it anyway... it's a lot of fun. I might play with Waifu in the future, but I'm still running a GTX 970 and my card hates me for all this upscaling....
Very interesting. You mention being able to do this in real time at the end; the Nvidia Shield claims to be able to do this already, though only from 720p up to 4K, so I guess lower starting resolutions can't be too far away.
I would just like to say that I found (on a much earlier version of Topaz Enhancer AI) that the suggested settings were literally *never* the best ones in any of the videos I used it on. I was using it on mostly impossible-to-find music videos and tv shows originally from VHS, and it took maybe a couple days of experimentation to get results that didn't have a lot of major-to-minor artifacting, or that ended up with less sharp results. I can tell by the UI here that they've changed a lot since then, so maybe that has changed.
I've already done a few video upscales with a different method. Topaz also has the Gigapixel AI software, which can be used to batch-upscale pictures. So I extract video frames as PNG images using ffmpeg and upscale them in Gigapixel AI. Then I again use ffmpeg to assemble the images into the upscaled video. It works really well and is reasonably fast, with my Radeon RX 570 8GB taking 2-6 seconds per frame upscaling from under 1280x720 to full HD. The resulting files look better than anything I've seen in this video, though as someone else pointed out, you had CRF set to 33 in the Video Enhance AI UI, which caused the resulting video to have a low bitrate. The higher the CRF, the more compressed the video will be.
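For anyone who wants to try the same round trip, here is a minimal Python sketch driving ffmpeg via subprocess. The file names, frame rate and CRF are assumptions for illustration, and step 2 (the actual AI upscaling of the PNG frames) is whatever batch tool you prefer, such as the Gigapixel AI workflow described above.

```python
# Sketch of the frames-out / frames-in workflow described above.
# Assumes ffmpeg is on the PATH; paths, FPS and CRF are illustrative.
import os
import subprocess

SRC = "input.mov"   # hypothetical source clip
FPS = 25            # must match the source frame rate

os.makedirs("frames", exist_ok=True)

# 1. Extract every frame as a numbered PNG.
subprocess.run(["ffmpeg", "-i", SRC, "frames/%06d.png"], check=True)

# 2. Batch-upscale frames/ with your tool of choice (e.g. Gigapixel AI),
#    writing results to upscaled/ with the same numbering.

# 3. Reassemble the upscaled frames, copying the original audio if present.
subprocess.run([
    "ffmpeg", "-framerate", str(FPS), "-i", "upscaled/%06d.png",
    "-i", SRC,                        # second input: source file, for its audio
    "-map", "0:v", "-map", "1:a?",    # video from the frames, optional audio from source
    "-c:v", "libx264", "-crf", "17", "-pix_fmt", "yuv420p",
    "upscaled_output.mp4",
], check=True)
```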
I just found Upscayl, a Linux-first cross-platform AI upscaler. I'm surprised you didn't include it. I'm on ArcoLinux running i3-gaps, and I only had to install the AMD Vulkan driver. As an avid shutterbug of a quarter century, I'd say the results are mixed, but with additional tweaking it can produce clear improvements. I went from 1024x768 to HD, and I'm pleased with the results. Also, there's GIMP, which I prefer by comparison.
My new graphics card has AI tensor cores for AI upscaling of games, and I tried it out of curiosity, and it works amazingly well. If it weren't for the odd bit of oddness in the sky, I probably never would have realised I was running at 720p with upscaling on. If I had believed the advertising (I knew about it, but I'd used enough very ugly upscalers to be more than slightly sceptical, and had no way to test without buying) I could've saved £50 by buying a lower-tier card.
Curiously, the DLSS2 AI doesn't actually try to invent details, and primarily doesn't even work on pixel data (the visible parts of the image). It's the same as TAA (temporal anti-aliasing), but instead of reconstructing the image at the source resolution, it reconstructs it at a higher resolution. A frame-to-frame camera offset with a magnitude of less than 1 pixel is injected into the renderer, forcing the game's renderer, over the course of several successive frames, to reveal details that are renderable but would normally be hidden between the pixels. Motion vector data from the game's renderer is used to forward-propagate the rendering of previous frames to the current frame to align them. At a higher resolution this would, however, reveal an ambiguity between detail revealed and detail obscured by pixels, which with naive blending results in excessive blurring and ghosting. So the tensor-core-accelerated AI produces a coverage mask, which tells which of the old pixels are still valid and to what amount; as inputs it receives the depth buffer and the motion vectors. So really it accelerates a known but slow function, and this is also why the training is application-agnostic: they can just generate random shapes and motion vectors to explore the complete useful space of the occlusion mask function during training.
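The jitter-and-accumulate principle is easy to demonstrate outside a game engine. Below is a toy numpy sketch (not DLSS, just the underlying idea): sampling the same scene at low resolution with sub-pixel jitter over many frames, and accumulating the samples in a higher-resolution buffer, recovers detail that no single low-res frame contains. All names and sizes here are made up for illustration.

```python
# Toy illustration: sub-pixel jittered low-res sampling accumulated at
# high resolution. Purely a numpy sketch, not an implementation of DLSS.
import numpy as np

def scene(x, y):
    # A "renderable" ground truth with detail too fine for the low-res grid.
    return np.sin(300 * x) * np.cos(300 * y)

LOW, HI = 32, 128            # low-res render size vs reconstruction size
acc = np.zeros((HI, HI))     # accumulation buffer at the higher resolution
hits = np.zeros((HI, HI))

rng = np.random.default_rng(0)
for _ in range(64):                        # 64 jittered low-res "frames"
    jx, jy = rng.random(2) / LOW           # sub-pixel camera offset
    ys, xs = np.meshgrid(np.arange(LOW), np.arange(LOW), indexing="ij")
    u, v = xs / LOW + jx, ys / LOW + jy    # sample positions in [0, 1)
    samples = scene(u, v)
    # Scatter each sample into the high-res pixel it falls inside.
    hx = (u * HI).astype(int) % HI
    hy = (v * HI).astype(int) % HI
    np.add.at(acc, (hy, hx), samples)
    np.add.at(hits, (hy, hx), 1)

recon = np.divide(acc, hits, out=np.zeros_like(acc), where=hits > 0)
# 'recon' now resolves detail that a single 32x32 frame cannot represent.
```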
Mind you Chris, this technology is already available in real-time in many current games (Nvidia calls it DLSS). I think it requires the latest generation of GPUs though.
Found a big mistake: in the Topaz test you chose a very high compression rate, resulting in very compressed video quality. Just look at the file size - the 1080p upscale is 7 times smaller than the original file. The description says that values below 17 result in better image quality; you chose 33.
Waifu: a word jokingly created from the Japanese pronunciation of "wife" (why-fu), which has come to refer to cute girls drawn in the style of Japanese animation. That aside, I wonder how it would look if you upscaled past your target resolution via ML and then scaled down to it normally, as opposed to just straight upscaling to your target resolution. It was a common technique when we just didn't have enough pixels in games to make good anti-aliasing solutions, and I wonder if the technique couldn't get a second lease on life in this area.
Several models are trained at 4x but I find that's often way too much, especially when enhancing still images. So I let the AI upscale at its native size then scale back down to a preferred size with Python. It helps for some images where the AI seems to try too hard.
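For what it's worth, the scale-back-down step can be a couple of lines with Pillow. Here is a minimal sketch where "upscaled.png" stands in for whatever a 4x model produced; the file names and the 2x final target are hypothetical.

```python
# Minimal sketch of the downscale step described above, using Pillow.
from PIL import Image

img = Image.open("upscaled.png")            # hypothetical 4x model output
target = (img.width // 2, img.height // 2)  # e.g. settle for 2x overall
# Lanczos resampling gives a clean downscale and tames over-sharpening.
img.resize(target, Image.LANCZOS).save("final.png")
```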
If only they could release a version for my brain so I can see things better 😎 Anyway, it's nice to see Windows Media Player still rolling 💯🚀 Thanks Chris
I have a couple of boatloads of old video tapes - 8mm and interlaced SD as well as old VHS tapes. I foresee a bleak future involving capturing and processing it all, whilst acquiring blood clots in my legs from immobility. (I just CAN'T walk away while it processes.) So I'll have you to thank when I croak from a pulmonary embolism! ; 0 (Actually, I'm immensely enthused - at the opportunity to bring old memories to vivid life! Thank you!!)
I feel it comes down to personal preference for the most part. AVCLabs looks too smeared out for me, like a painting. It's ironic, because I thought Waifu would end up looking more smeared, considering it's based on what was originally meant for animation and not real-life images. I've used Waifu for upscaling images before and it didn't look good on regular pictures last time I used it, but that was years ago.
I think this kind of software definitely has its uses currently. It's commonly discussed, but I don't believe it should be used for restoring older films, games and TV shows, at least at this moment in time, since it can only really approximate details that were never present in the source, which can lead to inaccuracies and smeary results. For example, I tried a painting through Topaz once and the final result ended up looking more like a pencil drawing. I've always seen it as more of a tribute than a restoration, personally. It is seriously impressive tech nevertheless. I'm also ready to eat my words in 10 years' time when it becomes almost impossible to tell 😄
You can tune it to whatever outcome you want, there is a large model zoo to choose from, trained for different purposes. Conservative models just remove compression and video transmission artefacts and result in a fairly soft output, which is still cleaner and nicer to look at. Lack of artificial sharpness is what you want if you don't want to introduce hallucinations into the footage. But it is a fundamental trade-off. Also you still have to preserve the originals to re-process them when better methods become available. Or maybe the next person to process it will just achieve better outcome with the same methods available today, it's always possible as well.
Worth noting that in Topaz, Artemis Low, Medium and High do not refer to the quality of the output, but to the quality of the input video. So for your 360p video you should have used Low or Medium. The difference is notable.
Thank you for this introduction to this technology. Maybe I will try it out on my converted VHS videos.
"Minimum configurations" in Windows applications translates as "Won't actually crash, but you'd better not be in a hurry for any results".
:)
I bought and used Topaz as a tinkering amateur and really appreciate knowing what other options have become viable in the past few years. TYVM!
I love this channel on YouTube that upscales old videos of people in various situations in the past, mainly driving in big cities. It's weird at the same time, with the uncanny valley creeping... in. It feels a little like looking at footage through a space alien's eyes.
Imagine a future where we are all skeptical of details in videos because we suspect they come from AI instead of being captured in the real world
Very true. This and related software raises big issues.
I'd say that time is already here
@@BigWhoopZH Last year, an imposter video of Tom Cruise, the film actor, was notable and newsworthy for how closely it resembled the real person. Kind regards.
It's now. There are mobile phones that claim 10x or more camera zoom with AI. Basically, if you shoot the moon, you get an incredibly detailed moon that resembles a telescope shot, but it is not the image from your lens! 😮 The image is rebuilt by an AI that 'knows' exactly how the moon appears from Earth, and based on some shooting parameters, it just recreates it (like CGI) 😁
@@ExplainingComputers The concept in The Capture series on the BBC was quite interesting. Too bad season 2 was atrocious.
Great content as always. I particularly like your keeping this at the layman's level - easy to understand. Such video enhancement offers unlimited possibilities for non-professional videographers, with two of the application packages providing guided steps through recommended settings. Obviously, the free-for-private-use version has the most appealing price, but lacks that guided conversion setup. All three products work, and even the most expensive offers a perpetual licence for a fixed entry price. Thanks so much for sharing, I learned a lot this morning.
I can't speak to the other programs here, but I know the output you get with Topaz Labs' Video Enhance AI generally varies IMMENSELY based on both the quality of the input video and the choice of AI model. The Artemis HQ model used in this comparison is one of the safer options if you don't want to spend much time fiddling with settings, as some of the other models tend to apply more aggressive processing that can result in output that looks rather artificial. But my own go-to model is Proteus, as it gives the user some additional settings to dial in exactly how much processing gets applied. It takes a bit of effort on the user's part, but the better results are generally worth it IMO.
Something interesting for photos is the technology astrophysicists use to sharpen pictures of distant stars/galaxies, which also works for regular pictures. The premise is that blurry pictures actually do contain the pixel data for a sharp picture, but it's not in the right place. When you know the exact characteristics of the lens, and know or can work out what the camera settings were, you can put the pixels back in the right place.
In the old days we used a sharpen filter, which doesn't produce very good results.
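The technique being described is deconvolution. Below is a minimal sketch using scikit-image's Richardson-Lucy implementation; the Gaussian PSF and the sample image are assumptions for illustration, and a real lens PSF would be measured or modelled.

```python
# Deconvolution sketch: if you know the point spread function (PSF) of the
# blur, Richardson-Lucy iteratively "puts the pixels back". No AI involved,
# just the physics of the blur. Assumes a recent scikit-image.
import numpy as np
from scipy.signal import convolve2d
from skimage import data, restoration

image = data.camera() / 255.0              # sample grayscale image in [0, 1]

# Fake a known blur: a small Gaussian PSF (a stand-in for a real lens PSF).
x = np.arange(-3, 4)
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 4.0)
psf /= psf.sum()
blurred = convolve2d(image, psf, mode="same", boundary="symm")

# Recover a sharper estimate from the blurred image plus the PSF.
sharp = restoration.richardson_lucy(blurred, psf, num_iter=30)
```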
I did not know such software existed and will try it with some of my older family movies, including some transferred from VHS. But I am also going to keep any original video copies because it sounds like this technology will only get better, and I bet it always will be best to start with the original when trying a new piece of future upscaling software. Thanks so much for another excellent video!
Amazing, the TV show fantasy of infinite surveillance camera resolution might come true. Usually on a series called something like CSI:Crackpot, the detective says “enhance it … enhance it” about ten times until the originally grainy image finally reveals the reflection of the perpetrator in the corner of a shiny butter dish. They weren‘t full of baloney, they were just ahead of their time!
Maybe, but it can't invent resolution or detail that doesn't exist in the source.
E.g. extrapolating butterfly wing detail is one thing... applying the same logic to a human face wouldn't be admissible, unless a picturesque jumble of angular shapes when you zoom in is what the person looks like.
@@Tyneras It does something close to recognising what a butterfly wing is, finding other examples of a butterfly wing and filling in the missing detail.
Works great for butterflies, not so much for human faces.
Hate to argue, but they (CSI: Wherever) were *_definitely_* "full of baloney". Like _satsuke_ said, you can't make _resolution out of detail that doesn't exist._ We can guess that a few dots here and there should make a line or whatever, but you can't get a sharp 1920x1080 image from a random 20x10 pixel image. Not now and not in the future. We can guess that these 10 dots are the same colour as our suspect's face and those 3 darker dots could be his hair, but we could _match_ it to 1,000,000 other people as well.
@@satsuke Some new tools actually replace textures from built-in texture libraries, so the "enhanced" image is really a collage of similar patterns and shapes stitched back together to look like what was being enhanced. The original image is gone, replaced with a copy - more like a hi-res painting by an artist, and less like a photograph being copied and made bigger.
@@Tyneras That's the flaw: it makes something new that looks similar. But if it's, say, a home movie, would you want all the details of family members replaced with mere copies that don't even look right? They will look like a person, but not someone you know.
With my poor vision I would like to have a pair of AI enhanced glasses. The software you reviewed was impressive. Looking forward to your next video!
Greetings Perry. :)
I can certainly see the benefits of increasing AI in video production - restoration of old classic episodes of Dr Who, enhancement for law enforcement, and maybe even giving Chris a rest by providing us with a Max Headroom-inspired Mr Barnatt :-) voicing his lines, all the while lying in the sunny Bahamas :-)
Thanks again for some great content... much appreciated!
LOL at Chris as the new Max Headroom :D :D :D
And now I'm wondering if some "remasters" are really just this sort of processing. Well, no matter if we get a clearer image of those old shows for our new screens.
Here's another idea; a Vtuber model of Chris!
Now, let's g-g-g-go and take... a closer lo-o-o-o-k.
@@Reziac yes, some "remasters" are exactly that - that is how it was even before AI and all that - sometimes there just was no other option.
In the case of movies or TV shows shot onto real film, you can re-scan the film at a higher resolution, as film tends to have very high resolution - technically, the "pixel" size of film is how big a grain of the photosensitive material is AFAIK.
But many TV shows have been shot straight to magnetic tape and if it has been recorded in SDTV resolution, then that is all you will ever get.
If that is the case, the only thing that can be done is add another layer of post processing to the original material and hope it does not look too terrible on the "remastered in HD" re-release.
@@KenjiUmino I remember when some TV shows were shot partly on film and partly on tape, and even on a grainy old TV there was a quality difference. (Colors looked different, for one thing.) We're lucky to have such good tech today!
Over my head Mr. Barnatt. I don't do video production...Still, every video I watch from you, I ALWAYS learn something new!! Keep up the good work!
Always enjoy your videos. They tend to be both very educational and rather relaxing.
I have Topaz and it has a lot of different AI models for different input types, such as interlaced video. So to be honest a more in-depth test with a variety of input sources of varying qualities would be fairer. VHS, Hi-8, DVD and film.
Sir, you are a genius, but in case someone else hasn't already said, the butterfly is a Peacock, rather than a Red Admiral. Always love watching your films.
Thanks for this -- I stand corrected!
@@ExplainingComputers On another issue, I have just bought 4 identical Dell OptiPlex 3070 PCs to upgrade the machines in my office. They aren't brand new, but they aren't old either. They are high-spec machines and run like lightning. They had all been factory reset and were running Win 11 Pro. Two launched perfectly, but the other two were stuck in a loop with the OOBEEULA error. I spent 5 hours trying to resolve it. I even contacted Microsoft, who couldn't really help. In the end, I was forced to install Windows 10 over the top and then upgrade to the Win 11 Home version. I did this on device #3. Device #4 is just stuck with the error for now. This specific error is so difficult to deal with, particularly when you can't even get past startup. Many others have had the same issue. If you knew how to resolve this, it might be a topic for another video perhaps? (Have now ordered OEM backup disks from the USA.)
I think the out-of-focus areas looked much more natural in the Topaz output than the AVCLabs output. They were all blotchy in the latter and that would be a deal-breaker for me.
And here we used to laugh at television dramas where the Good Guys would scale up blurry video to reveal the required dramatic image!!
I'd love to see more of this sort of comparison of Useful Software that we might not normally see.
Hi Chris, thank you for an interesting & instructional video. As you say, the future may bring better refinements in the technology available. The software that stood out for me was Waifu2x; being able to upscale not only low-res video but still images is of interest to me! Like you, I've got an archive of video & still images that would definitely benefit from this. Yes, the interface does seem a bit clunky, but it wouldn't take long to get to grips with it. :)
One thing that got my attention was the difference in file sizes before/after using the tools: the original low-res 640x480 clip had the highest size (21 MB), while the upscaled/improved versions were notably smaller, with Topaz creating a 1080p clip file at under 4 MB.
It has to do with the export file type. The MOV was the original video and the programs all exported MP4. An MP4 is technically a compressed file while MOV is uncompressed.
@@garrettloughran2761 Gotta nitpick a bit here: .MOV is a container just like .AVI, .MKV, .MP4 and .M4V. These files can contain video and audio streams using a number of different codecs; some containers have stricter rules as to which codecs are officially allowed.
.MP4 containers will usually have AAC audio and either MPEG-4, AVC/H.264 or HEVC/H.265 video, while containers like .MKV are more flexible (common combinations are MPEG-4, H.264, H.265 or even MPEG-2 (DVD) video with MP3, AAC or AC3 audio, along with a couple of subtitle tracks... but you can also go HuffYUV video + FLAC audio if you want to go lossless to preserve every bit of quality - MKV can take it).
.MOV is somewhere in between, and can have a number of video & audio streams encoded with a selection of different codecs at different compression rates.
What I want to say is: just looking at the file extension (.mov, .mp4, .mkv) does not tell you much about how compressed or uncompressed something is.
You've got to inspect it with a tool (GSpot, VLC) to know for sure.
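To that point, here's one quick way to peek inside a container from a script. This assumes ffprobe (which ships with ffmpeg) is installed; the file name is hypothetical.

```python
# Ask ffprobe which codecs each stream in a container actually uses.
import subprocess

out = subprocess.run(
    ["ffprobe", "-v", "error",
     "-show_entries", "stream=codec_type,codec_name",
     "-of", "default=noprint_wrappers=1",
     "butterfly.mov"],                      # hypothetical input file
    capture_output=True, text=True, check=True,
).stdout
print(out)  # e.g. codec_name=h264 / codec_type=video, codec_name=aac / ...
```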
Topaz created a low file size because he used a Constant Rate Factor of 33, which introduced a huge amount of compression. I've used Topaz to upscale many of my DVDs to 1080p and use a CRF of around 16-17. The file sizes are much larger, but everything is crisp. I then colour correct with DaVinci Resolve and try to get a more manageable file size. He also used the Artemis AI model, which doesn't always give the best results. Proteus Fine Tune is the way to go if you want everything to look great.
I suggest you check kraz3yivan's channel. He uses Topaz to upscale Star Trek: Deep Space Nine footage. You can see what kind of quality you can get with Topaz and DaVinci Resolve.
I must say I completely disagree with the final verdict - to me the AVCLabs result looks like a moving painting. Of course, it's all very subjective. Grateful that you mentioned Waifu2x - didn't know about it. Great video as always!
Chris, Thank you so much for the work you do with this Channel. It is an essential part of my every Sunday.
Thanks Daniel. :)
One tiny correction- it's a Peacock butterfly not a Red Admiral
I watched almost all of the video thinking "he's talking rubbish, the AI versions look the same as the upscaled one". Then I checked the resolution and found I was watching in 360p! Switching to 1080p made an amazing difference! I don't know if my most powerful PC is up to it, but it would be interesting to see what it can do with my Hi-8 and Digital-8 camcorder videos.
This made me smile! This is indeed one video that has to be watched at 1080p. :)
lol
Greetings! Here I am, back again, with another comment 🙂 Thanks for this great video. I tried some of the upscaling AI tools on some old webcam videos that my kids shot of themselves many years ago in 640x480 at 15fps. The result is really decent, not only upscaled, but also noise reduced. Amazing!
Great to hear of your success.
Useful knowledge without having to do all the "shovelling". Thanks for sharing.
It's always interesting to see footage from the late 19th/early 20th century upscaled. Something quite eerie about it, almost like watching ghosts. Perhaps because that era has moved out of living memory.
Yeah, coincidentally I just watched an upscaled/colourised 1940 California video right before I saw this pop up in my notifications. Amazing stuff, although the colours are off. But with how quickly things are moving in AI/graphics, just a few more papers down the line and we'll have movies of the dinosaurs. I'm half joking, but AI will figure something out with minimal data; there's a ton of bone data, so maybe AI can reconstruct the dinosaurs better than our artists can, at the very least. And with more data it could perhaps rewind all of life's genome as we watch it morph/evolve from its single ancestor. The Two Minute Papers channel covers a lot of the AI and graphics advancements going on right now, and they seem to outdo previous work yearly, if not monthly now.
I hate that, esp the fake colour. Just leave it alone.
@@g-r-a-e-m-e- I both agree and disagree. I don't think you should mess with art - a movie, for example. But with historical footage, say a WW1 battle, it can make it more relatable to modern audiences. Like all technology, there's a time and a place.
A.I. generated imagery is creating quite a storm in the creative industry currently Christopher. Would be worth covering some of the big ones (Midjourney etc.)
I second this as well, would be interesting to know how it works and what the different companies are.
Very impressive; however, I love looking at old and faded films and photos - it's the nostalgia of it, a time gone by. My grandchildren, seeing old photos, asked why they were small and not so sharp; you explain it was the technology of the time. Upscale them and you have no idea of the age or era.
Nice to see how some of those amazing videos on YouTube are made. I'm talking about the ones from many decades ago that are now in colour, 4K and 60fps.
Maybe in 100 years' time people could be doing the same to your videos, Chris, turning them into some sort of holographic 3D version. 🤔
I never thought I'd hear Chris say the word 'waifu'. 😂
Next up: uwu
That was a very interesting look into AI video upscaling! Personally, I think Waifu (by the way, apologies, but "waifu" is pronounced WHY-fu) did just as good a job as AVCLabs' video enhancer. While Topaz does its job well, the other two knocked it out of the park! I'm hoping we'll get to see more AI-related stuff on EC in the not-too-distant future. :)
Thanks for your support, most appreciated. :) Here we are on another Sunday. I often fail on pronunciation. All I can do is try! :)
@@ExplainingComputers Aw, it's okay! I tend to goof up pronunciation often as well. That and misphrase stuff. Either way, it's not something to spend any time beating yourself up about.
EDIT: Forgot to say you're welcome!
"Y-Foo" indeed. Japanese phonetics are closer to Spanish or Portuguese than English. I agree that it did a great job, and the others can't compete on price. Plus the perennial-subscription business model is a nonstarter for me.
Incidentally, it's also TOW-paz, and not TOP-az.
@@cavalrycome Like he said in his reply, all Chris can do is try.
@ExplainingComputers Please do more videos on AI software, I want to learn more about software available to the public.
I am always trying to find ways to feature AI on the channel. The problem is, such videos are rarely popular. But I will keep trying! :)
Cheers for this Chris. I am going to be (for the past x years actually) digitising a bunch of old DV tapes so this could be interesting to look at.
Hey Chris, will you be doing a video about DALL-E 2 and Stable Diffusion as well?
I personally liked the Waifu2x result the best. I think more details were created in the AVCLabs video than were necessary. It did not pick up on the bokeh at all, while the others enhanced it for what it is, which created nice sharpness on the subject against it. It would be a great idea to do some home video enhancement tests with these programs, because that's what most users would want to do with them.
Too bad he didn't shoot the video intro/outro segments on an old VHS camcorder, it would have been interesting to see the results of the AI processing on those. :)
@@LeftoverBeefcake That's definitely my use case. I have several old VHS home recordings that I've copied onto my computer using an RCA-to-HDMI converter and an HDMI video capture (streaming) device. They are now technically 1080p videos, but are obviously very low-res and need some help to look nice and be fun to watch on my TV.
Thank you so much for this video! I tried Waifu a while ago and got pretty bad results, so at least this gives me hope that the paid alternatives aren't necessarily much better and that with more tweaking I could do better. I'd like to point out that AVC makes parts of the image look hand-painted, which is very visible in the white spot of the wing. Waifu did that to the video I tried, and AVC seems to do it even more.
Hearing the word waifu come out of Christopher's mouth was surreal. Reminds me of when my ma would say pokemans back in the early 2000s when I was a child.
Also take into consideration what file size you want for each video file - the higher the resolution, the bigger the file. Great video here, I enjoyed it from start to finish. Good job done 🙂👍
My 1st Super Thanks.
have a wonderful day Chris :)
Thanks for your support, most appreciated. :)
Greatly enjoyed this video.👍 Whereas I would not have looked into these before, your informative and entertaining introduction of them raises interest. Kindest regards.
On Topaz Labs you had the constant rate factor (CRF) of the video encoder set too high. Higher means lower quality and smaller size, because it works like a quantiser - it's upside-down. A good value range is 18-21.
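A quick way to convince yourself of the effect: encode the same clip at two CRF values and compare the sizes. A rough sketch (the input file name is hypothetical; for x264, CRF 18 is near visually lossless, while 33 is heavily compressed):

```python
# CRF is a quality target, not a bitrate: higher CRF = smaller, worse file.
import os
import subprocess

for crf in (18, 33):
    out = f"clip_crf{crf}.mp4"
    subprocess.run(["ffmpeg", "-y", "-i", "clip.mp4",
                    "-c:v", "libx264", "-crf", str(crf), out], check=True)
    print(out, os.path.getsize(out), "bytes")  # expect CRF 33 far smaller
```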
An interesting topic - there are some very nice examples of this on t'internet. Great version of DS9 remastered.
As a retiree with a laptop, skimping by to deal with inflation and tax increases, I always seriously look at free or low-cost software. I have plenty of time between doctor appointments so speed isn't a problem. I sincerely thank you for bringing Waifu to my attention.
Watching the paint race up and down the walls Sucks, Been there.
Really glad you made this comparison video! I have been considering image enhancement software for old recordings incorporated into new videos but I was skeptical on how well they actually worked. Also thank you for including programs that work on a Mac.
Tremendous. I'd say AVC does have the edge to my eyes. With the evenings drawing in I'm going to be busy for the next few months.
Sounds like you have a project! :)
@@ExplainingComputers My intray is getting deeper. I'm still playing with the wifi BME280. It only works in my phone browsers but I may try and incorporate the spreadsheet idea.
Another traditional upscale method is nearest-neighbor, which looks less blurry but more pixelated. Nearest-neighbor is good for screen recordings, but not so much for real-life video. It also only really works for exact integer multiples.
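For the curious, both traditional methods are one-liners in Pillow; a small sketch with a hypothetical source frame:

```python
# Nearest-neighbor keeps hard pixel edges (good for screen captures);
# bicubic interpolates, which looks smoother but blurrier on real video.
from PIL import Image

img = Image.open("frame.png")           # hypothetical 640x480 source frame
big = (img.width * 3, img.height * 3)   # integer multiple suits nearest-neighbor

img.resize(big, Image.NEAREST).save("frame_nearest.png")
img.resize(big, Image.BICUBIC).save("frame_bicubic.png")
```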
Very nice comparison. It would be worthwhile, although take a lot more time and effort, to take an original video with two different cameras at the same time, one at low res and one at the higher res. Then compare the upscaled low-res video with the original high-res one.
I recommend Video Enhance AI. Used it to convert a ton of old S-VHS, Beta, 8mm/Di8 and DV footage.
Results are generally very good, but tend to vary from shot to shot.
Of more interest is standard VHS, or VGA at 15fps from compact cameras: footage which, although still valuable, definitely disappoints and might be improved. Chris's butterfly was perfectly acceptable as it was, so I would have preferred a lower-quality source; however, that might have revealed the weaknesses of those programs.
Greetings! Super Sunday video. Probably something I will not use but very nice indeed! Great attention to detail (no pun intended) in the video output comparison.
Ah, I don't know.
The AVC Labs output is clearly trying to "sharpen" the bokeh in the background, which is making it very cartoon-ish (like someone's applied a "posterise" colour reduction filter to it).
While Waifu and Topaz seem to have correctly interpreted bokeh as background and not tried to "fix" that which isn't broken (Waifu's done some smoothing that looks the most realistic to me).
Indeed, I generally feel that the AVC Labs is trying too hard, and looks like someone who's cranked the "sharpen" filter on Photoshop way, way too unnaturally high. As you pointed out, it's so keen to sharpen things, it turned motion blur on the antenna into two distinct antennae (!). It's so keen to sharpen, it's actually misinterpreting the image.
It's like it's trying way too hard to please - to look super-sharp - that it's actually going too far and doing the wrong thing. Leave the bokeh alone, don't "undo" motion blur, stop trying so damned hard to look the sharpest of the sharp, as it's driving the AI to error.
But, ah, this is a subjective thing, so opinions will vary.
As always...accurate information and demonstration.
Since we got rid of video interlacing, the upscaling processing is much easier.
Thanks for this video. This is a very exciting field of technology!
Never expected to hear "anime" on this channel! Anyway, Waifu2x looks promising for me, as I have used the image upscaler a lot.
I didn't expect waifu related content on this channel.
Very interesting. I didn’t even know this type of software existed.
Dude, this is the only video that works. Thanks for posting!
In my opinion, AVCLabs looks slightly overcooked while Topaz is a bit underdone, while Waifu2x is just right, as Goldilocks might say.
Great video as always
Love your walkthrough of the different programs.
Thanks for sharing your experience with all of us 😀
Excellent subject and great results. It's amazing what AI can do. Thanks for sharing Chris.
Greetings, Brian!
As always informative and the result of a great deal of work.
By the way that's not a Red Admiral, it's a Peacock Butterfly😀
Paul.
Considering the number of repeats of SD programmes, it is a shame that the TV companies don't use this technology - or are they waiting for the TVs to do it on the fly?
The manual settings in Topaz can theoretically get the best results, but it takes some testing for each video. Interlaced sources are often problematic; it helps to deinterlace using AviSynth (either in real time or by rendering to a new video file).
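Not AviSynth, but as a sketch of the render-to-a-new-file route, here's the equivalent with ffmpeg's yadif filter driven from Python (the filenames are assumptions):

import subprocess

# yadif=1 outputs one progressive frame per field, doubling the frame
# rate, which AI upscalers generally cope with far better than combing.
subprocess.run([
    "ffmpeg", "-i", "interlaced_capture.avi",
    "-vf", "yadif=1",
    "-c:v", "libx264", "-crf", "18",
    "deinterlaced.mp4",
], check=True)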
If the Constant Rate Factor in Topaz Labs were set to 20 or 18, the quality of the video would improve.
AVCLabs 👎
Topaz Labs👍
Waifu2x 👍👍
Nice, it's Sunday again. Thanks for taking the time to make a quality video.
New subscriber here, found the video very informative. Really want to get into A.I. software just to see what could be done with it. It would be nice to upscale all the videos I've collected from TH-cam over the years.
Thanks for the sub, and welcome aboard! :)
I may try this in the future. Thank you for the content. I know what to look for in video upscaling software. Cheers!
Eerrrmm, that is a *Peacock* butterfly, NOT a Red Admiral.
Apart from that, this is a technology that I'm very interested in, having quite a few old low-resolution videos I could make use of if I can enhance them.
Definitely going to give Waifu a try ! 😄
I stand corrected.
Glad someone pointed that out 👍
Our local council should be using this for their poor-quality CCTV footage of miscreant perps. I sometimes think they don't upgrade because catching people incurs additional cost 🤔 Sorry 😁 I'm being very cynical today. On that note, another great video 😁👌
The AI can't recover actual detail; the information for that just isn't there. It makes up detail that looks plausible. Techniques like this might be useful in some ways, but if you're thinking of using video enhanced in this way as some kind of legal evidence, I expect there would be problems with that.
@@richard_d_bird yeah, I think you're spot on there.. 👍
@Dick Bird -- yes, I agree. You put this very well. The AI is trying to restore what it thinks the image is trying to show.
It's more of an art filter really..
@@richard_d_bird You can recover actual detail using multiple-image super-resolution techniques. If you have the same subject but it's moving, you can gain sub-pixel data. I don't see that Waifu2x does this with any of its supplied algorithms, but Topaz absolutely does.
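For the curious, here's a toy sketch of the shift-and-add idea in Python, assuming the per-frame sub-pixel offsets are already known (real pipelines have to estimate them by registering the frames first, and this is just the general principle, not Topaz's actual algorithm):

import numpy as np

def shift_and_add(frames, offsets, scale=2):
    # frames: 2-D greyscale arrays of the same scene, all the same size
    # offsets: per-frame (dy, dx) sub-pixel shifts, assumed already known
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        # A frame shifted by half a pixel lands on different cells of the
        # fine grid than an unshifted one - that's the recovered detail.
        ys = np.clip(np.arange(h) * scale + round(dy * scale), 0, h * scale - 1)
        xs = np.clip(np.arange(w) * scale + round(dx * scale), 0, w * scale - 1)
        acc[np.ix_(ys, xs)] += frame
        hits[np.ix_(ys, xs)] += 1
    # Average where samples landed; cells never hit stay zero here,
    # whereas real implementations interpolate them.
    return acc / np.maximum(hits, 1)

With scale=2 and offsets like (0, 0), (0.5, 0), (0, 0.5) and (0.5, 0.5), every cell of the fine grid ends up holding a genuine sample rather than an interpolated guess.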
Unfortunately it's then a question of whether you really trust the outcome. It's a general problem with image-processing techniques, since they all produce artefacts, many of which can even look plausible - and AI turns this problem up to 11, since it's not even provable how it arrived at the outcome, what is hallucinated and what is actual source data.
But sometimes it's also not such a big problem. You apprehend the suspect, and then use other means to see whether they are or aren't the culprit. Maybe there's an eyewitness, maybe there's tracking data demonstrating their movement. Then you have CCTV showing that there was only one person at the scene, that it couldn't have been someone else, and that's enough. But yeah, catching people is expensive, even if you did have genuine HD footage. And what about the DVR that runs the CCTV? Those aren't particularly kind to the image either, and video compression can produce plausible-looking but wrong detail even at higher quality grades.
Interesting stuff. We use AI image recognition at work. It’s fascinating. Thanks as always.
Greetings on another Sunday. Will that rocket ever get into space?
@@ExplainingComputers - yup, it’s the biggest rocket, ever. Good thing they decided to put it back in the garage, we had quite the storm. 👍
It's good to see what's becoming available. I wish you had used videos with faces. That's a whole lot easier to compare than say butterfly wings.
"Waifu" is pronounced like "wife-ooh".
Because this Japanese anime term does actually originally derive from the English word "wife".
Basically, some of the more obsessive anime fans - or "weebs", if we're being more derisive - like to declare sexy anime babes to be their fictional wives, or at the very least to be what they consider "wife material".
Yes, someone is definitely having some silly fun naming their AI software "waifu" like this.
But I guess we shouldn't be surprised, as open source loves a ridiculous name. Like "GIMP" or using recursive acronyms. So, yes, why not name an AI upscaler to be your fictional anime "wife"? It's an open source project. Such things are strangely "normal" in that realm.
But, yeah, thought I'd give you the pronunciation and also the explanation of what it is, that might also avoid you having "waifu" typed into your Google search history for others to see and wonder what on Earth you've been getting up to. Well, unless you're into that sort of thing. No judgements.
Thanks Chris. Had been looking to install some AI video upscaling software and hadn't heard of the free Waifu2x-Extension-GUI before. Will give it a go. Hopefully it runs on Win 7.
Topaz AI requires Win 10/11, but AVCLabs Video Enhancer AI will run on Win 7 apparently.
PS It's a Peacock butterfly, not a red admiral!
Great video Christopher. I'm not an expert, but that looks like a peacock butterfly, not a red admiral.
The ultimate test would be to take some of the digitised standard 8 cine footage (originally shot by my grandfather) and compare the faces of my relations with those shot on 6cm x 6cm 120 roll film of the same age and source, to see if the faces are still recognisable. I'd need a version that runs on Linux Mint with an AMD graphics card, targeting 4K resolution.
Medium format film is still the best quality you can get unless you have a high five-figure or low six-figure budget for the sensor, especially with a slow film in the 25-40 ASA range.
That is why it was always used for outdoor advertising photographs (studio photographs used plate or sheet film that was even larger) intended for enlarging to fill posters the size of a house, which came on several rolls to be pasted up. It is also why the camera chosen to go to the Moon for still images was a Hasselblad - or rather two of them for each landing mission (and they are still there - free to the first person who can collect them).
Any decent Linux equivalents?
I may be wrong, but I believe WAIFU is pronounced WHY FOO. And thanks for the vid! Now those potato-cam vids from 15 years ago can be upscaled.
Thanks for this, and sorry if I pronounced it incorrectly. Several different versions have been suggested here now. :)
@@ExplainingComputers No apology necessary - jargon is hardly ever transparent.
The Waifu2x looked the best, to me, although it is quite slow to process. Do these have server side versions that can be used sort of like a render farm?
I am not aware of render farm / multi-machine rendering for these packages.
Not one for novices but still very interesting and informative. Many thanks
Waifu... a word I never thought I'd hear on this channel. :D
Thank you Chris for the estimate of future real-time upscaling. I have debated whether AI upscaling is even worth it long term, given the time to encode versus the actual overall benefit beyond some extra clarity. I can only think of a few reasons why I would upscale video with this type of software. #1 would be if the video quality was very poor, the footage wasn't interlaced, and there were no sources for higher-quality versions. I have done that with some old westerns, and with the right settings (a full day of encoding or more...) the picture quality is remarkable, although faces still look a bit warped, similar to DALL-E's early versions. #2 would be a business-related video that needed to be enhanced - AI upscaling can tell the difference between a face and a rock and whatnot. Don't expect miracles: if you convert 240p to 8K (I know it's silly, but go with it), it will not look normal whatsoever. But do it anyway... it's a lot of fun. I might play with Waifu in the future, but I'm still running a GTX 970 and my card hates me for all this upscaling....
Did you get Waifu to work? I tried it and it would not even install, but mine is an old non-compliant Windows 11 machine.
Very interesting. You mention being able to do this in real time at the end; the Nvidia Shield claims to be able to do this already, though only from 720p up to 4K, so I guess lower starting resolutions can't be too far away.
Great video on AI
I would just like to say that I found (on a much earlier version of Topaz Video Enhance AI) that the suggested settings were literally *never* the best ones for any of the videos I used it on. I was using it on mostly impossible-to-find music videos and TV shows originally from VHS, and it took maybe a couple of days of experimentation to get results that didn't have a lot of major-to-minor artifacting, or that didn't end up less sharp. I can tell from the UI here that they've changed a lot since then, so maybe that has improved.
I've already done a few video upscales with a different method. Topaz also has the Gigapixel AI software, which can be used to batch-upscale pictures. So I extract video frames as PNG images using ffmpeg and upscale them in Gigapixel AI. Then I use ffmpeg again to assemble the images into the upscaled video. It works really well and is reasonably fast, with my Radeon RX 570 8GB taking 2-6 seconds to upscale frames from under 1280x720 to Full HD. The resulting files look better than everything I've seen in this video, though, as someone else pointed out, you had the CRF set to 33 in Video Enhance AI, which caused the resulting video to have a low bitrate. The higher the CRF, the more compressed the video will be.
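Roughly, the round trip looks like this from Python - a sketch with assumed filenames and frame rate, and with the Gigapixel step done by hand in between:

import os
import subprocess

src = "input.mp4"  # hypothetical source clip
fps = "25"         # assumed frame rate - match it to the source

# 1. Explode the video into numbered PNG frames.
os.makedirs("frames", exist_ok=True)
subprocess.run(["ffmpeg", "-i", src, "frames/%06d.png"], check=True)

# 2. Batch-upscale frames/ into frames_up/ with the image upscaler of
#    your choice (this is the step done in Gigapixel AI by hand).

# 3. Reassemble the upscaled frames, copying across any audio from the
#    original ("1:a?" makes the audio stream optional).
subprocess.run([
    "ffmpeg", "-framerate", fps, "-i", "frames_up/%06d.png",
    "-i", src, "-map", "0:v", "-map", "1:a?",
    "-c:v", "libx264", "-crf", "18", "-pix_fmt", "yuv420p",
    "upscaled.mp4",
], check=True)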
I just found Upscayl, a Linux-first cross-platform AI upscaler. I'm surprised you didn't include it. I'm on ArcoLinux running i3-gaps, and I only had to install the AMD Vulkan driver. As an avid shutterbug of a quarter century, I'd say the results are mixed, but with additional tweaking they can show clear improvement. I went from 1024x768 to HD and I'm pleased with the results. Also, there's GIMP, which I prefer by comparison.
From what I see, it is only an image upscaler. How do you get it to do video files?
@@SuperDavidEF I can't, because it doesn't - but it's a worthy side note, and a topic for another video.
My new graphics card has AI tensor cores for AI upscaling of games, and I tried it out of curiosity, and it works amazingly well. If it weren't for the odd bit of oddness in the sky, I probably never would have realised I was running at 720p with upscaling on.
If I had believed the advertising (I knew about it, but I'd used enough very ugly upscalers to be more than slightly sceptical, and had no way to test without buying), I could've saved £50 by buying a lower-tier card.
Curiously, the DLSS 2 AI doesn't actually try to invent details, and primarily doesn't even work on pixel data - the visible parts of the image. It's the same as TAA (temporal anti-aliasing), but instead of reconstructing the image at the source resolution, it reconstructs it at a higher resolution. A frame-to-frame camera offset with a magnitude of less than one pixel is injected into the renderer, forcing the game's renderer, over the course of several successive frames, to reveal details that are renderable but would normally be hidden between the pixels. Motion vector data from the game's renderer is used to forward-propagate the rendering of previous frames to the current frame to align them. At higher resolution this would, however, expose an ambiguity between detail revealed and detail obscured by pixels, which with naive blending results in excessive blurring and ghosting. So the tensor-core-accelerated AI produces a coverage mask, which tells which of the old pixels are still valid and to what degree; as inputs it receives the depth buffer and the motion vectors. So really it accelerates a known but slow function, and this is also why the training is application-agnostic: they can just generate random shapes and motion vectors to explore the complete useful space of the occlusion-mask function during training.
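As a toy illustration of just the jitter part in Python - no motion vectors, depth buffer or AI mask, purely the idea that sub-pixel camera offsets reveal detail sitting between the pixels (the scene function here is a stand-in for the game's renderer):

import numpy as np

def scene(y, x):
    # A fine-detail pattern standing in for the renderer's output,
    # which can be sampled at any continuous coordinate.
    return np.sin(40 * x) * np.cos(40 * y)

low_res, scale = 64, 2
high = np.zeros((low_res * scale, low_res * scale))
count = np.zeros_like(high)

rng = np.random.default_rng(0)
for _ in range(16):  # sixteen jittered low-res "frames"
    jy, jx = rng.uniform(0, 1, 2)  # sub-pixel camera offset for this frame
    ys = (np.arange(low_res) + jy) / low_res
    xs = (np.arange(low_res) + jx) / low_res
    samples = scene(ys[:, None], xs[None, :])  # one low-res frame
    # Scatter each sample to the high-res cell its jittered position hits.
    hy = (ys * low_res * scale).astype(int)
    hx = (xs * low_res * scale).astype(int)
    high[np.ix_(hy, hx)] += samples
    count[np.ix_(hy, hx)] += 1

# Average where samples landed: for a static scene, the accumulated grid
# holds detail that no single low-res frame contains on its own.
high /= np.maximum(count, 1)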
Mind you Chris, this technology is already available in real time in many current games (Nvidia calls it DLSS). I think it requires the latest generation of GPUs, though.
This gives us a chance to view old footage at 1080p. Normal upscaling is not as good as AI upscaling. What else can AI help us with in daily life?
Found a huge mistake: in the Topaz test you chose a very high compression rate, resulting in very compressed video quality. Just look at the file size - the 1080p upscale is 7 times smaller than the original file. The description says that values below 17 result in better image quality; you chose 33.
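For anyone who wants to see the effect for themselves, here's a quick sketch that re-encodes the same clip at a few CRF values with ffmpeg from Python (the input filename is hypothetical; lower CRF means higher quality and bigger files):

import subprocess

# Around CRF 17-18 is often described as visually lossless for x264;
# 33 is heavily compressed. Compare the sizes and quality of the outputs.
for crf in (18, 23, 33):
    subprocess.run([
        "ffmpeg", "-i", "upscaled_master.mp4",
        "-c:v", "libx264", "-crf", str(crf),
        f"test_crf{crf}.mp4",
    ], check=True)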
Waifu: a word jokingly created from the Japanese pronunciation of "wife" (why-fu), which has come to refer to cute girls drawn in the style of Japanese animation.
That aside, I wonder how it would look if you upscaled past your target resolution via ML and then scaled down to it conventionally, as opposed to upscaling straight to your target resolution. It was a common technique when we just didn't have enough pixels in games to make good anti-aliasing solutions, and I wonder if the technique couldn't get a second lease on life in this area.
A very interesting idea . . . I must try the up/down thing.
Several models are trained at 4x, but I find that's often way too much, especially when enhancing still images. So I let the AI upscale at its native factor, then scale back down to a preferred size with Python. It helps for some images where the AI seems to try too hard.
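In case it's useful to anyone, this is the kind of minimal Pillow snippet I mean (the filenames are placeholders):

from PIL import Image

img = Image.open("frame_4x.png")          # the AI's native 4x output
half = (img.width // 2, img.height // 2)  # settle on 2x overall
img.resize(half, Image.LANCZOS).save("frame_2x.png")

The Lanczos downscale averages away a lot of the over-sharpened artefacts while keeping the genuine detail the model added.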
Thanks for another great video! Would you know if there are any Linux based AI Up-scaling packages out there?
Pretty good video... though I thought the AVC package handled the blurry background worse than the other two packages.
What a great tutorial, very well spoken.
If only they could release a version for my brain so I can see things better 😎
Anyway, it's nice to see Windows Media Player still rolling 💯🚀
Thanks Chris
I have a couple of boatloads of old video tapes - 8mm and interlaced SD as well as old VHS tapes. I foresee a bleak future involving capturing and processing it all, whilst acquiring blood clots in my legs from immobility. (I just CAN'T walk away while it processes.) So I'll have you to thank when I croak from a pulmonary embolism! ; 0 (Actually, I'm immensely enthused - at the opportunity to bring old memories to vivid life! Thank you!!)
Waifu is Wonderful!
This was a super informative video
Thanks. I thought I'd do something a little different this week.
@@ExplainingComputers Really appreciate ✨
I feel it comes down to personal preference for the most part. AVCLabs looks too smeared out for me, like a painting. It's ironic, because I thought Waifu would end up looking more smeared, considering it's based on what was originally meant for animation and not real-life images. I've used Waifu for upscaling images before and it didn't look good on regular pictures the last time I used it, but that was years ago.
I think this kind of software definitely has its uses currently. It's commonly discussed, but I don't believe it should be used for restoring older films, games and TV shows, at least at this moment in time, since it can only really approximate details that were never present in the source, which can lead to inaccuracies and smeary results. For example, I once ran a painting through Topaz and the final result ended up looking more like a pencil drawing.
I've always seen it as more of a tribute than a restoration, personally. It is seriously impressive tech nevertheless.
I'm also ready to eat my words in 10 years' time when it becomes almost impossible to tell 😄
You can tune it to whatever outcome you want, there is a large model zoo to choose from, trained for different purposes. Conservative models just remove compression and video transmission artefacts and result in a fairly soft output, which is still cleaner and nicer to look at. Lack of artificial sharpness is what you want if you don't want to introduce hallucinations into the footage. But it is a fundamental trade-off.
Also, you still have to preserve the originals so you can re-process them when better methods become available. Or maybe the next person to process the footage will simply achieve a better outcome with the methods available today - that's always possible as well.
Worth noting that in Topaz, Artemis Low, Medium and High do not refer to the quality of the output, but to the quality of the input video. So for your 360p video you should have used Low or Medium. The difference is notable.
Ah, very interesting. Thanks for this.