You probably would have had better results using the Proteus model. In my experience Artemis heavily smooths and makes things look 'painted'. The newer Proteus model is genuinely amazing though and offers much more control over denoising, sharpening, anti-aliasing, etc.
Right? This is classic LTT half-assed results. I use VEAI in combo with several other programs while reworking videos, and you can take 480x270 video that's macroblocked to hell and have it come out damn near DVD quality or better. Depends on the source and how much effort you put in. Honestly, his first video is more than high enough quality for extremely good upscale results. But that takes effort, I guess. Tbf these are the same people who ran ZFS on top of Unraid (or were...) and constantly lost their data.
Yep, the auto feature on Proteus works so well. No need to adjust any settings that way... Although IMO removing the noise completely takes away the details in the video. Linus' old video looks like it was smoothed way too much. They should've at least previewed a sample for a minute before they rendered the whole video.
The issue is not the resolution or the interlacing on that first video. It’s the compression. 240p with zero compression would probably do a lot better.
Yeah, I was thinking the same. Compression creates distortions like those shown in the video, and enlarging the video by adding pixels also makes the distortions more visible. Would be interesting if they downsampled something uncompressed to 320x240 and tried the same method as here.
Isn't it all just adding up though? First it has to deal with a serious lack of pixels, then it's interlaced so every frame only consists of half the image and then there are compression artefacts all over the place on top of the interlaced frames.
@@klasta69 Sort of, but the compression is essentially creating new information that confuses the AI. If we simply remove data, such as removing every second line or reducing 8 pixels to 2 pixels, we can do a reasonably good job of interpreting what those missing pixels might look like. Compression artifacts make that much, much harder, since the AI has a hard time telling the difference between what is real detail and what is "detail" created through compression.
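A toy numpy illustration of that distinction (my own, not from the video): cleanly removed samples can be interpolated back almost perfectly, while coarse quantization, standing in here for lossy compression, injects "detail" that can't be distinguished from the real thing:

```python
import numpy as np

# A smooth 1-D "scanline" signal.
x = np.linspace(0, 2 * np.pi, 64)
signal = np.sin(2 * x)

# Case 1: data simply removed (every second sample dropped),
# then reconstructed by interpolation -- like deinterlacing/upscaling.
kept_x, kept_y = x[::2], signal[::2]
reconstructed = np.interp(x, kept_x, kept_y)
interp_error = np.abs(reconstructed - signal).mean()

# Case 2: coarse quantization stands in for lossy compression --
# it injects new "detail" that was never in the original.
step = 0.5
quantized = np.round(signal / step) * step
quant_error = np.abs(quantized - signal).mean()

# Missing samples interpolate back almost perfectly; quantization
# error is spread everywhere and can't be "un-guessed".
print(interp_error < quant_error)  # True for this smooth signal
```

Real compression is block-based and far more complicated, but the asymmetry is the same: removed data is recoverable by guessing, injected data is not.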
As a user of Topaz for 3 years now, that is totally true: a raw high-bitrate 720p video has more potential than a low-bitrate 1440p video. It's all about bitrate.
I feel like the real breakthrough will be when AI is able to incorporate external references. For example, upload the 320x240 video, but also the NCIX logo, a few good pictures (or 3d model) of the cooler and motherboard, and maybe even a photo of Linus. Then it can use that data to extrapolate more intelligently.
This AI has helped me with CGI render times. Simply rendering at a lower resolution and then upscaling with the AI saved me hundreds of hours of rendering.
@@SomeRandomPiggo Not sure what settings I've used, but it def only takes like 5-7 seconds per frame. I'm mostly upscaling 720p/1080p to 4K (using an RTX 2080).
@@SomeRandomPiggo Do realize that for rendering a CGI scene you need to calculate lighting/shadows, the objects, and their textures, while with upscaling it's just pre-existing pixels.
My issue with these algorithms is that they are never as good as your brain at interpreting what you are seeing, and the changes they make end up further obscuring some details from the original footage.
2klicksphilip made an in-depth video on his use of Topaz Gigapixel AI. He specifically goes over how to bypass the negative effects of interlacing when using the upscaler. His video is called "Upscaling my Videos using AI". He could probably masterfully upscale your first video. Definitely try to get in touch with him.
Usually if you were upscaling something, you would go scene by scene and pick the best method for what was being shown, and then re-edit them back together. Not just pick one mode and then hope for the best. It is cool tech though.
I use this software and it does not work like that as it renders one file at a time (you can queue files each with their own optimisation settings). If you were to manually split each scene then yes I agree.
I use Topaz' AI upscaling for a lot of my 3D renders. It takes a lot of time to render 4K or even 6-8k video. It saves me so much time to upscale rather than wait DAYS for the render in a higher resolution. It's not perfect, but it's great for any online content that will otherwise be compressed.
Please do more of this type of video! Would be really cool to see you guys look into the tech being used to remaster historical footage such as what was used in They Shall Not Grow Old
Proteus is really the way to go with VEAI, at least most of the time. It lets you fine tune all sorts of things, from how sharp the image should be and how much noise needs to be removed to the strength of compression artifact removal. Also a CRF of 0 is insane, you'd be fine with 14 or even higher.
Proteus Fine Tune is 100% the way to go. Use the suggested settings to get you started then play with every slider individually. It definitely takes time to get some really solid results, but it is possible! Only CRF 0 if you're running it through multiple times, but then at that point, just use 16Bit Uncompressed Tiff for maximum quality 🙃
@@NickByers-og9cx Definitely! Artemis seems like it's just Proteus with preset settings for each scenario. You just have to mess around with Proteus to get good results. Also, sometimes it's impossible to get solid results with some footage, and the only thing you can do is remove compression artifacts and leave the resolution upscale value at 100%.
Linus, a co-worker and I did a project like this on and off after work in 2019 with a few of Topaz's programs and a video split-and-merge program we found on a Blender forum. It was still very time-consuming: using his Ryzen 5 2600 with a 2070 Super and my 9900K with a 2080 Ti, it took just shy of a month, processing a batch of frames for 8-14 hours every other night. This was so we could still use our machines to game and create. The stock footage was an old 240p or 360p video of an infamous Holiday Special involving space, to watch together for a holiday party. We did this by breaking out every frame and testing, over a week, what looked best upscaled, sharpened, and denoised across a few different environments: low light, high light, low action, and high action. We then had a few other coworkers help judge which settings looked best before we ran the full batch. In the end every frame was upscaled to 4K with our magic settings for the different scenes, then merged back together. NEVER AGAIN lol
Hah, definitely wasn't the right model for it, BUT Video Enhance AI is freaking awesome. I used it on VHS tapes and had decent results on some tapes with the DV/Analog methods. Also, FW900, my forever unattainable holy grail T_T
Have you tried VapourSynth before? It's really popular in the anime community for restoring or improving anime. But I guess you can use it on anything.
I've played with this software. The best results I've had were with incremental treatments. I have tested this on 2005 clamshell-phone videos and it's pretty incredible. For me a good option was to first use the high compression setting and *keep* the original resolution (it really helps). Then I go for blur or a more advanced setting, and then once more while increasing the resolution as well. At least 2 or 3 passes will bear great results. But you need a very good machine... it will run on a GTX 970, but you can't expect to do more than maybe 10-15 min clips, as it's going to take a day per pass. They also have picture enhancing software and it's very, very, very good. And when it's not, it's still okayish, good all in all. Can't wait to see this tech in 5 years.
From my experience, when you upscale a video, or really anything at all, it's better to go incrementally. If you start with a 480p video you'll get better results upscaling 480 -> 720 -> 1080 -> 4K than trying to go straight from 480 -> 4K. I deal with converting analog and digital videos every day in my work.
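To make the incremental chain concrete, here's a minimal numpy sketch (my own illustration; nearest-neighbour resizing stands in for whatever upscaler or AI model you'd actually run at each step — the quality gain in the real workflow comes from that model plus per-step cleanup, not from the resize itself):

```python
import numpy as np

def upscale_nn(img, h, w):
    """Nearest-neighbour resize (placeholder for a real upscaler pass)."""
    rows = np.round(np.linspace(0, img.shape[0] - 1, h)).astype(int)
    cols = np.round(np.linspace(0, img.shape[1] - 1, w)).astype(int)
    return img[rows][:, cols]

frame = np.random.rand(480, 640)          # a 480p-ish frame

# Incremental chain: 480 -> 720 -> 1080 -> 2160, each step a modest factor.
out = frame
for h, w in [(720, 960), (1080, 1440), (2160, 2880)]:
    out = upscale_nn(out, h, w)           # real workflow: denoise/model pass here

direct = upscale_nn(frame, 2160, 2880)    # single 4.5x jump for comparison
print(out.shape == direct.shape == (2160, 2880))  # True
```

Both paths end at the same resolution; the commenter's point is that the intermediate passes give each model a more reasonable scale factor to work with.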
Do you have some samples to demonstrate? You could upload them right to YouTube. It would be good to see someone who knows what they're doing instead of sponsored amateurs just saying a product is amazing.
@@EJD339 I'm an independent contractor who does live visuals for shows and festivals, but I have a degree in electronic integrated arts, where the bulk of my research related to emerging and obsolete technology, focusing more specifically on analog and digital computers. I work a lot with analog video synthesizers, which mostly operate in NTSC (some PAL), and I have done tons of digitizing and upscaling of analog content, trying tons of different hardware- and software-based upscalers. The bulk of my findings, regardless of what type of upscaler you use, is that working incrementally typically yields better image clarity than going straight from an SD image to 4K. Currently I'm developing digital software that replicates analog phenomena.
@@VeI_2.0 Not necessarily. Especially if there's digital noise and artifacts in the original video. Upscaling those all at once causes a ton of messy results. But cleaning them up a bit and then upscaling again can make the process better. However, you can easily end up losing details that way, so it's pretty tricky.
This makes me wonder what it would look like with old gaming videos, since it's trained primarily to upscale real-life footage. Alternatively (and possibly even better), what if it could be trained to specifically remove h.264 artifacts?
Don't know if it's the same algorithm or a competing one, but I've seen upscaled 60fps anime and it looks horrible. It looks like they played the footage through Vaseline.
@@hiurro That's because almost all animations are intended and drawn for specific frame rates. If you run something like that through the same algorithm that is trained on IRL footage which doesn't have an "FPS" then it'll obviously give dogshit results. It'll probably be a lot different in games though.
You guys could do a whole thing on these history documentaries that upscale and add color to REALLY old footage. Highly recommend the WW2 documentary called Apocalypse
I just lost a bunch of paintings (along with my apartment) in a catastrophic flood in my town (Lismore, Australia). Some of them were for an exhibit I have coming up, and I was able to use a few programs of the Topaz software suite to upscale and sharpen some of the pictures I had which weren't on my phone (phone and camera got destroyed by the mud and water). It has absolutely saved me as I've been able to print the paintings I had, and work further with them making works which reference the flood itself.
I've been using Topaz's suite for about 6 months or so. I've mainly been recovering old YouTube videos (Muppets) and vacation footage I took on a really low-quality digital camera back in the early 00's. The key to getting decent output is recognizing the limitations of the software and keeping your expectations in line. Except for special cases like animation, you can't really expect it to do a quality render at more than double the existing resolution.

The film grain option is there to hide the problems. We're conditioned to like the film grain effect, and it hides things like the blurry carpet. For the carpet scene I probably would have split the original video apart and dealt with the different cuts with different settings, possibly even different models. Interlacing on a low-res piece of footage is basically the kiss of death. The software can somewhat improve the interlacing and the video's watchability, but it's not going to make it look great.

I've been working on the old Madonna music video "Like a Virgin"; I have put way too many hours into that. I have it up to the point where, when you're watching it, you're not looking through fog trying to figure out what's in the scene, but I'm hitting this uncanny valley limit where I make it better and the stuff that was in the fog doesn't look Euclidean anymore.

AI upsampling is absolutely amazing, but it's only good if you don't scrutinize it. It's the same with still images: the overall effect of the picture is stunning, but if you upsample something low-res to 4K and then start zooming in, you start noticing weird artifacts. If you don't look for them, they're fine.
I was an early adopter of the Topaz AI suite; it's amazing how far they've come since the version from only a few years ago. In a few years I wouldn't be surprised if this was good enough to use in production.
You didn't talk about the remastered footage having "glitches" on the firetruck ladder around 10:19! How could anyone miss that 🤣. It is awful once you notice it!
Yeah, this should've looked way better. Seems like they didn't take much time to research the topic properly. I expected them to actually reach out to someone who can train a model just for them based on their newer videos.
Does anyone else find the bits in the recent few videos, where Linus does the South Park impression of a Canadian, absolutely nostalgic and hilarious? Haven't seen any mention of that in the comments. It is so spot on as well, good job.
You know what I would like to see, and it could be a great idea for a video: remake your first video almost shot for shot today. Would be fun to see Linus attempt to redo his first video, same script, maybe newer cooler parts laid out meticulously on the bench, etc. Maybe even wear a reproduction NCIX polo shirt, or an LTT take on it. And of course use that same NCIX intro (or an LTT remake).
As a professional photographer, been using Denoise AI and Sharpen AI for *years.* Gigapixel is the real deal, or as close to it as you can get. Also, they offer a one-time license purchase, none of that subscription garbage. I'll happily pay for that.
@@AdamIverson That's okay with me, honestly. You still get lifetime use, and if at any point you'd like to get new features and functionality you can pay for that again.
@@captainvyom463 that's what I'm saying. I thought it was bullshit at first too, but it's not really the same until you use it on one of your own images. Once you see the effect it can have on your own shots, damn is it impressive. Nothing can ever be as good as getting it right in camera, but nobody is perfect and this helps bridge the gap a bit.
Linus, I know you'll probably never see this, but you don't know how happy watching your videos makes me. I went through a pretty rough breakup back during COVID. I was working from home and living far away from my family and so I dived into building a gaming PC and somehow managed to stumble across your channel before I had made any major purchases. Watching your videos gave me a huge step up to have a very positive experience when piecing together my rig, and now I greatly enjoy your channel for information and entertainment. It's one of very few comfort channels that I'll often go back and watch old clips now that I'm in a much better place in life. Thank you for what you do!
I've always been curious about what horrors you would find if you used something like Topaz or DLSS on games from the 3rd, 4th, or 5th generation and upscaled them to 4K. It would be morbidly interesting to see what the AI would create and mutilate with such a specifically low resolution to work with.
I guess it would be horrible because the models are trained with more natural frames. If someone trained an AI with sharper and more pixelated images, could it look good?
Linus doing the South Park Canadian flappy mouth thing made me laugh too hard, guy... Any chance of a 2022 Scrapyard Wars thing? (of course it'll all be about GPUs...).
As a computer vision scientist, I think it would be interesting for a lot of people if you did a video on how video super-resolution/"enhancement" (including DLSS, for example) is actually done with machine learning. The basic concepts are actually not that complicated to explain (making a good implementation is of course a whole other beast), and I imagine this would be a good fit for many in this audience.
4:55 Also, when it comes to interlacing, you don't see both fields at the same time. They don't get stitched together like you showed in the video; it's one field at a time.
@@SuperCartoonist That will do the wrong thing if there is motion on-screen. If there is no motion on screen, it would also halve the resolution. Though Linus kind of glossed over the whole time difference between fields.
VEAI has several different methods for dealing with interlaced videos. Some are for properly encoded interlaced sources, others are there to deal with an interlaced source that was improperly re-encoded to a progressive video. I've had very mixed results. It really just depends on how craptastically your source content was dealt with in the past.
@@jamesphillips2285 Some of the VEAI models for dealing with interlaced content will, in fact, double the frame rate to deal with this time difference you're talking about.
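For anyone wondering what that frame-rate doubling ("bob" deinterlacing) means mechanically, here's a toy numpy sketch (purely illustrative, not VEAI's actual code): each interlaced frame splits into its two fields, captured a half-frame-time apart, and each field becomes its own output frame:

```python
import numpy as np

def bob_deinterlace(frame):
    """Split one interlaced frame into its two fields and line-double each,
    yielding two progressive frames at twice the frame rate ('bob')."""
    top = frame[0::2]      # field captured first (even lines)
    bottom = frame[1::2]   # field captured ~1/60s later (odd lines)
    # Line-double each field back to full height. A real deinterlacer
    # would interpolate or motion-compensate instead of repeating lines.
    f1 = np.repeat(top, 2, axis=0)
    f2 = np.repeat(bottom, 2, axis=0)
    return f1, f2

interlaced = np.arange(480 * 640).reshape(480, 640)
f1, f2 = bob_deinterlace(interlaced)
print(f1.shape, f2.shape)  # both (480, 640): one frame in -> two frames out
```

This preserves the time difference between fields at the cost of halving vertical detail per output frame, which is exactly the trade-off the thread above is discussing.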
I've been using VEAI for about a year or so, and it's hit & miss when it comes to upscaling. I mostly do music videos, but I've done a full movie with pretty great results. It's usually best to get rid of artefacts before you upscale, or they will be upscaled as well. Frame rate upscaling is still in its early phases; it will often add artefacts if you pay close attention. When it works well though, it's awesome.
Looking at Linus's first video, I realised I have been watching him for almost half of my life. I built my own gaming rig when I started uni, using his advice, and about 13 years later, married, 2 kids, the whole shebang, I'm still watching Linus. The crazy part: apart from my family, there isn't anyone in my life right now that I've known for that long.
Topaz is awesome! I have been using it for about a year, and it keeps getting updates that improve it each time. Sadly my GPU is kind of limited when it comes to upscaling videos above 720p. But in the future, it might do the trick.
I have the Topaz upscaler for photos. The results, when you tweak it, saved a ton of images from a HDD crash where I had lower res thumbnails. Pretty good stuff.
I use it for digital DND tokens and for upscaling certain assets in the VTT builder when necessary. Also used it to upscale ultrawide or super-ultrawide 1080p to 5140x1440. Magnificent results, and Topaz keeps getting better.
YES! Very cool this finally got coverage on your channel! I wrote to Topaz (presumably before development began) asking them to make this software, and I got to privately test the alpha version of the AI. It was an online-only service at that time, so no front-end GUI running locally. Cool to see how far it's come since then! Gigapixel at the time was OK, but slow and not temporally coherent, which was the main thing that prompted me to write to them after seeing the results of the older FMA 2003 upscale attempt.

Also, I don't know every exact detail the coming D-VHS video will cover, but during your research and writing, would you take a quick look at LD-decode, even if you don't mention it in the video? It's a really interesting project, one that I haven't undertaken myself, but VERY intriguing IMO. It's for VHS too, not just LaserDisc. cheers :)
It's kind of freaky to see how far Linus (and his team as a whole) have come over the years. Linus's entire personality has changed completely since the start of the channel.
Too bad he still doesn't understand computers. That LIEnus video was a real bombshell. Lol I wonder what April 1st video gag they'll go with this year?
There's a YouTube channel called 2 Minute Papers that perfectly showcases quite a variety of video upscaling. I believe pattern recognition now does a whole lot better by recreating us from ultra-realistic 3D models and by treating the pixels like a mega-zoom lens for capturing fine details, like the James Webb Telescope, where every small movement becomes an enhancement as more time passes.
Nothing beats image-by-image scaling and denoising by hand, but it seems OK for 240p. I guess denoising at the lower res and then retrying for 4K would have worked better.
From my experience it certainly is better. Recently dealt with garbage 480p and did a few passes to bring it up to "palatable": Denoise->Chronos->Artemis.
Next time, call The Corridor Crew. They're kings of Special Effects. Great work as usual LTT! Cheers from Brazil!
I'm pretty sure some fans have tried doing this with the software. I think it's called Project Defiant. Results are mixed, the early seasons don't look as good as the quality of the original DVD source wasn't great to begin with.
Topaz labs is pretty amazing. I upscaled a movie of my grandparents from the 1930's and it did an excellent job. My relatives were all impressed and amazed
Wait. The Firetruck video was randomly being recommended to me not too long ago. I wonder if, when a popular creator engages with one of their old videos a lot, it gets promoted.
I can imagine that tech like this could be implemented as part of video compression algorithms in the future, if the algorithm can be optimised to run in real time, or if hardware improves to the extent that AI upscaling can be done in real time. So rather than trying to upscale a low-res video after the fact, compression would mean reducing the image to a lower resolution and then upscaling the video while it's being viewed. Presumably that could retain a higher-quality image than current compression technologies at a smaller size.
Actually, how this will work is that an encoder AI and a decoder AI will be trained together. The encoder AI produces a stream of data, and the decoder AI then tries to recreate the original using that data. This lets the encoder learn what the decoder is able to guess at, and the decoder learn how to guess from the encoder's data. They will also be limited in the amount of time they can spend per frame on a given minimum hardware spec. However, they could be trained to do a better job given better hardware and more time using the same data; some trade-off of a bit of extra data for better results on better hardware could also be made.
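A toy sketch of the "trained together" idea (purely illustrative; real learned codecs use deep networks, quantization, and entropy coding, none of which appear here): a linear encoder and decoder in numpy, jointly trained by gradient descent so the decoder learns to reconstruct what the encoder throws away:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "codec": the encoder compresses 16-dim frames to a 4-dim
# bitstream stand-in; the decoder tries to reconstruct the original.
X = rng.standard_normal((200, 16))
W_enc = rng.standard_normal((16, 4)) * 0.1   # encoder weights
W_dec = rng.standard_normal((4, 16)) * 0.1   # decoder weights

lr, losses = 0.01, []
for _ in range(200):
    code = X @ W_enc                 # "compressed" representation
    recon = code @ W_dec             # decoder's guess at the original
    err = recon - X
    losses.append((err ** 2).mean())
    # Mean-squared-error gradients for BOTH sets of weights -- this is
    # the "trained together" part: each adapts to the other.
    grad_dec = code.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(losses[-1] < losses[0])  # True: joint training reduces reconstruction error
```

The point the comment makes survives even in this linear toy: the encoder ends up keeping exactly the information its paired decoder cannot guess on its own.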
Love that this was done on a laptop. I am way more excited about this knowing it's done on average customer products not a monster computer, they do other projects on! Thank you for making this video!
Ya, a laptop with a 3070 in it. That's not consumer friendly in the slightest, considering how hard it currently is to even get that GPU anywhere near MSRP atm.
I love this software. I used it to upscale an old movie that was only released on DVD (480p) to 1080p and the results were awesome. It's a movie that my family loves, and I shared the results with them during a movie night. The overall consensus was a lot of praise from the family. I'd highly recommend a card with more VRAM if you can; I've seen the application use north of 12 gigs of VRAM when I really push it.
I wish they had spoken more about the reason remasters like music videos can look so good: some were shot on film, which gives the AI more info to work with than Linus' compressed digital video or any TV signals of the time.
When it's shot on film you don't need AI. You just get the original reels and scan them at a higher resolution. See: Wham's Last Christmas vs Smashmouth's All Star. This guy has a good long form video about it: th-cam.com/video/rVpABCxiDaU/w-d-xo.html
@@smith7602 That's missing the point entirely. Upscaling isn't remastering at all. They're two different things. If you have the source you make a new master from that source. If you have a compressed/digital version you upscale it. Or in other words, if you upscale the source then it's no longer the source, it's an upscale.
In a way, what this software is doing is 'deepfaking' a higher resolution video. At least, they both use similar machine learning (I refuse to call it AI) techniques .
You guys should make a swacket that only has a large LTT tag on the seam that you can notice for a minimalist look and for people who dont like wearing clothes with graphics on them. I know GMM does a minimalist line and the swacket idea is great.
My bet is that in the future, remastering this video will wind up looking pretty darn good. I would think logically that instead of doing 1 frame at a time, it really needs to take the entire video as aggregate and process things into objects so that it identifies your lips and combines all clips with your lips and your eyes and your shirt, etc.
In theory you could run multiple AI upscaler passes and models that work better on different parts of a scene and rotoscope them together. Would be much more time consuming, but for a bigger budget project could probably reach a much higher quality than a single model applied to an entire scene.
I've been playing around with Video Enhance AI privately for many months now. It's not magic, but you can achieve amazing results for many use cases. Here are some of my best practices:

Scaling above 2x is not feasible for most cases, and this also scales with the initial resolution. The 320x200 video Linus used first simply does not have enough information to scale it up by much; for low-res videos you might have to stick to 1.5x scaling only, or even less. If you give it more information to work with, the results will be much better. So 720p to 1080p should be feasible, and 1080p to 1440p works very nicely. I don't have any experience with 4K since I have no 4K displays at home, sorry.

I have also modernized a lot of old anime files. It cleans up old block artifacts quite nicely, and you can upscale 480p to 720p (1.5x) very nicely, for example. Use Artemis Medium for animation, or Artemis Low if the quality is really bad.

Also, I use a CRF of 13 and then re-compress the files with Handbrake, since the video codec options in VEAI are so limited. There is really no need to use CRF 0, or anything below 10 for that matter; it just makes the files much bigger. Throw away the audio, recompress nicely with Handbrake, and put everything back together with the MKV tools. It's a bit more work, but the results are much nicer. :)

What Topaz really has to improve is cropping and scaling to specific values. Old videos from the analog era have black borders (overscan) on the left and right sides, so getting such a video to a specific, clean horizontal resolution is a pain in the ass. :(
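A hedged sketch of how the recompress-and-remux step could be scripted (hypothetical filenames; ffmpeg used here as a scriptable stand-in for the Handbrake and MKV-tool steps the comment describes):

```python
import shlex  # import subprocess as well if you want to actually run these

# Hypothetical filenames for illustration.
upscaled = "veai_output_crf13.mp4"   # large file straight out of VEAI
original = "source.mkv"              # still has the untouched audio track

# Re-encode the big VEAI output at a saner size, dropping its audio.
recompress = (
    f"ffmpeg -i {upscaled} -an "
    "-c:v libx264 -crf 18 -preset slow "
    "video_only.mkv"
)
# Remux the original audio back in without re-encoding anything.
remux = (
    f"ffmpeg -i video_only.mkv -i {original} "
    "-map 0:v:0 -map 1:a:0 -c copy final.mkv"
)

for cmd in (recompress, remux):
    print(shlex.split(cmd))  # inspect the argument list before running
    # subprocess.run(shlex.split(cmd), check=True)  # uncomment to execute
```

CRF 18 is just one reasonable example value; the `-c copy` remux is the part that keeps the audio and the recompressed video bit-exact from their sources.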
We almost came to the same conclusion on everything, and I can attest that 1080p, hell even 720p depending on the source, to 4K can be really, really nice. I use CRF 12 myself and almost only use Artemis High Quality; it doesn't smooth things out so much, and when using pristine source files in their respective resolutions it gives great results!
A trick I found to work well is to upscale to 4k and then use a decent quality downscale to go to your desired resolution. Seems to give better results when upscaling dvds than just upscaling directly to 1080p
We use Topaz tools all the time at our print shop when clients send us photos from their phones (or even from their DSLRs). Pretty much everything that comes directly from a client, rather than in-house photography, is going through one or more Topaz tools before printing.
When I was writing Avisynth scripts to upscale 288p VCDs, it worked best by denoising, doubling resolution, increasing framerate, and doubling resolution again.
For those who don't know: you can also get upscaled 60fps or even 144fps in real time. Use SVP to increase the frame rate in real time (artifacts get masked with frame blending; it looks good and consistent). You can also use the madVR renderer to upscale the image in real time with pretty damn good quality. His first clip will already look a whole lot better viewed that way!
It has been quite a while since something literally made me spit whatever I was drinking out of laughter. The south park bit at 5:27 is the reason I am cleaning the coffee off my screen. Well done!
I used the Topaz software to upscale old TV shows from 480i to 4K, and I found that adding back the film grain really sold it. It masked the weird smoothness of surfaces that the software likes to produce. It was nice, but not worth buying the software after the free trial and hammering my GPU for ten times the runtime of each episode. It also didn't let me start multiple jobs at once to use different GPUs at the same time, but maybe that feature just wasn't available in the trial version. A few more years and I expect this kind of software will really hit its stride; once quick-and-dirty versions outperform the built-in upscalers of TVs and monitors, this could really cut down on bandwidth use for streaming services, for example.
While you can't set more than one job at a time inside VEAI itself, you can actually just run multiple instances of VEAI simultaneously and can just set each instance to run a different job on a different GPU (or even multiple jobs on the same GPU, VEAI doesn't make optimal use out of a lot of higher end hardware but if one job is only using half or less a GPU's processing power a second instance on the same card works great).
I've had no complaints with SVP (Smooth Video Project) so far, I've had only one really bad scene in hours of watching with v3, and that runs in real time for motion interpolation. Only problem is you're going to need a beefy CPU (or GPU if compatible) if you want to do both at once.
The only real-world use I've found so far for multiple upscaling "AIs", including the one from the video, is animated/drawn art and videos. I can recommend Anime4K for DVD-only releases like Drawn Together that look horrible on any modern TV and don't have any HD release. It takes a week or two even on beefy hardware, but the result is nice.
Media Player Classic does come with a pretty decent upscaler for animated content and can handle 4k upscaling at full speed. It's very likely not quite as sharp as anime4k but that won't be too noticeable if you're watching it from a couch, and it doesn't need a week of preparation.
Upscaling and changing animations to 60fps is really awful for animation. Animations aren't like video games, their entire art style is made to run at a certain frame rate. The drawing techniques get entirely screwed up when you change the fps.
I'd use FFMPEG to get an image sequence, process each frame with the least-bad upscaler in Upscayl, then recombine them again with FFMPEG. With today's processing power it wouldn't take days to do it.
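A minimal sketch of that round trip (hypothetical filenames and frame rate; the per-frame upscaling step itself is left out):

```python
import shlex  # import subprocess as well if you want to actually run these

src, fps = "old_clip.mp4", 30     # hypothetical input and its frame rate

# 1. Explode the clip into a numbered PNG sequence.
extract = f"ffmpeg -i {src} frames/frame_%05d.png"

# 2. (Run every PNG in frames/ through the upscaler of your choice,
#    writing results to upscaled/.)

# 3. Reassemble at the original rate; yuv420p keeps most players happy.
rebuild = (f"ffmpeg -framerate {fps} -i upscaled/frame_%05d.png "
           "-c:v libx264 -pix_fmt yuv420p restored.mp4")

for cmd in (extract, rebuild):
    print(shlex.split(cmd))
    # subprocess.run(shlex.split(cmd), check=True)  # uncomment to execute
```

Matching `-framerate` to the source is the easy thing to get wrong here; if they differ, the rebuilt clip drifts out of sync with the original audio.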
I've been using VEAI for a couple of months and WOW, I'm left speechless. I didn't think you could screw up a video about it that much. Even 10 minutes of fiddling with the settings could have produced so much better results. I can't believe a channel like this managed to pick the worst possible settings to show off a product. I mean, seriously? I know what this software is capable of and I couldn't reproduce results this bad. I'm really disappointed in you guys. Hope you'll make an update video on this while actually trying, because it's a great piece of software that more people should be aware of.
I would have appreciated seeing this in action with the original footage, before YouTube's processing. Also, what I find funny about the "Never Gonna Give You Up" video is that it was shot on film. Nowadays a retransfer of the old film reel may have been even better.
Eventually, the kind of AI upscaling tools used in Get Back will be available to the public. There, they fed in better quality audio and it picked out the individual voices and sounds onto separate tracks and cleaned them all up. It could do something similar for video. If you feed in hours of footage of Linus, it can know what the low res version of his eyes translates to in high res. Basically, deep faking (or as they are starting to call it, deep restoration). It'll be a little while before that's available and a long while before its affordable for home use. But it is coming.
So there I was. Minding my own black arse business watching some LTT. When all of a sudden Linus hits me with the south Park Canadian impersonation and with Linus being Canadian himself it was murder on my humor gland. I cackled and wheezed myself to near death and damaged my laugh box. That was far funnier than it had any legal right to be. 😂
I don't care about him being Canadian. I laughed my ass off at the "My arms look like they're from Half-Life 1" part, 'cos I've lived through those days of blocky polygon graphics and knew exactly what he meant. You gotta be an old-school gamer to enjoy such a joke.
Having worked extensively with Topaz Labs software, I can tell you from experience that upscaling low-resolution images is quite tricky. In my own projects, I found that for still images, I had to combine the upscaled versions with hand-painted additions that would correct the strange decisions their AI made. With video, I had to really mess around with the settings to produce something acceptable. This typically meant dialing everything down and forsaking finer detail in exchange for accuracy. I did some video restoration in combination with another AI-powered tool that added color to black-and-white film reels where this worked well. I'm excited to see how this technology will evolve over time.
Yeah, Topaz VE is amazing. I was filming some interviews for my company last fall, and it wasn't until the editing stage that I noticed I had accidentally shot all the footage at 640 by 480... Luckily, I was able to upscale everything with VE, and I even wondered if I should shoot everything this way, saving space on my memory card and upscaling only the footage I'll need later :)
You probably would have had better results using the Proteus model. In my experience Artemis heavily smooths and makes things look 'painted'. The newer Proteus model is genuinely amazing though and offers much more control over denoising, sharpening, anti-aliasing, etc.
Footage as low as 720p. Dude, 720p is still high resolution for me. 😂
Right? This is classic LTT half-assed results. I use VEAI in combo with several other programs while reworking videos and you can take 480x270 macroblocked to hell video and have it come out damn near DVD quality or better. Depends on the source and how much effort you put in.
Honestly, his first video is more than high enough quality for extremely good upscale results.
But that takes effort, I guess.
Tbf these are the same people using ZFS on top of unraid (or were...) and constantly lose their data.
Yep, the auto feature on Proteus works really well. No need to adjust any settings that way... Although IMO removing the noise completely takes away the detail in the video. Linus' old video looks like it was smoothed way too much. They should've at least previewed a sample for a minute before they rendered the whole video.
Hi Harry o/
like your videos harry :)
The issue is not the resolution or the interlacing on that first video. It’s the compression. 240p with zero compression would probably do a lot better.
Yeah, I was thinking the same. Compression creates distortions like those shown in the video, and enlarging the video by adding pixels also makes the distortions more visible. It would be interesting if they downsampled something they have uncompressed to 320x240 and tried the same method as here.
Isn't it all just adding up though?
First it has to deal with a serious lack of pixels, then it's interlaced so every frame only consists of half the image and then there are compression artefacts all over the place on top of the interlaced frames.
@@klasta69 Sort of, but the compression is essentially creating new information that confuses the AI. If we simply remove data, such as removing every second line or reducing 8 pixels to 2 pixels, we can do a reasonably good job of interpreting what those missing pixels might look like. Compression artifacts make that much, much harder, since the AI has a hard time telling the difference between what is real detail and what is "detail" created through compression.
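The remove-data vs. compression-artifact distinction can be shown with a toy 1-D "scanline". Dropping every second sample of a smooth gradient is fully recoverable by interpolation, while coarse quantization (a crude stand-in for lossy compression, not any real codec) leaves errors that no interpolator can undo:

```python
def interp_double(x):
    """Linear interpolation back up to (2n - 1) samples:
    the 'guess the missing pixels' step."""
    out = []
    for i in range(len(x) - 1):
        out.append(x[i])
        out.append((x[i] + x[i + 1]) / 2)
    out.append(x[-1])
    return out

ramp = list(range(17))               # a smooth 17-sample gradient
decimated = ramp[::2]                # drop every second sample (remove data)
restored = interp_double(decimated)  # the missing samples are guessed exactly
err_decimate = max(abs(a - b) for a, b in zip(restored, ramp))

quantized = [4 * round(v / 4) for v in ramp]  # crude "compression" step
err_quantize = max(abs(a - b) for a, b in zip(quantized, ramp))

# Decimation error is zero; quantization error persists in the signal.
print(err_decimate, err_quantize)
```

On smooth content, removed samples are perfectly reconstructable, while the quantized "fake detail" stays wrong no matter how you upscale it.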
As a user of Topaz for 3 years now, that is totally true: a raw, high-bitrate 720p video has more potential than a low-bitrate 1440p one. It's all about bitrate.
So it's not even 240p, it's 240i
4:10
I feel like the real breakthrough will be when AI is able to incorporate external references. For example, upload the 320x240 video, but also the NCIX logo, a few good pictures (or 3d model) of the cooler and motherboard, and maybe even a photo of Linus. Then it can use that data to extrapolate more intelligently.
That would be scary.
That would be epic and very cool
So vcd will look 4k
Isn't that kinda what dlss does using motion vectors
That is a great point and a way to bring information into the system that is not present in the video!!
This AI has helped me with CGI render times: simply rendering at a lower resolution and then upscaling with the AI saved me hundreds of hours of rendering.
very interesting method, love it. i bet it gives a sort of distinct "style" to your product too bc of the ai randomness.
i would have thought that would take longer lmaoo
@@SomeRandomPiggo not sure what settings I've used but it def only takes like 5-7 seconds per frame. I'm mostly upscaling 720p/1080p to 4K. (using an RTX 2080)
@@dylanlockemp3 not bad at all, some cycles frames can take as long as a few minutes for me lmao
@@SomeRandomPiggo do realize that for rendering a CGI scene you need to calculate lighting/shadows, the objects and their textures, while with upscaling it's just pre-existing pixels
My issue with these algorithms is that they are never as good as your brain at interpreting what you are seeing, and the changes they make end up further obscuring some details from the original footage.
Very interesting observation
2klicksphilip made an in-depth video on his use of Topaz Gigapixel AI. He specifically goes over how to bypass the negative effects of interlacing when using the upscaler. His video is called "Upscaling my Videos using AI". He could probably masterfully upscale your first video. Definitely try to get in touch with him.
man that guy does everything
I think Him and Taran would get along very well
interlacing not only makes the AI do weird things, it also kills the compression quality.
Reminder for myself to check that later
@@talha.4983 Nah not really, he does csgo, upscaling and lens flare. And, he's (soon going to be) the first case-unboxing millionaire.
Usually, if you were upscaling something, you would go scene by scene, pick the best method for what was being shown, and then re-edit it all back together. Not just pick one mode and then hope for the best. It is cool tech though.
I use this software and it does not work like that as it renders one file at a time (you can queue files each with their own optimisation settings). If you were to manually split each scene then yes I agree.
@@Dimmers do you use it at work or just as a hobby?
@@Dimmers Yes, you would manually split it.
@@Jehty_ work
@@Dimmers what kind of work is that?
Because I struggle to see where this tech would be used.
10:34 the ladder on the firetruck 4k is going nuts
Now Linus Tech Tips is Rick Rolling us, getting influenced by Mrwhosetheboss
Lmao
He's too dangerous to be left alive
A certified mrwhosetheboss classic
@@tmcg225 indeed!
and I see nothing wrong with that
I use Topaz' AI upscaling for a lot of my 3D renders. It takes a lot of time to render 4K or even 6-8k video. It saves me so much time to upscale rather than wait DAYS for the render in a higher resolution. It's not perfect, but it's great for any online content that will otherwise be compressed.
now if only GPUs could do that in real time while you're playing a game. oh wait... that's DLSS
@@tim3172 don't soil this comment section with facts!
@@tim3172 I hope for a future where we can get results like Proteus on DLSS 7.0, if it's still called DLSS by then
Please do more of this type of video! Would be really cool to see you guys look into the tech being used to remaster historical footage such as what was used in They Shall Not Grow Old
Ah, you mean the one that Peter Jackson did?
Proteus is really the way to go with VEAI, at least most of the time. It lets you fine tune all sorts of things, from how sharp the image should be and how much noise needs to be removed to the strength of compression artifact removal. Also a CRF of 0 is insane, you'd be fine with 14 or even higher.
yeah, a CRF value of 0 means the video is encoded losslessly. Looks like they didn't understand/know much about what they were talking about
Proteus Fine Tune is 100% the way to go. Use the suggested settings to get you started then play with every slider individually. It definitely takes time to get some really solid results, but it is possible!
Only CRF 0 if you're running it through multiple times, but then at that point, just use 16Bit Uncompressed Tiff for maximum quality 🙃
imagine a youtuber talking about color accuracy and stuff yet knowing nothing about video encoding
@@NickByers-og9cx definitely! artemis seems like it's just proteus with set settings for each scenario. you just have to mess around with proteus to get good results. also sometimes it's impossible to get solid results with some footage, and the only thing you can do is remove compression artifacts and set the resolution upscale value at 100%
I usually run a crf of 16.
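For context on the CRF numbers being thrown around in this thread: in x264 the scale runs 0-51, where 0 is lossless (enormous files), roughly 14-18 is visually near-lossless, and 23 is ffmpeg's default. A small sketch of an export command builder, with placeholder filenames and assuming the standard ffmpeg/libx264 flags:

```python
def x264_cmd(src, out, crf=16):
    """Sketch of an ffmpeg H.264 export command. CRF 0 is lossless,
    ~14-18 is visually near-lossless, 23 is the ffmpeg default."""
    if not 0 <= crf <= 51:
        raise ValueError("x264 CRF must be between 0 and 51")
    return ["ffmpeg", "-i", src, "-c:v", "libx264",
            "-crf", str(crf), "-preset", "slow", out]

# Example: the CRF 16 this commenter uses.
print(x264_cmd("upscaled_master.mov", "delivery.mp4", crf=16))
```

The point several commenters make stands: CRF 0 only makes sense as an intermediate when you plan to re-process the file again.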
Linus, a co-worker and I did a project like this on and off after work in 2019, with a few of Topaz's programs and a video split-and-merge program we found on a Blender forum. It was still very time-consuming: using his Ryzen 5 2600 with a 2070 Super and my 9900K with a 2080 Ti, it took just shy of 1 month, processing a batch of frames for 8-14 hours every other night. This was so we could still use our machines to game and create. The stock footage was an old 240p or 360p video of an infamous Holiday Special involving space, to watch together at a holiday party. We did it by breaking out every frame and testing, over a week, what looked best upscaled, sharpened, and denoised across a few different environments: low light, high light, low-action and high-action areas. We then had a few other coworkers help judge what looked best for the settings we would run. In the end it was upscaled to 4K frame by frame with our magic settings for different scenes, then merged back together. NEVER AGAIN lol
that's a hella smart technique you did !!
1 month?! that's crazy!
0:35
Wow, that was a clever way to rickroll us...
Hah, definitely wasn't the right model for it, BUT Video Enhance AI is freaking awesome. I used it on VHS tapes and had decent results on some tapes w/ the DV/Analog methods
Also FW900 my forever unattainable holy grail T_T
Hey EposVox :)
I'm curious - did you use a broadcast-tier VCR when sourcing the footage?
Have you tried VapourSynth before? It's really popular in the anime community for restoring or improving anime. But I guess you can use it on anything.
anybody know how to convert black and white to colour ?
Yes and no. AI has its limits. Also he made a mistake in doing a 4K upscale
I've played with this software. The best results I've had was in incremental treatments. I have tested this on 2005 clamshell phone videos and it's pretty incredible.
For me a good option was first use the high compression setting and *keep* original resolution (it really helps). Then I go for blur or a more advanced setting and then once more while increasing resolution as well. At least 2 or 3 passes will bear great results.
But you need a very good machine...it will run on a gtx970 but you can't expect to do more than maybe 10-15min clips as it's gonna take a day per pass.
They also have a picture enhancing software and it's very very very good. And when it's not, it's still okayish but still good all in all. Can't wait to see that tech in 5 years.
2:22
Linus (internally): don't say acid don't say acid
0:57 MOTHER OF INDIE HORROR! WHAT IS THAT?
5:30 Never thought I'd see the day a Southpark Linus would be so good! 🤣
this is how i see all canadians
god AI has gotten much better already
1:05 in the video, there is a very slight sound of the Samsung notification chime. 😎👍
YUHH
I thought that was my old phone and I walked over to it to find that it was dead
From my experience, when you upscale a video, or really anything at all, it's better to go incrementally. If you start with a 480p video you'll get better results upscaling 480 -> 720 -> 1080 -> 4K than trying to go straight from 480 -> 4K. I deal with converting analog and digital videos every day in my work.
Do you have some samples to demonstrate? You could upload them right to YouTube. It would be good to see someone who knows what they're doing instead of sponsored amateurs just saying a product is amazing.
You don't have to answer this but what do you do for your job that requires this?
@@EJD339 I'm an independent contractor who does live visuals for shows and festivals, but I have a degree in electronic integrated arts where the bulk of my research relates to emerging and obsolete technology, more specifically focusing on analog and digital computers. I work a lot with analog video synthesizers, which mostly operate in NTSC (some PAL), and have done tons of digitizing and upscaling of analog content and have tried tons of different hardware- and software-based upscalers. The bulk of my findings, regardless of what type of upscaler you use, is that working incrementally typically yields a better result in image clarity compared to just going straight from an SD image to 4K. Currently I'm developing digital software that replicates analog phenomena.
Strange. I'd have thought more conversions would lead to a loss of quality?
@@VeI_2.0 Not necessarily. Especially if there's digital noise and artifacts in the original video. Upscaling those all at once causes a ton of messy results. But cleaning them up a bit and then upscaling again can make the process better. However, you can easily end up losing details that way, so it's pretty tricky.
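There is a neat way to see why the cleanup step between passes is what matters: with plain linear interpolation, two 2x passes are mathematically identical to one direct 4x pass, so any benefit of going incrementally has to come from the nonlinear processing (denoising, artifact removal) done in between. A toy 1-D illustration, using a 3-tap median filter as a stand-in for "cleaning up" (real upscalers are far more sophisticated):

```python
def up2(x):
    """2x linear upsample of a 1-D signal."""
    out = []
    for i in range(len(x) - 1):
        out.append(x[i])
        out.append((x[i] + x[i + 1]) / 2)
    out.append(x[-1])
    return out

def up4(x):
    """Direct 4x linear upsample."""
    out = []
    for i in range(len(x) - 1):
        a, b = x[i], x[i + 1]
        out += [a, a + (b - a) / 4, a + (b - a) / 2, a + 3 * (b - a) / 4]
    out.append(x[-1])
    return out

def med3(x):
    """3-tap median filter: a crude stand-in for inter-pass cleanup."""
    return [x[0]] + [sorted(x[i - 1:i + 2])[1]
                     for i in range(1, len(x) - 1)] + [x[-1]]

noisy = [0, 10, 0, 10, 0]                  # alternating "noise"
print(up2(up2(noisy)) == up4(noisy))       # pure linear passes are equivalent
print(up2(med3(up2(noisy))) == up4(noisy)) # cleanup between passes changes it
```

So "480 -> clean -> 720 -> clean -> 1080..." and "straight to 4K" genuinely diverge only because of what happens between the resizes.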
Proteus fine tune almost always gives me a better result, even when then defaulting to auto variables
This makes me wonder what it would look like with old gaming videos, since it's trained primarily to upscale real life footage
Alternatively (and possibly even better) what if it could be trained to specifically remove h.264 artifacts?
Don't know if it's the same algorithm or a competing one, but I've seen upscaled 60fps anime and it looks horrible. It looks like they played the footage through Vaseline.
didnt they use some kind of upscaling in the infamous gta trilogy ''remaster''? didnt work too well
@@hiurro That's because almost all animations are intended and drawn for specific frame rates. If you run something like that through the same algorithm that is trained on IRL footage which doesn't have an "FPS" then it'll obviously give dogshit results. It'll probably be a lot different in games though.
no need for it, there's already things like dpir, you can kinda easily remove simple artifacts like from h264
There are other softwares for this too.
You guys could do a whole thing on these history documentaries that upscale and add color to REALLY old footage. Highly recommend the WW2 documentary called Apocalypse
do you know any software that adds colour to black and white videos ?
Old footage is just taken from the film print though not really upscaled
I just lost a bunch of paintings (along with my apartment) in a catastrophic flood in my town (Lismore, Australia). Some of them were for an exhibit I have coming up, and I was able to use a few programs of the Topaz software suite to upscale and sharpen some of the pictures I had which weren't on my phone (phone and camera got destroyed by the mud and water). It has absolutely saved me as I've been able to print the paintings I had, and work further with them making works which reference the flood itself.
5:04 just casually flexing with that mint condition fw900
oohhhgh i w,ant it
i guess that a review of that monitor is coming soon
I've been using Topaz's suite for about 6 months or so. I've mainly been recovering old YouTube videos (Muppets) and vacation footage I took on a really low-quality digital camera back in the early '00s.
The key to getting decent output is realizing the limitations of the software and keeping your expectations in line. Except for special cases like animation, you can't really expect it to do a quality render at more than double the existing resolution. The film grain option is there to hide the problems: we're conditioned to like the film-grain effect, and it hides things like the blurry carpet. For the carpet scene I probably would have split the original video apart and dealt with the different cuts using different settings, possibly even different models.
Interlacing on a low-res piece of footage is just basically the kiss of death for it. The software can somewhat improve the interlacing and improve the video's watchability, but it's not going to make it look great.
I've been working on the old Madonna music video "Like a Virgin". I have put way too many hours into that. I have it to the point where, when you're watching it, you're not looking through fog to try to figure out what's in the scene, but I'm hitting an uncanny-valley limit where, as I make it better, the stuff that's in the fog doesn't look Euclidean anymore.
AI upsampling is absolutely amazing, but it's only good if you don't scrutinize it. It's the same with still images: the overall effect of the picture is stunning, but if you upsample something up to 4K and then start zooming in, you start noticing weird artifacts. If you don't look for them, they're fine.
I was an early adopter of the Topaz AI suite, it's amazing how far they've come since the version from only a few years ago. in a few years i wouldn't be surprised if this was good enough to use in production
You didn't talk about the remastered footage having "glitches" on the firetruck ladder around 10:19! How could anyone miss that 🤣.
That is awful when you notice it !
Yeah, I can see it zipping back and forth; looks like it's having trouble judging the speed of repeating objects.
it reminds me of old z-fighting textures lol
Yeah, this should've looked way better. Seems like they didn't take much time to research the topic properly. I expected them to actually reach out to someone who can train a model just for them based on their newer videos.
basically because they upscaled the artifacts as well
@@MyNameIsBucket that was caused by the 60 fps conversion though, not because of the upscaling
It’s amazing how far Linus has come
He is the nerdy pew die pie from You tube...
@@velardechelo yes pew die pie from You tube
it's pizza time
@@iBlaze69 You tube from pew die pie
Yeah, he used to look so blurry.
The craziest thing about this video is that sick shoulder roll at 2:55.
Linus just summarized my experience as an image processing ML engineer in 12 minutes. Good job guys!
Does anyone else find the bits in the recent few videos, where Linus does the South Park impression of a Canadian, absolutely nostalgic and hilarious? Haven't seen any mention of that in the comments. It is so spot on as well, good job.
For interpolation the best thing imo is something called TVP. It's run by a single person but it produces results far better than the Chronos models.
Interesting. Will have to look into it.
"Corporate needs you to find the difference between the two pictures "
Me watching this on my phone's 480p display:
"They're the same picture "
You know what I would like to see, and it could be a great idea for a video: remake your first video almost shot for shot today. Would be fun to see Linus attempt to redo his first video, same script, maybe a newer cooler, parts laid out meticulously on the bench, etc. Maybe even wear a reproduction NCIX polo shirt, or an LTT take on it. Also, of course, use that same NCIX intro (or an LTT remake).
A handy tip for upscaling videos like that old one is to upscale it but then view it at the original size.
0:38 WAIT.. that exists? Hold on, pausing the video and going to search for it NOW (sorry Linus, your first video in 4K can wait...)
As a professional photographer, been using Denoise AI and Sharpen AI for *years.* Gigapixel is the real deal, or as close to it as you can get. Also, they offer a one-time license purchase, none of that subscription garbage. I'll happily pay for that.
[This comment was sponsored by Gigapixel]
@@SteveDice21 Not really, You have no idea how many unusable shots I've salvaged with Denoise and Sharpen AI, definitely a must have
While it's true that it's a one-time license purchase, after a certain point they stop updating your software until you pay to renew the license.
@@AdamIverson That's okay with me, honestly. You still get lifetime use, and if at any point you'd like to get new features and functionality you can pay for that again.
@@captainvyom463 that's what I'm saying. I thought it was bullshit at first too, but it's not really the same until you use it on one of your own images. Once you see the effect it can have on your own shots, damn is it impressive. Nothing can ever be as good as getting it right in camera, but nobody is perfect and this helps bridge the gap a bit.
Linus, I know you'll probably never see this, but you don't know how happy watching your videos makes me. I went through a pretty rough breakup back during COVID. I was working from home and living far away from my family and so I dived into building a gaming PC and somehow managed to stumble across your channel before I had made any major purchases. Watching your videos gave me a huge step up to have a very positive experience when piecing together my rig, and now I greatly enjoy your channel for information and entertainment. It's one of very few comfort channels that I'll often go back and watch old clips now that I'm in a much better place in life.
Thank you for what you do!
I’ve always been curious of what horrors you would find if you used something like Topaz or DLSS on games from the 3rd, 4th, or 5th generation and upscaled it to 4K. It would be morbidly interesting to see what the AI would create and mutilate with such a specifically low resolution to work with.
I guess it would be horrible because the models are trained with more natural frames. If someone trained an AI with sharper and more pixelated images, could it look good?
Linus doing the South Park Canadian flappy mouth thing made me laugh too hard, guy...
Any chance of a 2022 Scrapyard Wars thing? (of course it'll all be about GPUs...).
If you boost footage from years ago into higher quality, it could really mess up the footage.
Gigapixel AI (for still images) has the same limitations, but when it works, the results are incredible.
As a computer vision scientist, I think it would be interesting for a lot of people if you did a video on how video super-resolution/"enhancement" (including DLSS, for example) is actually done with machine learning. The basic concepts are actually not that complicated to explain (making a good implementation is of course a whole other beast), and I imagine this would be a good fit for much of this audience.
Linus is always 4k in my mind
4:55 Also, when it comes to interlacing, you don't see both fields at the same time. They don't get stitched together like you showed in the video; it's one field at a time.
He's taking about how the software stitches both fields into a progressive upscale.
@@sietherine I would double one field.
@@SuperCartoonist That will do the wrong thing if there is motion on-screen. If there is no motion on screen, it would also halve the resolution.
Though Linus kind of glossed over the whole time difference between fields.
VEAI has several different methods for dealing with interlaced videos. Some are for properly encoded interlaced sources, others are there to deal with an interlaced source that was improperly re-encoded to a progressive video. I've had very mixed results. It really just depends on how craptastically your source content was dealt with in the past.
@@jamesphillips2285 Some of the VEAI models for dealing with interlaced content will, in fact double the frame rate to deal with this time difference you're talking about.
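The time-offset problem this thread is describing is easy to see in miniature: split a frame into even and odd scanlines, sample each field at a different moment of a moving object, and a naive "weave" back together produces the classic combing. A toy sketch (rows are strings, '#' marks the moving object; this illustrates the concept only, not how any particular deinterlacer works):

```python
def make_frame(col, width=8, height=6):
    """A frame whose 'object' is a vertical bar at the given column."""
    return ["." * col + "#" + "." * (width - col - 1) for _ in range(height)]

def fields(frame):
    """Even lines and odd lines, as an interlaced signal carries them."""
    return frame[0::2], frame[1::2]

# Field A sampled at t0 (object at column 2), field B at t1 (column 4).
even_t0, _ = fields(make_frame(2))
_, odd_t1 = fields(make_frame(4))

# Naive weave: interleave fields that were captured at different times.
woven = []
for e, o in zip(even_t0, odd_t1):
    woven += [e, o]

for row in woven:
    print(row)  # alternating rows place the bar at columns 2 and 4: combing
```

This is also why simply doubling one field (a "bob") trades the combing away for half the vertical resolution, and why the better VEAI models output double the frame rate instead, one frame per field.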
I've been using VEAI for about a year or so, and it's hit and miss when it comes to upscaling. I mostly do music videos, but I've done a full movie with pretty great results. It's usually best to get rid of artefacts before you upscale, or they will be upscaled as well. Frame-rate upscaling is still in its early phases; it will often add artefacts if you pay close attention. When it works well, though, it's awesome.
Looking at Linus's first video, I realised I have been watching him for almost half of my life. I built my only gaming rig when I started uni, using his advice, and about 13 years later, married, 2 kids, the whole shebang, I'm still watching Linus. The crazy part: apart from my family, there isn't anyone in my life right now that I've known for that long.
Topaz is awesome! I have been using it for about a year. It has been getting many updates that improve it each time. Sadly my GPU is kinda limited when upscaling videos above 720p. But in the future, it might do the trick.
I have the Topaz upscaler for photos. The results, when you tweak it, saved a ton of images from a HDD crash where I had lower res thumbnails. Pretty good stuff.
I use it for digital DND tokens and upscaling certain assets in a VTT builder when necessary. Also used it to upscale ultrawide or 1080p super-ultrawide to 5120x1440. Magnificent results, and Topaz keeps getting better
YES! Very cool this finally got coverage on your channel! I wrote to Topaz (presumably before development began) asking them to make this software, and I got to privately test the alpha version of the AI. It was an online-only service at that time, so no front-end GUI running locally. Cool to see how far it's come since then! Gigapixel at the time was OK, but slow and not temporally coherent, which was the main thing that prompted me to write to them after seeing the results of the older FMA 2003 upscale attempt
Also, I don't know every exact detail the coming D-VHS video will cover, but during your research and writing, would you take a quick look at LD-decode, even if you don't mention it in the video? It's a really interesting project, one that I haven't undertaken myself, but VERY intriguing IMO. It's for VHS too, not just LaserDisc.
cheers :)
It's kind of freaky to see how far Linus (and his team as a whole) have come over the years. Linus's entire personality has changed completely since the start of the channel.
Too bad he still doesn't understand computers. That LIEnus video was a real bombshell.
Lol I wonder what April 1st video gag they'll go with this year?
@@XenZenSen wdym he doesn't understand computers?
@@jorisramanauskas780 watch their april fools video from 2020
Is this the original comment that was copied by Nub?
Copied comment?
Nice! I used topaz at the start of the pandemic to upscale family vhs tapes. I stored them on Plex so the family can watch them whenever they please.
yo that is a nice project
There's a YouTube channel called Two Minute Papers that showcases quite a variety of video upscaling. I believe pattern recognition now does a whole lot better by recreating us from ultra-realistic 3D models and by treating the pixels like a mega-zoom lens for capturing fine details, like the James Webb Telescope, where every small movement becomes an enhancement as more time passes.
From my own tests, the Proteus model seems to give the best results, most of the time at least. Depends on the resolution.
Nothing beats frame-by-frame scaling and denoising by hand, but it seems OK for 240p. I guess denoising at the lower res and then retrying for 4K would have worked better.
From my experience it certainly is better. Recently dealt with garbage 480p and did a few passes to bring it up to "palatable": Denoise -> Chronos -> Artemis.
Next time, call The Corridor Crew. They're kings of Special Effects. Great work as usual LTT! Cheers from Brazil!
I want nothing more than Star Trek DS9 to finally get the upscaling that it deserves. I think this would actually make this feasible!
I'm pretty sure some fans have tried doing this with the software. I think it's called Project Defiant. Results are mixed, the early seasons don't look as good as the quality of the original DVD source wasn't great to begin with.
DS9 is my fave Trek of all. I approve of your comment. 🖖
There's a guy on the Topaz VEAI forum that details exactly how he went about doing it.
The source is quite dark, but as my favourite Trek show I really wish they would.
It would be a lot better to just scan the 35mm elements at 4K than trying to upscale the video masters.
Love this program. Used it to upscale LOTR to 4K, and Star Trek Voyager to 4K as well.
Given Linus' firetruck video was 35 GB, how big was LOTR after the upscale? Like 5 TB lol
Topaz labs is pretty amazing. I upscaled a movie of my grandparents from the 1930's and it did an excellent job. My relatives were all impressed and amazed
Wait. The firetruck video was randomly being recommended to me not too long ago. I wonder if, when a popular creator engages with one of their old videos a lot, it gets promoted.
*_Same._*
Hey Verlis hope you are doing well.
@@justbubble9766 Persistent dislike botting bringing well down to neutral
@@Verlisify Dislikes don't even exist anymore
@@Verlisify Sorry to hear that sir
I can imagine that techs such as this could likely be implemented as part of video compression algorithms in the future, if the algorithm can be optimised to run in real time, or if hardware improved to such extent that AI upscaling can be done in real time.
So rather than trying to upscale a low res video, it'll be more about being able to compress the video by reducing the image to lower resolution, and then upscaling the video while it's being viewed, presumably it could mean that it can retain higher quality image than with current compression technologies for smaller size.
Actually, how this will work is that an encoder AI and a decoder AI will be trained together. The encoder produces a stream of data, and the decoder then tries to recreate the original using that data. Joint training lets the encoder learn which data the decoder is able to guess at, while the decoder learns to guess from the encoder's data. They will also be limited in the amount of time they can spend per frame on a given minimum hardware. However, they can be trained to do a better job given better hardware and more time using the same data. Some trade-off of a bit of extra data for better results on better hardware could also be made.
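The joint encoder/decoder feedback loop can be sketched at toy scale: fix the encoder to "transmit every second sample", give the decoder a single learnable parameter (how it blends neighbours when guessing missing samples), and pick the parameter that minimizes reconstruction error on training data. Real learned codecs put deep networks on both sides and train them with gradients, but the principle is the same:

```python
def encode(x):
    """Toy encoder: transmit only every second sample."""
    return x[0::2]

def decode(code, w):
    """Toy decoder: guessed samples blend left/right neighbours by weight w."""
    out = []
    for i in range(len(code) - 1):
        out.append(code[i])
        out.append((1 - w) * code[i] + w * code[i + 1])
    out.append(code[-1])
    return out

def loss(signal, w):
    """Squared reconstruction error after an encode/decode round trip."""
    rec = decode(encode(signal), w)
    return sum((a - b) ** 2 for a, b in zip(rec, signal))

train = [0, 1, 2, 3, 4, 5, 6, 7, 8]   # smooth training data (a ramp)
# "Training": brute-force search over the decoder's one parameter.
best_w = min((w / 100 for w in range(101)), key=lambda w: loss(train, w))
print(best_w)
```

On ramp-like data the search settles on w = 0.5 (plain averaging), and the encoder can safely drop samples it knows the decoder will guess; that mutual adaptation is the core of the idea described above.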
Love that this was done on a laptop. I am way more excited about this knowing it's done on average customer products not a monster computer, they do other projects on! Thank you for making this video!
With a starting price of $4k+ it's not an average customer product.
ya a laptop with a 3070 in it. that's not consumer friendly in the slightest considering how hard it currently is to even get that GPU anywhere near MSRP atm.
@@zeroa69 dude laptops with rtx 3070 are £989. Not utterly expensive.
I love this software. I used it to upscale an old movie that was only released on DVD (480p) to 1080p, and the results were awesome. It's a movie that my family loves, and I shared the results with them during a movie night. The overall consensus was a lot of praise from the family. I'd highly recommend a card with more VRAM if you can; I've seen the application use north of 12 gigs of VRAM when I really push it.
What software?
I wish they had spoken more about why remasters like music videos can look so good: some were shot on film, which gives the AI more info to work with than Linus' compressed digital video or any TV signals of the time.
When it's shot on film you don't need AI. You just get the original reels and scan them at a higher resolution. See: Wham's Last Christmas vs Smashmouth's All Star.
This guy has a good long form video about it:
th-cam.com/video/rVpABCxiDaU/w-d-xo.html
@@smith7602 But technically film does have a resolution: the silver grains. Lower resolution means more light sensitivity.
@@smith7602 That's missing the point entirely. Upscaling isn't remastering at all. They're two different things. If you have the source you make a new master from that source. If you have a compressed/digital version you upscale it. Or in other words, if you upscale the source then it's no longer the source, it's an upscale.
Imagine being sponsored by two companies in one video.
I'd actually like to see you guys attempt to upscale the footage using a deepfake. That'd be really interesting.
What? Deepfakes work on faces. Unless you're saying to deepfake his old face with the newer him for higher quality?
@@vexnity460 nah, i think he's saying deepfake an old video (which might not work at all, i have no idea) then upscale it
@@vexnity460 Linus mentioned doing it in the video. Any and all interpretations of what that could mean would be cool to see attempted.
In a way, what this software is doing is 'deepfaking' a higher-resolution video. At least, they both use similar machine learning (I refuse to call it AI) techniques.
@@benphillips2947 ok
You guys should make a swacket that only has a large LTT tag on the seam that you can notice, for a minimalist look and for people who don't like wearing clothes with graphics on them.
I know GMM does a minimalist line and the swacket idea is great.
My bet is that in the future, remastering this video will wind up looking pretty darn good. I would think logically that instead of doing 1 frame at a time, it really needs to take the entire video as aggregate and process things into objects so that it identifies your lips and combines all clips with your lips and your eyes and your shirt, etc.
Can anyone tell me how the frame rate conversion is better than optical flow ?
it jus is
8:25
In theory you could run multiple AI upscaler passes and models that work better on different parts of a scene and rotoscope them together. Would be much more time consuming, but for a bigger budget project could probably reach a much higher quality than a single model applied to an entire scene.
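A rough numpy illustration of that compositing idea (the "models" here are just stand-in filters of my own, not real upscalers): run two different passes over the same frame, then rotoscope them together with a per-pixel mask so each region uses whichever pass suits it.

```python
import numpy as np

# a toy 64x64 grayscale "frame" (gradient ramp)
frame = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)

sharp = np.clip(frame * 1.2, 0, 1)           # stand-in for a detail-oriented model
smooth = np.clip(frame * 0.9 + 0.05, 0, 1)   # stand-in for a denoising model

# rotoscope mask: right half of the scene gets the "sharp" model
mask = np.zeros_like(frame)
mask[:, 32:] = 1.0

# composite the two passes region by region
composite = mask * sharp + (1 - mask) * smooth

assert composite.shape == frame.shape
assert np.allclose(composite[:, :32], smooth[:, :32])
assert np.allclose(composite[:, 32:], sharp[:, 32:])
```

In practice the mask would come from hand rotoscoping or a segmentation pass per scene, which is where the extra time goes.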
0:28 the neatest rickroll i've ever seen. Wow, you just can't see that coming, so sneaky!
The improvement is unreal! It just shows how far you have come as a youtuber. I mean, it's just baffling. Great job LTT!
I've been playing around with Video Enhanced AI privately for many months now.
It's not magic but you can achieve amazing results for many use cases. Here are some of my best practices:
Scaling above 2x is not feasible for most cases. This behavior also scales with the initial resolution. So the 320x200 video Linus used first simply does not have enough information to scale it up by much. For low-res videos you might have to stick to 1.5x scaling only, or even less.
If you give it more information to work with, the results will be much better. So 720p to 1080p should be feasible, and 1080p to 1440p works very nicely.
I don't have any experience with 4K since I have no 4K displays at home, sorry.
I have also modernized a lot of old anime files. It cleans up old block artifacts quite nicely, and you can upscale 480p to 720p (1.5x) very well, for example.
Use Artemis Medium for animation, or Artemis Low if the quality is really bad.
Also, I am using a CRF of 13 and then re-compress the files with Handbrake, since the video codec options in VEAI are so limited. There is really no need to use CRF 0, or anything below 10 for that matter; it just makes the files much bigger. Throw away the audio, recompress the video nicely with Handbrake, and put everything back together with the MKV tools.
It's a bit more work, but the results are much nicer. :)
What Topaz really has to improve is cropping and scaling to specific values. Old videos from the analog era have black borders (overscan) on the left and right sides. So getting such a video to a specific, clean horizontal resolution is a pain in the ass. :(
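The overscan cropping mentioned above can be automated outside VEAI. This is a hedged sketch of my own (not a Topaz feature): treat the frame as a luma array, find the columns whose brightest pixel is still near black, and crop them off before scaling.

```python
import numpy as np

def crop_overscan(frame, threshold=0.04):
    """frame: 2-D luma array in [0, 1]. Removes near-black left/right borders."""
    col_max = frame.max(axis=0)                 # brightest pixel in each column
    keep = np.flatnonzero(col_max > threshold)  # columns with real picture
    if keep.size == 0:
        return frame                            # all black, nothing to crop
    return frame[:, keep[0]:keep[-1] + 1]

# toy 240x320 frame with 8-pixel black pillarbox bars on each side
frame = np.zeros((240, 320))
frame[:, 8:312] = 0.5

assert crop_overscan(frame).shape == (240, 304)
```

Real analog captures have noisy (not perfectly black) borders, so in practice you'd tune the threshold per source, or sample several frames and take the common crop.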
We came to almost the same conclusion on everything, and I can attest that 1080p, hell even 720p depending on the source, to 4K can be really, really nice. I use CRF 12 myself and almost only use Artemis High Quality; it doesn't smooth things out so much, and with pristine source files at their native resolution it gives great results!
A trick I found to work well is to upscale to 4K and then use a decent-quality downscale to get to your desired resolution. It seems to give better results when upscaling DVDs than just upscaling directly to 1080p.
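The resampling chain behind that trick can be sketched in plain numpy (my own stand-ins: linear interpolation replaces the AI pass to 4K, and a box filter does the downscale). The point is just that "4x up, then halve" lands at the same 2x size as a direct upscale while going through a finer intermediate grid.

```python
import numpy as np

def upscale_linear(img, factor):
    """Bilinear-style upscale of a 2-D array by an integer factor."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    rows = np.stack([np.interp(xs, np.arange(w), r) for r in img])      # widen
    return np.stack([np.interp(ys, np.arange(h), c) for c in rows.T]).T # heighten

def downscale_box(img, factor):
    """Box-filter downscale: average each factor x factor block."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

src = np.arange(16.0).reshape(4, 4)
via_4x = downscale_box(upscale_linear(src, 4), 2)  # 4x up, then /2 -> 8x8
direct = upscale_linear(src, 2)                    # straight 2x -> 8x8

assert via_4x.shape == direct.shape == (8, 8)
assert not np.allclose(via_4x, direct)  # the two routes sample differently
```

Whether the detour actually looks better depends on the upscaler; with a real AI model the 4K pass invents detail that the box downscale then averages into cleaner edges.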
We use Topaz tools all the time at our print shop when clients send us photos from their phones (or even from their DSLRs). Pretty much everything that comes directly from a client, rather than in-house photography, is going through one or more Topaz tools before printing.
"Interlacing was a technique..." Oh it very much still is, almost every TV channel still broadcasts in 1080i and 480i, it is awful.
You need to upscale the Frame Rate first, before upscaling the resolution
When I was writing Avisynth scripts to upscale 288p VCDs, it worked best by denoising, doubling resolution, increasing framerate, and doubling resolution again.
It's faster processing time to do it that way for sure, but I think it's a toss up of which one you do first for higher quality.
Increasing frame rate is a lot more difficult, so doing that first is way faster
Overall, this is by far a faster method.
For those who don't know: you can also get upscaled 60fps or even 144fps in real time. Use SVP to increase the frame rate in real time (artifacts get masked with frame blending; it looks good and consistent). You can also use the madVR renderer to upscale the image in real time with pretty damn good quality. Even his first clip will already look a whole lot better played back that way!
10:35 those jitters on the Ladder!!
It’s worth pointing out that uncompressed footage will always upscale better than compressed footage. 👍🏼
Young Linus looks like someone that would recommend me a Celeron for gaming.
@@Kigoz4Life better than Peter Peter
@@Kigoz4Life Sebastian is his surname. His full name is Linus Gabriel Sebastian.
Your comment got stolen by a verified bot. Let's report the comment stealer.
Least of my concerns. This isn't my social security or something important being stolen so it's no big deal, just infuriating. Lol
This is a copy bot
It has been quite a while since something literally made me spit whatever I was drinking out of laughter. The south park bit at 5:27 is the reason I am cleaning the coffee off my screen. Well done!
I used the Topaz software to upscale old TV shows from 480i to 4K, and I found that adding back the film grain really sold it. It masked the weird smoothness of surfaces that the software likes to produce. It was nice, but not worth buying the software after the free trial and hammering my GPU for ten times the runtime of each episode. It also didn't let me start multiple jobs at once to use different GPUs at the same time, but maybe that feature just wasn't available in the trial version.
A few more years and I expect this kind of software will really hit its stride. Once quick-and-dirty versions outperform the built-in upscalers of TVs and monitors, this could really cut down on bandwidth use for streaming services, for example.
While you can't set more than one job at a time inside VEAI itself, you can actually just run multiple instances of VEAI simultaneously and can just set each instance to run a different job on a different GPU (or even multiple jobs on the same GPU, VEAI doesn't make optimal use out of a lot of higher end hardware but if one job is only using half or less a GPU's processing power a second instance on the same card works great).
I've had no complaints with SVP (Smooth Video Project) so far, I've had only one really bad scene in hours of watching with v3, and that runs in real time for motion interpolation. Only problem is you're going to need a beefy CPU (or GPU if compatible) if you want to do both at once.
Alternatively, everyone gets faster internet and we INCREASE the bandwidth of streaming services!!
I'm just using madVR with a Lanczos upscale, deinterlacing, and 450% zoom.
The only real-world use I've found so far for multiple upscaling "AIs", including the one from the video, is animated/drawn art and videos. I can recommend Anime4K for old DVD-only releases like Drawn Together that look horrible on any modern TV and have no HD release. It takes a week or two even on beefy hardware, but the result is nice.
Media Player Classic does come with a pretty decent upscaler for animated content and can handle 4k upscaling at full speed. It's very likely not quite as sharp as anime4k but that won't be too noticeable if you're watching it from a couch, and it doesn't need a week of preparation.
And you can use Flowframes for interpolating your footage.
Upscaling animation and changing it to 60fps is really awful. Animation isn't like video games; its entire art style is made to run at a certain frame rate. The drawing techniques get entirely screwed up when you change the fps.
@@Jarekthegamingdragon I have seen that. It looks god-awful when you interpolate animation to 60 fps.
It usually ends up ruining the animations in my experience. It ruins the timing of everything
To be fair, the Astley video looks like it was shot on film, which has much higher resolution than video.
Now I know how Linus would look like as an Alternate from the Mandela County!
Linus looks so cute in that hoodie
Bro
I'd use FFMPEG to get an image sequence and just process each frame using the least bad upscaler in Upscayl, then recombine them again with FFMPEG. and with today's processing power it wouldn't take days to do it.
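A hedged sketch of that workflow as a Python helper that just builds the three command lines (the FFmpeg syntax is standard; the `upscayl-cli` name and its flags are placeholders for whatever per-frame upscaler you actually run — nothing here is executed):

```python
def build_commands(src, fps=30, workdir="frames"):
    """Return the three shell commands for the split/upscale/rejoin workflow."""
    # 1. explode the video into a numbered PNG sequence
    extract = ["ffmpeg", "-i", src, f"{workdir}/%06d.png"]
    # 2. placeholder per-frame upscaler invocation (tool name/flags assumed)
    upscale = ["upscayl-cli", "--input", workdir, "--output", f"{workdir}_up"]
    # 3. reassemble the upscaled frames into a video
    rebuild = ["ffmpeg", "-framerate", str(fps), "-i", f"{workdir}_up/%06d.png",
               "-c:v", "libx264", "-crf", "18", "-pix_fmt", "yuv420p", "out.mp4"]
    return [extract, upscale, rebuild]

cmds = build_commands("old_clip.mp4")
assert cmds[0][0] == "ffmpeg" and cmds[2][0] == "ffmpeg"
```

One caveat: going frame-by-frame loses temporal consistency, so image upscalers can flicker between frames in a way video-aware models avoid.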
2:47 😮😮
(turn on subtitles)
LMG's English subtitles: "My mom is like over here!"
I've been using VEAI for a couple of months and WOW, I'm left speechless. I didn't think you could screw up a video about it this much. Even ten minutes of fiddling with the settings would have produced much better results. I can't believe a channel like this managed to pick the worst possible settings to show off a product. I mean, seriously? I know what this software is capable of, and I couldn't reproduce results this bad if I tried. I'm really disappointed in you guys. Hope you'll make an update video on this while actually trying, because it's a great piece of software that more people should be aware of.
I'd love to see what you could do with the same source file, is it still on his TH-cam video list?
I would have appreciated seeing this in action with the original footage, without TH-cam's processing.
Also, what I find funny about the "Never Gonna Give You Up" video is that it was shot on film. Nowadays a retransfer of the old film reel might look even better.
Eventually, the kind of AI tools used in Get Back will be available to the public for video. There, they fed in better-quality audio and it picked out the individual voices and sounds onto separate tracks and cleaned them all up. It could do something similar for video. If you feed in hours of footage of Linus, it can learn what the low-res version of his eyes translates to in high res. Basically, deepfaking (or as they are starting to call it, deep restoration). It'll be a little while before that's available, and a long while before it's affordable for home use. But it is coming.
no need, GFPGAN can handle that problem already.
So there I was. Minding my own black arse business watching some LTT. When all of a sudden Linus hits me with the south Park Canadian impersonation and with Linus being Canadian himself it was murder on my humor gland. I cackled and wheezed myself to near death and damaged my laugh box. That was far funnier than it had any legal right to be. 😂
That was hilarious
I don't care about him being Canadian. I laughed my ass off at the "My arms look like they're from Half-Life 1" part, 'cos I've lived through those days of blocky polygon graphics and knew exactly what he meant. You gotta be an old-school gamer to enjoy a joke like that.
@@finlandjourney6065 I was 11 when HL1 came out so I know , grew up with thoes graphics
@@mikcnmvedmsfonoteka bruh... You should have grown up on the zx spectrum like me. Quiiiiite a big difference these days xD
Linus, that’s why you should always shoot in a raw video format. You get all the camera information, no compression, and metadata.
Having worked extensively with Topaz Labs software, I can tell you from experience that upscaling low-resolution images is quite tricky. In my own projects, I found that for still images, I had to combine the upscaled versions with hand-painted additions that would correct the strange decisions their AI made. With video, I had to really mess around with the settings to produce something acceptable. This typically meant dialing everything down and forsaking finer detail in exchange for accuracy. I did some video restoration in combination with another AI-powered tool that added color to black-and-white film reels where this worked well. I'm excited to see how this technology will evolve over time.
I can't believe I've been on your channel long enough to remember the firetruck video when it was first published. Jeesh, it's been a while
I've been using this program for years! Love it, and it's always getting better!
1:06 did anyone else hear the android notification 💀💀💀
Yeah, Topaz VE is amazing. I was filming some interviews for my company last fall, and it wasn't until the editing stage that I noticed I had accidentally shot all the footage at 640 by 480... Luckily, I was able to upscale everything with VE, and I even wondered if I should shoot everything this way, saving space on my memory card and only upscaling the footage I need later :)