Dude, I need just a smidge of your motivation. I swear every time I get on YouTube you've uploaded a new, super high-quality and educational video!!! Keep it up 💯
💯
Hahahah, I hype the crap out of myself before I start recording 😂 life is good though with all this tech so it tends to gee me up even more!
3:21 I had it paused, walked away to grab a beer. Sat back down, pressed play, nearly spat it out laughing... Awesome, Nicholas
Hahahaha, I watched it back and I was like wtf am I doing 😂
Hi there, I have an error, it says: CUDA out of memory. Tried to allocate 4.69 GiB (GPU 0; 8.00 GiB total capacity; 6.23 GiB already allocated; 0 bytes free; 6.53 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF. Do you have any idea? Do I need to install a virtual environment? Thanks
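If it helps: the error message itself points at the knob to try. A minimal sketch, assuming you can set the environment variable before torch is imported; the 128 MB split size is an example value, not a tuned recommendation:

```python
# Hedged sketch: configure PyTorch's CUDA caching allocator before importing
# torch, as the error message suggests. 128 is an example value, not tuned.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # the allocator reads the variable when CUDA is initialized
```

If that doesn't free enough memory, the more reliable fix is a smaller input: downscale the image first, or process it in tiles (there's a tiling sketch further down the thread).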
Super awesome, as always thank you so much for this, Nick! Have been waiting for this one! :)
This is really, really useful in so many situations. One of the main examples I often see is in games, usually in the form of Nvidia DLSS or AMD's FidelityFX, but it has many other applications, too. For example, digitizing old books, posters, art, etc. Scanners used for this are already high-res, but I can see this being very helpful; if letters in an old book are hard to read/decipher, I guess every bit of resolution counts. And these tools are doing a great job at it.
Together with methods to DeOldify/colorize images, it is so amazing what we can do nowadays. As I mentioned before, I find it so fascinating and so wonderful that it ultimately is so easy to (mostly) reconstruct information that was previously completely lost to us. All those old black&white images or video footage, for example. It sounds weird but I think it helps realize that the past wasn't b&w or sepia and 360p, if you know what I mean. In many ways, for them the world looked the same as it does to us today, and they used colors to convey information just as we do today, and saw their environment in many ways with the same eyes as us. I love the fact that this information, their messages, are not lost after all.
Agreed, it's ridiculously cool what's possible with this. DeOldify tut coming this weekend btw! Was planning on doing DAIN as well and trying to stack them all together!
@@NicholasRenotte Sounds super awesome, looking forward to that! :)
that's really damn long
Congrats on hitting 100k subscribers earlier this morning!
You are my hero! The video explains the fundamentals and the tutorial very well. Thank you so much!
why not downscale one image before applying the AI upscale, then compare?
that's basically how they trained it. The problem is not the complexity of the task but, as with most ML projects, the enormous amount of data and processing power needed to train it
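(To make that concrete, here's a minimal sketch of how such training pairs are typically produced; the 4x factor, bicubic filter, and folder names are assumptions based on how ESRGAN-style models are commonly trained, not something from the video:)

```python
# Hedged sketch: build low-res/high-res training pairs by downscaling,
# the standard recipe for ESRGAN-style super-resolution training data.
from pathlib import Path
from PIL import Image

SCALE = 4  # assumed upscale factor; the common pretrained ESRGAN models are 4x

Path("LR").mkdir(exist_ok=True)
for hr_path in Path("HR").glob("*.png"):          # "HR"/"LR" folder names assumed
    hr = Image.open(hr_path).convert("RGB")
    lr = hr.resize((hr.width // SCALE, hr.height // SCALE), Image.BICUBIC)
    lr.save(Path("LR") / hr_path.name)
```

The network then learns to invert that degradation, which is also why it can struggle on real-world blur that doesn't look like bicubic downscaling.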
Thank you so much! I needed it in my project, and now I know how to do it!
I don't have a GPU, but when I run with CPU it shows a "Torch not compiled with CUDA enabled" error, and if I install with CUDA it shows a "no driver found" error. What should I do?
I've come to the conclusion (being a bit cynical) that the best results are obtained by taking a good photo, reducing it in size, adding a blur, then putting it back through these (various) engines to obtain what you started with. On the other hand, with a genuinely bad original image, for all the personal photos I've tried, of various quality and size, the results are worse (much worse) than using the good old Paint Shop 5 built-in resizer (which presumably interpolates, and does so very well). Or failing that, just using a sharpen option.
hi, it works, but when I try to upscale some bigger images I get this error:
RuntimeError: CUDA out of memory. Tried to allocate 4.57 GiB (GPU 0; 8.00 GiB total capacity; 4.78 GiB already allocated; 1.49 GiB free; 4.78 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
but I definitely have enough VRAM!! can someone help me please?
yeah can someone help please? Mr. Nicholas?
Thanks Nick! Your videos are absolutely awesome, this one is particularly great. Keep up the good work mate!
Got the notification and here I am
Ayoooo, thanks dude!
I typed "git clone" but received a message from the command prompt (running as an admin) that "git" is not a valid command.
Edit: I ended up just downloading the .zip and extracted it anyway, just curious why I couldn't use "git" on my PC.
Edit 2: I get the same issue with "pip3", is there some sort of prerequisite for these commands to work?
Edit 3: OK, I'm making these edits in case someone else has the same issues. I was able to get "pip" to work after installing Python directly from their website AND doing a restart on the PC. It's downloading PyTorch now.
Last Edit: Awesome tutorial! Your methodical way of explaining things has helped me keep up with what you're saying and doing! I was able to get this working and it is really cool to see just how far we've come in terms of computing abilities.
Thanks so much for the tutorial, dude. You gained one more subscriber.
Unfortunately, ESRGAN can only handle small images. It will not process any image larger than 1K. So there's a clear limitation there. Good tutorial, Nick!
very good tutorial.
tried some old pictures of mine and some other stuff. results were always worse than upscaling in photo editing software.
would be nice to know how to tweak the results. they look as if you'd applied a sharpening filter to a picture 50 times in a row
WoW insane, I love your vids man
In college, but the notification came, so I'm watching this now on my college break 😂
Ayyyy, nice!
i tried it, it only works well on the sample image, it's just shit on any other image you throw at it!
Amazing as always 🔥🤩
Thanks a mill Meet!!
Now all those CSI episodes make sense
"ENHANCE" *images enhance magically to 16k*
Nicely explained, thank you very much
Lovely tutorials. Keep up the good work and thanks for your knowledge.
how can i customize the scaling that i want this program to do for me?
bro thanks thanks to you my task is done, big thank you
you are amazing
I am facing the following problem. My laptop does not have a GPU, so when installing PyTorch I chose CPU as the Compute Platform. But when I run test.py I get the following error: Torch not compiled with CUDA enabled. How do I fix it?
Man this was an absolutely awesome tutorial video, I just implemented it myself and the results were incredible. Thanks so much!
YESSSS, awesome work @chrisber!
What should I do now, bro?
Normally the zoom technique used is bilinear interpolation.
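(For reference, that classical bilinear zoom is a one-liner in OpenCV; a quick sketch for comparing it against the GAN output. The file paths are just examples:)

```python
# Hedged sketch: classical bilinear 4x upscale, useful as a baseline to
# compare against the ESRGAN result. Paths are illustrative only.
import cv2

img = cv2.imread("LR/baboon.png")
bilinear = cv2.resize(img, None, fx=4, fy=4, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("results/baboon_bilinear.png", bilinear)
```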
Amazing 🤩 man... As always you are the best...
HEYYYYY! Thanks so much @Irugal!
Great stuff. Thank you so much
Unfortunately it gives me: RuntimeError: Unable to find a valid cuDNN algorithm to run convolution
Ok
Edit: Thank you for the heart, sir
Thanks for posting this :)
Awesome, man! Thanks.
Anytime!
Hey, I got an error with the CPU version of torch. The error is "Torch not compiled with CUDA enabled". I have installed the same version of torch for GFPGAN, which works fine without CUDA; please confirm whether ESRGAN specifically demands CUDA.
I have the same problem :(
No CUDA no ESRGAN. Use GFPGAN instead.
Is there a cleanup-only option? The X4 .pth file leaves a checkered pattern all over the picture and does not clarify the details of real pictures. Great video.
Is there something for classification from a description of something?
This is a totally different area for me since I have never coded in my life! I was still able to follow along fairly well. Thank you.
Can you also teach us to train a custom dataset on ESRGAN?
Thanks man!
The thing works great, but only with icon-sized pictures; if I try an image larger than 500x500 it returns an out-of-memory error. I have a GTX 1650 4GB, is that not enough? I run DeepFaceLab nicely and a lot of games, so what am I missing here?
Great Sir
Thanks for the knowledge! One request: can you do a video on superpixel segmentation? It would be very useful.
Awesome. Does it work with videos as well?
since you did this video a year ago, would you say a year later that ESRGAN is still #1? If not, what would you recommend?
esrgan is 6 years old lmao it has been obsolete since long before this video was made
Hi Nicholas, Do you think it is scientifically meaningful to enlarge different types of images using SRGAN trained with a different dataset?
Your videos are awesome man, keep going :D
Thanks a mil @Ameen!
CPU => AssertionError: Torch not compiled with CUDA enabled.
Too bad 😅
just got the same error @Nicholas Renotte.. im also using cpu
Fixed it by setting the device to cpu in test.py 😃
i just figured it out; open the test.py script and change torch.device('cuda') to torch.device('cpu'), save, and run the command in the terminal again... BTW thanks @Nicholas
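For anyone hunting for the exact spot, a sketch of what the edited lines in ESRGAN's test.py look like (line positions and names may differ slightly in your copy of the repo):

```python
# Hedged sketch of the CPU edit in ESRGAN's test.py; the variable name
# follows the upstream script, but your copy may differ slightly.
import torch

# device = torch.device('cuda')  # original: requires an NVIDIA GPU + CUDA build
device = torch.device('cpu')     # changed: runs anywhere, just much slower

# nothing else needs to change; the model and tensors follow via .to(device)
```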
Very interesting video! Have you looked into SerpentAI for RL in complex games? I've seen a couple of Call of Duty examples on YT but no comprehensive tutorials anywhere
Omg, I was meant to study this afternoon, but now I'm hooked on the possibilities of this.
@@NicholasRenotte 😂I was attempting to make a bot for Minecraft but kept running into issues. Hoping you could make a tutorial for it at some point 😄
Awesome!
ma man u the one, really awesome.
great tut
Great project! But the upscaling amplifies JPEG compression artefacts, so it's better to use lossless formats instead, such as PNG. Maybe we need a model that was trained with those artefacts and "knows" how to deal with them!
no link to models?
Beautiful
What do you need to install before cloning the github?
When I try the git clone I get this error:
'git' is not recognized as an internal or external command, operable program or batch file.
I found that I can just download and extract the GitHub repo instead of cloning it from the command prompt.
But for the next step I also get an error:
'pip' is not recognized as an internal or external command,
operable program or batch file.
The git command and its clone subcommand are only available if you have installed Git on your machine. The same goes for pip; it comes with Python. Also check your system environment variables in case you already installed the software. As I wrote to Lusifer Sofia: I can offer you free help over Discord.
Why does my cmd not recognise the git command when cloning?
If I have to choose between an RTX 3060 12 GB and an RTX 3060 Ti 8 GB, which one should I go for? I don't have a very big budget. What does your experience say?
Both are good; the Ti will be faster, but the non-Ti will be able to handle more data per batch. Check this out: gpu.userbenchmark.com/Compare/Nvidia-RTX-3060-Ti-vs-Nvidia-RTX-3060/4090vs4105
@@NicholasRenotte thank you 😊
heavily recommend 12GB, I have 8GB and the batches allowed by the hardware are relatively small
@@Powercube7 thank you
Can you make a tutorial on how to build deep learning super-resolution from scratch? It would be much appreciated. Also, I'd love a full tutorial on GANs as well. 😂😁
Working on it! Pulling my hair out trying to get it to work atm.
@@NicholasRenotte Thank you 😀
hi, where is the model please? It's not available on Drive
Could you please explain how to train ESRGAN from scratch and how expensive it is?
Have you used the even more simplified "Real-ESRGAN-Sber.ipynb"? I am a noob and can't figure out how to set up the upload/download file location part; the rest of it is all automated.
Hi Nick! Could you please help me set up the software environment to run this ESRGAN? I have completed your Python series, which has helped me a lot to get started!! I currently have Anaconda installed on my PC and the model downloaded. Could you please tell me what other software I have to install to run this ESRGAN?
Hi! Nice tut. I got a question: if I try to process a bigger img, my GPU runs out of memory and the script crashes... is there a workaround?
same issue and I have 4090 24gb
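One workaround that's commonly used (not covered in the video, so treat this as a sketch): run the model over the image in tiles and stitch the outputs. Assuming the setup from the repo's test.py, where `model` is the loaded network, images are float32 BGR arrays in [0, 1], and the model upscales 4x; the tile size is a guess you'd tune to your VRAM, and without overlap you may see faint seams:

```python
# Hedged sketch: tile-based inference to avoid CUDA OOM on large images.
# Assumes `model` and `device` are set up as in ESRGAN's test.py.
import numpy as np
import torch

SCALE, TILE = 4, 256  # assumed model scale and a tile size that fits in VRAM

def upscale_tiled(img, model, device):
    """img: HxWx3 float32 BGR array in [0, 1], as prepared in test.py."""
    h, w = img.shape[:2]
    out = np.zeros((h * SCALE, w * SCALE, 3), dtype=np.float32)
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            tile = img[y:y + TILE, x:x + TILE]
            t = torch.from_numpy(np.transpose(tile, (2, 0, 1)).copy())[None].to(device)
            with torch.no_grad():
                sr = model(t).squeeze(0).float().cpu().clamp_(0, 1).numpy()
            out[y * SCALE:(y + tile.shape[0]) * SCALE,
                x * SCALE:(x + tile.shape[1]) * SCALE] = np.transpose(sr, (1, 2, 0))
    return out
```

Smaller TILE values trade speed for memory; adding a small overlap and blending the seams gives cleaner borders at the cost of extra compute.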
Can you also make a video on object tracking using DeepSORT?
Amazing video🔥
Agreed, that would be lit!
Thanks for this tuto, it helped me a lot, even if I can't really deal with Python (I'm a very, very beginner and my English is medium)... Anyway, it's working for me on Windows, but I still have a question: when I download the other models, they don't work. The files are in "pth" format; does that mean "pth" models work with Python only?
Also, do you know how to use different upscale sizes without Python, like x1, x2, x3? I tried, but it looks like it's not working (it works on the anime video model at x2 & x3, yes, but unfortunately not x1). Do you have any information about that? Thanks a lot in advance
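On the .pth question: the format itself isn't Python-only, but it is a PyTorch checkpoint (serialized weights), so whatever loads it has to understand PyTorch weights and rebuild the matching architecture. A minimal sketch of the test.py-style loading; the RRDBNet arguments shown are the common 4x ESRGAN configuration and the file name is just an example:

```python
# Hedged sketch: a .pth file holds serialized PyTorch weights (a state dict).
# RRDBNet and its arguments follow the usual 4x ESRGAN setup; treat as assumed.
import torch
import RRDBNet_arch as arch  # architecture module shipped with the ESRGAN repo

model = arch.RRDBNet(3, 3, 64, 23, gc=32)  # in/out channels, features, RRDB blocks
model.load_state_dict(torch.load("models/RRDB_ESRGAN_x4.pth"), strict=True)
model.eval()
```

Some GUI upscalers can load ESRGAN .pth models too, as long as they implement the same architecture, so you're not strictly tied to writing Python yourself.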
this is really amazing - I had one question: how does it compare with online tools that boast of turning low resolution into high resolution? And what advantage does the GAN have if we try to use tools like Photoshop to do the same task?
This particular model is not better than photoshop.
The new version of Photoshop seems to support neural network filters, including a super-resolution function, similar in principle to the video.
Great stuff, and in combination with DeOldify you may get better old pictures. BTW: I like the sample with the Red Bull ahead of the Mercedes 😁
HENKKK STOPPPP, I'm getting to comments late and we're coming off some 'not exactly great runs' this weekend. 😂😅
I'm a newbie and I was wondering how I can approach a problem where I have to assess the damage (tears, scratches, spots, etc.) in old photos.
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
My processor is an Intel i3. Will running this cause any problems?
You should take a 4K image, resize it to 640x480, upscale it with this AI model, and show us the result. We want to see the difference. It doesn't work as expected.
There's something backhanded about revealing, 13 minutes into a 21-minute video, that you're skipping a step and that we should watch a 28-minute video.
Can I modify the same code for satellite imagery?
Hey nick, how about this kind of project but for videos as well? I have some low res video I would love to enhance
Ya gotta get back to it! Should be easy enough to parse a bunch of images through and recompile as a vid @Czarlie!
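A rough sketch of that frame-by-frame idea with OpenCV; `upscale()` here is a hypothetical wrapper around the ESRGAN inference from test.py, and the 4x output size comes from the model:

```python
# Hedged sketch: split a video into frames, upscale each one, recombine.
# `upscale(frame)` is a hypothetical helper wrapping the ESRGAN inference.
import cv2

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
writer = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    sr = upscale(frame)  # assumed to return a 4x-larger BGR frame
    if writer is None:
        h, w = sr.shape[:2]
        writer = cv2.VideoWriter("output.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    writer.write(sr)

cap.release()
writer.release()  # note: audio is dropped; remux it afterwards if you need it
```

Expect it to be slow: a few seconds per frame adds up fast at 24-30 fps.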
Thanks champ !!
How can I do this with a specific JPEG base64 string? (So not a .jpg file but directly a base64 string image, e.g. "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAASABIAAD/4QBMRXhpZgAATU0AKgAAA......) It seems to go wrong when downscaling to 50x50 using a TensorFlow Lite model; the picture gets way worse instead of better...? Maybe you know a way forward?
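A hedged sketch of the first half of that: turning a data-URI base64 string into the kind of BGR array the script otherwise reads with cv2.imread (the header split assumes the "data:image/jpeg;base64," prefix shown above):

```python
# Hedged sketch: decode a "data:image/jpeg;base64,..." string into a BGR
# numpy array, the same kind of input test.py gets from cv2.imread.
import base64
import cv2
import numpy as np

def b64_to_image(data_uri):
    payload = data_uri.split(",", 1)[1]         # drop the "data:...;base64," header
    raw = base64.b64decode(payload)
    buf = np.frombuffer(raw, dtype=np.uint8)
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)  # HxWx3 BGR array, or None on failure

# Going back the other way: cv2.imencode(".jpg", img)[1] gives bytes you can
# base64-encode if the rest of your pipeline expects a string again.
```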
19:14 The first thing that came to mind was CIFAR-10 (32 by 32 px) or CelebA (64 by 64 px) 😄😋
Does it work with videos?
do we need a GPU for this?
Can this be used within WebAssembly?
My issue is most of these pre-trained models smooth out images losing their inherent, interesting and important details.
Is there a way to tune this model?
Hey Nick, can you please tell me how to run this on google colab?
Hi, it doesn't work without CUDA. The error message is AssertionError: Torch not compiled with CUDA enabled
Apparently you can set it to run on the CPU in test.py if you don't have CUDA installed.
If you have an NVIDIA card and an NVIDIA GeForce Experience account, you can use the same login on their developer portal to download CUDA.
Great video. Thank you. Followed your instructions to a T, but unfortunately it doesn't seem to work without a GPU. It gave me problems on line 2 first, importing mkl... then I managed to update/upgrade MKL, but then problems again on line 19 demanding CUDA/NVIDIA. Gave up for the moment. Great idea, but no time to explore more. Will wait until it's more easily accessible and works out of the box.
Use Google Colab instead, they have machines you could use
Got the same problem. You can take a look at the "test.py" file and change the "device" to "cpu". It works for me.
@@mostechroom9780 thank you!
@@lemauhieu3037 thank you!
@@mostechroom9780 Are there any tutorials on how to use the Real-ESRGAN Colab? .. I'm a complete beginner here and understand 0 things about code :')
Will it work similarly for low-resolution text images? Like traffic signs, house numbers, etc.
Doubt it will perform as well.
Sir, please make a video on TensorFlow DirectML
getting an error that the NVIDIA driver needs to be installed
how do I download and install it?
hey, I got this error. Do you know how to fix it?
Traceback (most recent call last):
  File "C:\Users\user\Desktop\AI tools\ESRGAN\test.py", line 17, in <module>
    model = model.to(device)
  File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\cuda\__init__.py", line 247, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from www.nvidia.com/Download/index.aspx
Where is the superes folder?
This video is HD resolution, but it looks quite crisp, I see what you have done 😁
Why are you upscaling images that are already fairly large?
Start with tiny images and showcase what it can (or can't) actually do.
Very helpful :))
should try it out on images of UFOs or the Loch Ness Monster :)
How do you do a film from the 1800s?
I don't have any GPU. I installed PyTorch for CPU, and it throws an error. Any help?
You need an NVIDIA GPU because of the CUDA support. You can use GFPGAN instead of ESRGAN and modify the code to run on the CPU. But be aware it's very slow in comparison.
It runs out of memory with a normal-sized image. I don't get it.
same
Can you make a video on image super-resolution using the SwinIR transformer?