fantastic tutorial cody! would love to see a follow up related to image optimization, as Vercel's Image optimization becomes prohibitively expensive very quickly!
Ayeeee you got it figured out. Great video my friend
Good job babe! Love ya!
WTF
@@MohamedElguarir it’s ok. I can say that to him, I’m his parole officer.
Thanks babe love you!
Literally what I was looking for! Thanks so much
Have you considered react form libraries?
Because I recall reading the docs, and they mentioned that custom forms cause unnecessary rerenders, so it's better to use a library that tackles that. Would love to hear your thoughts if you've looked into it and still chose this approach.
I wonder if this can affect the Vercel bill by sending the entire file to the api route. Instead, can we just generate a presigned url, send it to the client, and upload the file from there?
I have tried doing this and still vercel doesn’t let me upload larger files. I’ve been thinking of creating my own node server instead
@@joshuaroland9876 you will be serverless and you will be happy
@@joshuaroland9876 you can upload large files using a presigned url
Yes, it affects the bill. There's a better way: most data stores allow you to create a signed/presigned upload url, which you generate on your server and send back to the client, and then the client uploads the file to the bucket/R2 directly.
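The flow described above (server signs, client uploads) is usually done with `getSignedUrl` from `@aws-sdk/s3-request-presigner` pointed at your R2 endpoint. As a stdlib-only sketch of what that presigning actually produces (SigV4 query signing; all credential, endpoint, and bucket names below are placeholders):

```typescript
import { createHash, createHmac } from "node:crypto";
import { Buffer } from "node:buffer";

// Sketch of SigV4 query presigning for a PUT upload. R2 speaks the S3 API,
// so the same scheme works against an R2 endpoint with region "auto".
export function presignPutUrl(opts: {
  accessKeyId: string;
  secretAccessKey: string;
  endpoint: string; // e.g. "https://<account-id>.r2.cloudflarestorage.com"
  bucket: string;
  key: string;
  expiresSeconds: number;
}): string {
  const amzDate =
    new Date().toISOString().replace(/[-:]/g, "").slice(0, 15) + "Z"; // 20240102T030405Z
  const dateStamp = amzDate.slice(0, 8);
  const region = "auto";
  const scope = `${dateStamp}/${region}/s3/aws4_request`;
  const host = new URL(opts.endpoint).host;
  const path = `/${opts.bucket}/${encodeURIComponent(opts.key)}`;

  // Keys are already in sorted order, as SigV4's canonical query requires.
  const query = new URLSearchParams({
    "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
    "X-Amz-Credential": `${opts.accessKeyId}/${scope}`,
    "X-Amz-Date": amzDate,
    "X-Amz-Expires": String(opts.expiresSeconds),
    "X-Amz-SignedHeaders": "host",
  });

  // Canonical request; UNSIGNED-PAYLOAD because the body isn't known at sign time.
  const canonical = [
    "PUT",
    path,
    query.toString(),
    `host:${host}\n`,
    "host",
    "UNSIGNED-PAYLOAD",
  ].join("\n");

  const stringToSign = [
    "AWS4-HMAC-SHA256",
    amzDate,
    scope,
    createHash("sha256").update(canonical).digest("hex"),
  ].join("\n");

  // Derive the signing key: date -> region -> service -> "aws4_request".
  const hmac = (key: string | Buffer, data: string) =>
    createHmac("sha256", key).update(data).digest();
  const signingKey = hmac(
    hmac(hmac(hmac("AWS4" + opts.secretAccessKey, dateStamp), region), "s3"),
    "aws4_request"
  );
  const signature = createHmac("sha256", signingKey)
    .update(stringToSign)
    .digest("hex");

  query.set("X-Amz-Signature", signature);
  return `${opts.endpoint}${path}?${query.toString()}`;
}
```

The server returns this URL to the client, which then PUTs the file straight to the bucket, so the bytes never pass through your serverless function.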
I’m not deploying to vercel, but yes, you’d most definitely want a presigned post to your bucket directly to save on bandwidth.
Wouldn’t it be better to create presigned urls and let the user directly upload to the bucket?
Yes, I thought that as well until I learned anyone can ddos your private bucket and you’ll be charged a lot of money. This approach gives you more control by abstracting away the bucket, and it also prevents the scenario where someone uploads files but the second part of the request never goes through, causing orphaned files.
Interesting how you organize some parts of the application logic in this "use-cases" folder. Have you talked about that in any recent video?
It’s related to clean architecture or layered architecture. I think I have a few videos about it.
I come from express and got used to needing multer for any upload needs. Why don't you need it here?
What about timeouts and bigger files?
I think using presigned uploads is much better + you'll save bandwidth ==> saving you $$
Yes, on larger files you’d want a presigned url. The issue I have with presigned urls is that it exposes your bucket name and anyone can ddos your bucket and run up a large bill. Also with R2 there isn’t a way to limit file size, so technically someone could just keep uploading huge files to your bucket.
@@WebDevCody why, does your access key get exposed too?
I am curious why you opted for this storage service. As a starter kit might it be a more simple and upload efficient option to use something like UploadThing? The only options I’m familiar with are Upload Thing, Cloudinary, and the Convex storage so I’m truly interested in your reasoning and thought process.
For storage, you won’t get as close to “free” as with S3 or R2. Almost any production application I know uses S3 for storing files. Convex even uses S3 behind the scenes; they just abstract it away. I’m pretty sure UploadThing also uses S3.
@@WebDevCody Yes, I figured S3 was the case. Most of my projects are very small, either test cases or hobby projects so the simplicity of upload thing or convex is easy, but at large scale would probably go directly to S3 once I was comfortable with it.
Great video, thank you for sharing
You did the ref stuff to reset the file input after a successful upload because there's no way to reset it other than clearing its value through the ref.
I can’t figure out a way to reset the file input without a ref 🤷♂️
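For anyone curious, a minimal sketch of that reset. A file input's `FileList` is read-only, so the practical reset is clearing `value` on the element itself; in React you'd hold the element in a `useRef` and clear it after the upload resolves. The helper and type names here are made up, and the input is modeled with a structural type so the idea works without a DOM:

```typescript
// Minimal stand-in for the relevant part of HTMLInputElement.
type FileInputLike = { value: string };

// Clear the selected file(s); browsers drop the FileList when value is "".
// In a React component you'd call this with fileInputRef.current.
export function resetFileInput(input: FileInputLike | null): void {
  if (!input) return; // the ref may not be attached yet
  input.value = "";
}
```

In the component, the success branch of the upload handler would call `resetFileInput(fileInputRef.current)`.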
Thank you so much for this video. I tried to do this by uploading multiple images and videos but somehow I can’t do that when I deploy to production because of vercel’s body limit size. Even after trying presignedURLs, I still get the same error. From your experience, do you think creating an external nodejs server for this would solve the issue?
A presigned url shouldn’t have an upload limit, or if it does it’s a few GB. I’d try that again.
@@WebDevCody thank you so much for the reply. I’ll try implementing it again tonight. Maybe I was doing it wrong before. Had to follow your image gallery video with next-cloudinary as an alternative. Your videos really help me 🤩🤩
@@joshuaroland9876 just make sure that you upload the files to the presigned url from the client. You can do it 🤛.
@@pawixu thank you. I’ll return here once I get it sorted🦾
It would be nice to add functionality to crop the image to a certain size.
Sorry, but I think this is not a good way. Using presigned URLs is the best way. What if the size of the image is too big?
Thank you!!!
A better way would be to use a signed url to do a client-side upload. More secure and less overhead, but you do need an endpoint to sign the url.
R2 doesn’t support a max file size from what I’ve seen, so technically any user could upload GBs of data directly to your bucket.
@ The maximum object size is around 5TB. The maximum upload size is 5GB per part, with 10000 maximum parts. So a multipart upload is possible. Do users upload more than 5 TB?
@@mxpf26 I'm saying with presigned urls, you can't limit the content-length when using r2, or at least I haven't found a way to do that. This means any malicious user could spam upload your bucket with tons of data which you will end up paying for.
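For contrast, the mechanism being discussed does exist on S3: a presigned POST accepts a policy with a `content-length-range` condition that the service enforces at upload time (in practice via `createPresignedPost` from `@aws-sdk/s3-presigned-post`). The sketch below only builds the policy document to show its shape; whether your provider honors such a condition is exactly the R2 caveat above, and the bucket/key values are placeholders:

```typescript
// Sketch of an S3 POST policy document with a size cap. With S3 you'd pass
// these conditions to createPresignedPost; the service then rejects any
// upload whose body falls outside the declared byte range.
export function buildPostPolicy(
  bucket: string,
  key: string,
  maxBytes: number,
  expirationIso: string // e.g. an ISO timestamp ~15 minutes from now
) {
  return {
    expiration: expirationIso,
    conditions: [
      { bucket },
      ["eq", "$key", key],
      ["content-length-range", 0, maxBytes], // enforced server-side on S3
    ],
  };
}
```

This policy (base64-encoded and signed) is what the browser posts along with the file, which is why the size limit can't be bypassed by a malicious client.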
Meme type 😂 love it