Excellent video bro..
appreciated a lot...you're genius
you're most welcome :)
I'm getting this error after 12:54 in the video: s3fs: credentials file /home/ubuntu/.psswd_s3fs should not have others permissions.
Hello, can you please make sure you have the correct permissions on your password file? The password file can be used in either of two ways:
/etc/passwd-s3fs [0640]
$HOME/.passwd-s3fs [0600]
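A minimal sketch of setting those permissions on the per-user file (the ACCESS_KEY:SECRET_KEY value is a placeholder, not real credentials):

```shell
# Create the per-user password file and give it the 0600 permissions
# s3fs requires; use 0640 for the system-wide /etc/passwd-s3fs instead.
echo "ACCESS_KEY:SECRET_KEY" > "$HOME/.passwd-s3fs"
chmod 600 "$HOME/.passwd-s3fs"
stat -c "%a" "$HOME/.passwd-s3fs"
```

If the file is readable by group or others, s3fs refuses to use it with exactly that "should not have others permissions" error.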
Great tutorial. Easy to follow and understand. I found it very helpful
Thank you very much Arul for the appreciation!
Excellent work! Thank you for posting this
Thank you very much Zardoz for your valuable feedback :)
How do you do this if the EC2 instance is setup with an instance profile and we cannot create additional credentials such as iam user.
Hello Kiran, I don't think this would work in that case, but you can check out a newer AWS service that lets you use S3 as a file system.
aws.amazon.com/blogs/storage/the-inside-story-on-mountpoint-for-amazon-s3-a-high-performance-open-source-file-client/
Thank you for the video.
But I have an issue: I have set up SSE-KMS for my S3 bucket, and after mounting I'm unable to open files from my bucket on the EC2 instance.
Even though I have given KMS permissions to the IAM user, and I'm able to open these KMS-encrypted bucket files through that user in the AWS console, I'm unable to open files from the mounted bucket. How do I configure this?
Thanks in advance.
Can you please try the service below?
aws.amazon.com/blogs/aws/mountpoint-for-amazon-s3-generally-available-and-ready-for-production-workloads/
Thanks Piyush, It worked :)
P.S. Earlier I was using s3fs for mounting.
Good to know that it worked! s3fs has some limitations; I guess you were hitting one of those.
Fantastic video! Thank you so much!
You’re most welcome! I am glad it was helpful 😊
can we add s3 as a volume for ecs tasks?
Hey Iqbal, I wouldn't suggest doing that. S3FS already has a lot of limitations, and with containers it would not work very well.
@@TechTutorialswithPiyush Hi, how about EFS?
will it display all the contents of s3 on ec2 mount drive? I don't see it in your video.
Hi Manish, yes, it should. It will be easier for me to answer if you can tell me at exactly which timestamp you don't see the content of S3 on the drive. Nevertheless, if you scroll to the last few minutes of the video, you will see the content.
For the iam dash board mine is saying add mfa for root user what settings did you use to make this
Hello, yes, you can add MFA for the root user if you have access to the root account, or you can leave it as is; it is a recommendation, not a mandatory step.
I’m having so much issues to mount the bucket on my terminal is there anyway can you help me
Hello Md, I would be more than happy to help you out. Can you please confirm exactly which step you are stuck at? Is there any error you are facing right now?
@@TechTutorialswithPiyush when I try to make a directory inside s3fs, it doesn't create any directory inside S3; instead it makes the directory in the local file system, which is weird. I used this command “sudo mkdir -p /s3/directory name”, but weirdly it doesn't make any directory inside S3. I don't know what to do now
@@Md-xw6ni I am not sure you have followed the tutorial correctly. mkdir would create the directory on your filesystem and not in the S3 bucket; this is the expected behaviour. I suggest you follow the video from beginning to end and let me know if you still face any issues.
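One quick way to tell whether /s3 is really a mount point (rather than a plain local folder) is to check the mount table; a small sketch, assuming the /s3 path from the comment above:

```shell
# If /s3 does not appear in /proc/mounts, anything created under it
# lives on the local filesystem, not in the S3 bucket.
if grep -qs " /s3 " /proc/mounts; then
  echo "/s3 is a mount point"
else
  echo "/s3 is NOT a mount point"
fi
```

If it is not a mount point, the s3fs mount command either was not run or failed, which would explain directories ending up on the local disk.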
It is a very good solution for small files, but if you want to upload or work with 100GB or 200GB backups, I see that it is a slow process, in this case, what should be activated or what type of configuration should be followed?
Hello Sergio, I totally agree with you! This process has some limitations and shouldn't be used for huge files. In your case, you can use multipart upload and the AWS CLI sync command as a cronjob that syncs those files at regular intervals.
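For instance, a cron entry along these lines could handle it (bucket name and local path are hypothetical; aws s3 sync only copies changed files and performs multipart uploads for large files automatically):

```shell
# Hypothetical crontab entry (added via "crontab -e"):
# sync /data/backups to S3 every hour, reporting only errors.
0 * * * * aws s3 sync /data/backups s3://my-backup-bucket/backups --only-show-errors
```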
Thanks very much for the video,
I already have a Bucket loaded via Bitbucketpipeline. I want to know if the synchronisation is bidirectional.
What I mean is, if the bucket files are updated will he also sync with the ec2 reposatory ?
Great question! Yes, it works both ways. Make sure you perform enough testing in your test environment before promoting it to production.
Great tutorial! A question though - at the very end of your video, when you're configuring a new line in the fstab file using vi, how are you getting onto a new line? I tried moving my cursor to the end of the top line (keyboard: End) and hitting enter just the same as you but that doesn't work for me. The internet has 1,000 things to say about what the problem is and I've never used vi/vim before so I'm not sure what to do here
Thank you very much @Alec for the feedback!
When you open a file in vi, it opens in command (normal) mode, where keys act as commands rather than text. For example:
dd deletes the complete line
Shift + G moves the cursor to the last line
:linenumber followed by Enter jumps to a particular line, e.g. :1 goes to the first line
the Delete key or the x key deletes a character, etc.
To insert or update anything, you have to enter insert mode, which you do by pressing the i key; you will then see --INSERT-- at the bottom of your screen. Once you are in insert mode you can add or update anything, or use Backspace to delete, just like in a text editor.
The other way to enter insert mode is to press Shift + A, which switches to insert mode with the cursor at the end of the current line (this is what I used).
I hope this makes a few things clearer. Working in vi can be confusing at first, but if you practise enough and make a habit of shortcuts like the ones above, it is really easy and fun to work with.
Silly me :) Pressed "o" for insert mode, added the line, then saved the file with ":wq" - thanks for the thorough & easy-to-follow guide!
@@alecnicolaysen9972 you’re most welcome :)
how did you find region for the user
Hello, the default region is the region in which my EC2 server was provisioned. You can check the location of your EC2 server and use the same region in your s3fs command. Hope it helps!
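As a side note, if you are already logged in to the instance, the region is simply the availability zone name minus its trailing letter; a tiny sketch with a placeholder AZ value:

```shell
# us-east-1a -> us-east-1: the region is the AZ name minus its final letter.
# (On a real instance the AZ value could come from the instance metadata.)
AZ="us-east-1a"
REGION="${AZ%?}"
echo "$REGION"
```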
Fruitful information, sir, thanks
Thank you very much sir for the motivation! :)
Hi Piyush, you are doing a great job, but a quick question. Why didn't you use IAM role and used user here. Role is generally used to communicate b/w AWS services. I tried with a role instead of user and didn't need a workaround during mount. It worked perfectly.
Thank you. I recorded this video around 3 years back; at that time s3fs had some issues with IAM roles, hence I used the user as a workaround. Glad it worked for you!
Hi, this video is very helpful, but there is one issue at the end of the video: s3fs mounts successfully, but after a few seconds it automatically unmounts, so it doesn't sync automatically from the local folder. Can you please give me a solution?
Hi Devendra, I have used the same method multiple times and never faced this issue. Can you please also add the entry in /etc/fstab so that the mount stays persistent after a server reboot as well? And can you please try to unmount and mount again?
@@TechTutorialswithPiyush hi, I added it in /etc/fstab but it is not working after a reboot
Hello Devendra, can you please try to add any other entry using the same command and see if it works? We need to work out whether it is an issue with fstab or with the command itself.
Perfect video ...
Thank you very much brother :)
Hello Sir,
Super video! I was thinking of storing my Issabel Asterisk IVR recordings in an S3 bucket by mounting it directly this way. The only thing I am thinking about is size: every day I have 4-5GB of recording data, and it keeps increasing day by day. Moreover, we do daily playback of those recordings for quality assurance purposes. So what do you suggest in my case? Should I use this mount method or another solution? If I use the mount method, is there any chance it will auto-dismount or disappear in future? If so, it would be a big problem for us. I don't wish to use my Issabel EC2 instance itself to store the recording data, as it is SSD and the data is huge, so cost-wise it doesn't make sense.
Thanks for your suggestion, sir.
Hello,
Sorry for the delay in response!
S3FS has limitations and would not be an excellent choice for larger files.
One way is to attach EFS to the EC2 instance to store the recordings/files and then use DataSync to sync EFS to S3. This avoids reliance on instance store (non-persistent) / EBS (single-AZ risk) - once set up, DataSync is fairly straightforward and has very little to zero ops. The pricing was worth it ($0.0125 per gigabyte (GB) copied)
You can rotate the files on the EC2 instance and set the DataSync task not to delete them on the target - it can put the files straight into IA / Glacier, etc. (you can also set lifecycle policies to save cost) - feel free to check the docs for DataSync and EFS.
I hope I have answered your question.
Regards,
Piyush
Great work man, you really went through the steps thoroughly.
I can't seem to get the mount point to be persistent between reboots, though.
Any suggestions?
Thank you very much Selorm for your valuable feedback!
I guess I covered that part: you need to add an entry in fstab so that your mount remains persistent even after a reboot. Feel free to copy the command from the description section and let me know if you still face the issue.
@@TechTutorialswithPiyush So I've added the entry in fstab but the mount is still not persistent between reboots
Maybe something is wrong with the fstab entry
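For reference, a typical /etc/fstab line for s3fs looks something like this (bucket name, mount point, and password-file path are placeholders; _netdev tells the system to wait for the network before mounting at boot):

```shell
# Hypothetical /etc/fstab entry for a persistent s3fs mount:
mybucket /home/ubuntu/bucket fuse.s3fs _netdev,allow_other,passwd_file=/etc/passwd-s3fs 0 0
```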
Does it works for windows?
Yes, it works on Windows as well, but the instructions are different. Feel free to check out the GitHub repo of s3fs-fuse.
Hi anna
Thanks for video,
1.) I can only see my s3fs mount point with the root user
2.) From the ubuntu user I can't cd into the S3 bucket folder; only the root user has access to it
Kindly please help with this issue
Hello Aswin, thanks for your feedback! If you are not able to cd into or see your bucket folder, that means your user doesn't have the required permissions. Please grant the permissions below:
sudo chmod -R 777 bucketfolder (or use the root user) and then you should be good.
Also, make sure you are using the correct command from the description, where UID is the user id that you are granting access to.
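To find the numeric user id (and matching group id) to pass to that option, the id command works; a small sketch using the current user:

```shell
# Print the numeric uid and gid of the current user; these are the
# values to pass to s3fs's -o uid=... and -o gid=... mount options.
id -u
id -g
```

For example, a mount like "sudo s3fs mybucket /home/ubuntu/bucket -o allow_other -o uid=1000 -o gid=1000" (bucket and path hypothetical) gives ownership of the mounted files to the user with uid 1000, and allow_other is what makes the mount visible to users other than the one who mounted it.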
Thank you.
You're welcome buddy!
great man u r
Thank you so much brother for your wonderful feedback!
Great!
You're welcome 😊🙏
Great tutorial Piyush!
One question though, if I create an empty bucket and mount it on an empty folder on the EC2 instance, is it possible to use the bucket as storage without using EC2 instance storage resources? or does whatever you put into the "mounted" bucket count towards your EC2 instance storage?
Thanks in advance for any and all input!
-Orallo
Thank you very much Orallo for your kind words, I am glad it was helpful. If you mount your S3 bucket, you are charged as per S3 storage and data-transfer cost; it does not count towards your EC2 instance storage, just like you get charged as per EFS cost if you attach EFS to your instance. I hope that clears your doubt. Great question, by the way!
Happy learning!
Did not work for me!
Hi Indresh, can you please share more details about the issue you are facing? In which step you are stuck? Are you facing any errors?
next time please zoom 300%. thx
Thanks for your feedback, Magdi. Actually, this video is very old; in fact, I published it when I started my channel, and I have improved a lot since then 😁 If you watch any of my latest videos, you won't face this issue.
@@TechTutorialswithPiyush still very good and useful. Should I start with AWS or Azure for learning microservices? I need something with a free tier to improve myself
@@nakhla3 Thank you so much 🙏😊 You can go ahead with any of those, or even GCP. However, AWS already has a well-established market with a lot of existing expertise, so it can be easier to get a job with Azure or GCP; I would suggest going with either Azure or GCP.
s3fs: unable to access MOUNTPOINT /home/ubuntu/bucket: Permission denied
getting above error
Can you please paste the command you ran? Alternatively, you can join our Facebook community; it will be easier to triage the issue there
facebook.com/groups/1015771332531944