What else do you want to learn about AWS? Let me know below in the comments!
AWS Systems Manager pls!
Thank you. Is there a way to enforce tags during object creation and list out untagged existing objects?
I've added it to my list!
Hi Nahi! I'm not aware of an "easy" way to enforce tags upon object creation or to list out untagged objects. But here are the various APIs available: docs.aws.amazon.com/AmazonS3/latest/userguide/object-tagging.html. Using the PUT and GET calls, it seems like you could write some custom logic/scanning to do what you want.
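To illustrate the "custom logic/scanning" idea, here's a minimal, hypothetical boto3 sketch (the bucket name is a placeholder) that pages through a bucket and prints any objects that have no tags:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder bucket name

# Page through every object in the bucket and flag the ones with no tags.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        tags = s3.get_object_tagging(Bucket=bucket, Key=obj["Key"])["TagSet"]
        if not tags:
            print("Untagged object:", obj["Key"])
```

Note this makes one tagging call per object, so it's fine for small buckets but slow (and adds request costs) for very large ones.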
How to use an EC2 instance to modify an S3 bucket :O using a Python script.
You are the best. Thank you so much. Slow, thorough, and intuitive. The calming voice doesn't hurt either. Much appreciated.
I encountered this video because I was working on a personal project with some components in S3 and could not figure out for the life of me why I could not access a JSON file in an S3 bucket even though the permissions were open, so I figured I was losing my mind and needed to re-learn the fundamentals of S3, and watched this video.
It turns out that as you said at 8:10 this is because the object is not public. I CAN'T BELIEVE I OVERLOOKED THIS AND THAT NO AMOUNT OF GOOGLING OR CHATGPT TROUBLESHOOTING CONSIDERED THIS. I DIDN'T SEE THE OPEN BUTTON AT THE TOP OF S3 AND I FEEL SO FOOLISH FOR MISSING THIS.
Thank you SO MUCH for this tutorial you have no idea how much of a headache you cured me of.
Oh, YAY!!! I'm so glad it helped. S3 permissions can be complicated, and this particular point is not at all obvious, so it's not just you. 🤓 Glad you were able to figure it out, and thanks for posting in case it helps someone else! 🙏🌟💪
I wish AWS Academy explained this the way you do, amazing! Never die please
What a nice comment!!! I'm so glad it was helpful. Thanks for watching! 🙏🌟🤓
Today I found your channel and my view of AWS has improved tremendously. Thank you very much.
Oh, I'm so glad! Thanks for watching! 🤓🙏🌟
I have watched a couple of your videos and you're so amazing. Thanks very much for these free but useful tutorials.
You're very welcome! Thanks for the nice comment! :)
Thanks for the clear explanations of the topics you covered so far. Would you touch on Terraform and how it is used?
keep them coming. good tutorials
Awwww...thanks for watching, and for such a nice comment (and sorry for the slow response)! 🥰🔥
Fantastic video. It is helping my familiarity with the AWS platform. 👍
Yay! Glad it helped! :) Thanks for watching.
Finally found a channel for AWS beginners. I love how clearly and concisely you disseminate the information, though I bet it's a ton of work to make it this way. So kudos to you. My question is: can I use S3 to back up my EC2 Bitnami WordPress e-commerce website, instead of the more expensive AWS full backup of the entire instance? What would be the pros and cons of using S3 vs AWS Backup? Thanks
Thanks for the kind words, Software Solutions! 🤓🙏🌟 Much appreciated.
AWS Backup is a proper backup solution, so you can schedule backups, have policies around retention of old backups, do monitoring and alerting, reporting and so on (and all from a single interface). Some companies have regulatory/audit reasons they need something like this. But it can be a little overkill depending on what you have. If your Wordpress site is just a collection of files, then S3 would be an inexpensive alternative. Specifically, the "Infrequent Access" tier will be the most cost-effective, assuming you don't need to get to the files frequently: aws.amazon.com/s3/storage-classes/. But "regular" S3 would also work just fine (you'll have to set up S3 lifecycle rules to get things into the Infrequent Access tier). Hope that helps!
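For anyone curious what such a lifecycle rule looks like programmatically, here's a minimal, hypothetical boto3 sketch (the bucket name and the 30-day threshold are placeholder assumptions) that transitions objects to the Infrequent Access tier:

```python
import boto3

s3 = boto3.client("s3")

# Transition everything in the bucket to STANDARD_IA 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-wordpress-backups",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-infrequent-access",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # empty prefix = apply to all objects
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"}
                ],
            }
        ]
    },
)
```

If you'd rather not write code, the same kind of rule can be set up from the bucket's Management tab in the console.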
Thank you for the info. I tried AWS Backup and it worked great. Once I backed up, restoring it gave me a ton of choices, which I pushed to a new instance to verify everything worked as intended. So it's like staging, the way I understand it. Then I just terminated the other instance. I hope this is a cost-effective way of creating and restoring backups. The only thing was that I needed to change the domain name to point to the newly restored instance, so I was wondering if there is an automated way to do that? I tried the estimate calculator and the price seems too good to be true, but I guess time will tell. @@TinyTechnicalTutorials
I guess I should have asked...it doesn't sound like you're using WordPress on Amazon Lightsail? You have your own EC2 instance where you've installed files, database, etc. that you need to run WordPress?
@@TinyTechnicalTutorials Yes, I am using my own EC2 instance. I ended up using AWS Backup, which backs up the entire instance and creates restore points in a vault; actually pretty easy to do, just hopefully the price will be reasonable. I only plan to do manual backups when needed. Thanks for all the info. I was looking for a video on your channel about CloudWatch, but didn't find one :(
Cool...glad you found something that will work! To track costs, you can check out Cost Explorer: ruclips.net/video/xTIR5cvOfPc/видео.html. That'll give you a breakdown of actual costs, and you can see forecasted costs too. 😎
Good tutorial, easy to understand, helpful; especially the duration, just nice 👍.
I'm so glad you liked it! Thanks for watching! 😊
Awesome tutorial!!! U now have a fan! I love the way you simplify things and that's what I need as I'm just starting my AWS journey! Thank you and Cheers!!
Yay! I'm so glad you enjoyed it. Welcome to the channel!! :)
Hi, I'm starting as a Jr data engineer and your videos helped me a lot. I would like to learn about Redshift and maybe how to connect with PowerBi or Snowflake. I have been facing problems with IAM roles corresponding to crawlers so that's also something I would like to see in a video.
Hi Eduardo! 👋 I'm so glad the videos are helping! I'll add these topics to my list for future videos. Thanks for suggesting them! 😊
short and sweet as usual.
That's what I'm going for! 😊🙏
Thank you for your well-presented video.
Awwww...thanks for watching, and for such a nice comment (and sorry for the slow response)! 🥰🔥
Thank you so much, great technologist
Awww, thanks so much! :)
This is such great content. Please keep it up
Thanks so much for the nice comment, Jay! Glad you're enjoying it!
Thank you so much for good information.
You're very welcome! Thanks for watching! :)
Could you pls create a short video on the differences between block, file, and object storage? Thanks
Oooh, great suggestion! I've added it to my list for videos. In the meantime, let me try with a few bullet points below.
-Block storage: This type of storage has been around forever. You can loosely think of it as a hard drive, where data is split into "blocks" and stored (so a single document might be split into 100 blocks before it's stored). In AWS, this would be the Elastic Block Store (EBS), which again can be thought of as a hard drive for your EC2 instance. An EBS volume can only be used by a single EC2 instance (just like your laptop's hard drive can only be used by your laptop).
-Object storage: Here, data is NOT split into blocks, but is stored as a full "object" (which includes the data and its metadata). This one can be a little confusing because you're storing "files" (documents, images, videos, logs, etc.) in object storage. For example, in AWS, S3 is what you use for object storage. And you upload FILES to S3. But it's called OBJECT storage because of how things work behind the scenes (storing it as the full object). Note that there are no folders in S3 (even though it looks that way through the UI). The "folder" is just a prefix on the object's key (for example, "images/cat.jpg" would be a key); there's a small code sketch after these bullets showing this.
-File storage: Here, files are also stored as full files (not split), AND inside of a folder. You can think of this as a filing system (think physical filing cabinets, where you place documents in folders). In AWS, file storage is done with Elastic File System (EFS). EFS can be used by (i.e., data can be read and written) multiple EC2 instances at the same time. It can also be used by the Elastic Container Service and Lambda functions. This is different than block storage, which can only be used by a single EC2 instance at a time.
Hope that helps? I know they all sound kind of similar! For more detail, this article does a nice job of explaining: aws.plainenglish.io/block-storage-object-storage-file-storage-71438131abc4
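To make the "folders are just prefixes" point concrete, here's a small, hypothetical boto3 sketch (bucket and prefix names are placeholders) that lists everything "inside" an images/ folder; S3 is really just filtering keys that start with that prefix:

```python
import boto3

s3 = boto3.client("s3")

# There is no real "images" folder; this simply matches keys beginning with "images/".
response = s3.list_objects_v2(
    Bucket="my-example-bucket",  # placeholder bucket name
    Prefix="images/",
)

for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])  # e.g. images/cat.jpg 48213
```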
@@TinyTechnicalTutorials Thanks. It was indeed very helpful. So in a nutshell, the way they differ is how they store information on the disk, each with their respective pros and cons.
Yep, you got it! :)
Very nice tutorial.
Thanks so much!! 😊
Thank you for this video!
Can you share how can we transfer files from a hosting company to amazon S3 and explain the process? We would love to see that :D
Hi Mehdi! :) Glad you liked the video! Just to make sure I understand your question...are you talking about a one-time transfer to S3? Or doing it on an ongoing basis (and writing code to do it)?
@@TinyTechnicalTutorials I'll explain the issue: we have a manga website with a large number of files, and the hosting our website is currently on can't provide us with the space needed. So I searched around for storage options and found Amazon S3.
Now, when we post manga on the website, its files are added to the hosting; we want to transfer these files to, or store them on, Amazon S3. We have no idea how to start; Amazon has some complicated resources if we want to read their articles, not easy at all. I would love to hear your perspective on this situation.
Thank you in advance :D
Hey Mehdi - I haven't needed to do this myself, but since you're needing to access S3 from an external site, the S3 REST API is probably the way to go (versus the SDK). This WILL require you to write some code to compute the authentication signature for the request.
I'm not finding a ton of great examples out there, but maybe these will help?
-The best walk-through I could find that incorporates the IAM and API Gateway bits: hevodata.com/learn/amazon-s3-rest-api-integration/
-Another example, simpler than above: bluegrid.io/using-rest-api-to-upload-files-to-the-s3-bucket/
-Java code/example for uploading a file: riptutorial.com/java/example/31783/upload-file-to-s3-bucket
-The S3 REST API documentation: docs.aws.amazon.com/AmazonS3/latest/userguide/RESTAPI.html
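And to give a feel for what "computing the authentication signature" involves, here's a rough, hypothetical Python sketch of a Signature Version 4-signed PUT (the bucket, region, key, local file, and credentials are all placeholders; treat it as a starting point rather than production code):

```python
import datetime
import hashlib
import hmac

import requests  # assumes the 'requests' package is installed

access_key = "AKIA_EXAMPLE"          # placeholder credentials
secret_key = "EXAMPLE_SECRET_KEY"    # placeholder credentials
region = "us-east-1"                 # placeholder region
bucket = "my-manga-bucket"           # placeholder bucket
key = "chapter-01/page-001.jpg"      # placeholder object key

host = f"{bucket}.s3.{region}.amazonaws.com"
endpoint = f"https://{host}/{key}"

with open("page-001.jpg", "rb") as f:  # placeholder local file
    payload = f.read()
payload_hash = hashlib.sha256(payload).hexdigest()

now = datetime.datetime.now(datetime.timezone.utc)
amz_date = now.strftime("%Y%m%dT%H%M%SZ")
date_stamp = now.strftime("%Y%m%d")

# Step 1: build the canonical request (method, URI, query string, headers, payload hash).
canonical_headers = (
    f"host:{host}\n"
    f"x-amz-content-sha256:{payload_hash}\n"
    f"x-amz-date:{amz_date}\n"
)
signed_headers = "host;x-amz-content-sha256;x-amz-date"
canonical_request = "\n".join(
    ["PUT", f"/{key}", "", canonical_headers, signed_headers, payload_hash]
)

# Step 2: build the string to sign.
scope = f"{date_stamp}/{region}/s3/aws4_request"
string_to_sign = "\n".join(
    ["AWS4-HMAC-SHA256", amz_date, scope,
     hashlib.sha256(canonical_request.encode()).hexdigest()]
)

# Step 3: derive the signing key and compute the signature.
def _hmac(key_bytes, msg):
    return hmac.new(key_bytes, msg.encode(), hashlib.sha256).digest()

k_date = _hmac(("AWS4" + secret_key).encode(), date_stamp)
k_region = _hmac(k_date, region)
k_service = _hmac(k_region, "s3")
k_signing = _hmac(k_service, "aws4_request")
signature = hmac.new(k_signing, string_to_sign.encode(), hashlib.sha256).hexdigest()

# Step 4: send the PUT with the Authorization header.
authorization = (
    f"AWS4-HMAC-SHA256 Credential={access_key}/{scope}, "
    f"SignedHeaders={signed_headers}, Signature={signature}"
)
headers = {
    "x-amz-date": amz_date,
    "x-amz-content-sha256": payload_hash,
    "Authorization": authorization,
}
response = requests.put(endpoint, data=payload, headers=headers)
print(response.status_code, response.text)
```

If you can run the SDK on your server instead, it handles all of this signing for you.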
@@TinyTechnicalTutorials OK, I'll check them out. Thank you for your time!
You bet! :)
This was an excellent tutorial for getting files and folders into S3. Nice work! I just wish you had extended the tutorial slightly and shown us how to share access to the files and folders with a user group. This would close out the loop and we would have our own functioning file storage repository for our team. Is there any chance you would be willing to do this? Greetings from Scotland!
Greetings, Tartan! :) Thanks for the nice comment--so glad you enjoyed the video! Yes, I can definitely look at doing a video about sharing on S3. To make sure I'm addressing your scenario correctly, do (or will) the folks on your team have IAM accounts? Or you're needing to share outside of your organization?
@@TinyTechnicalTutorials Thank you so much for responding to my inquiry. Ideally, I don't want my team to have IAM accounts. Here is my Christmas wish... A webpage to act as a front-end portal to an S3 document repository (knowledge bank) with a simple username and password, where they can view, upload, and download files as required. I've been working on this all afternoon and have finally got one user connected to an S3 bucket using IAM. But it's messy, and my users can surf around inside AWS services, which, although limited, is not what I was hoping for. This is why I would prefer no IAM accounts if this is at all feasible. I apologise for putting this challenge before you. It's just that I really enjoyed your tutorial and it worked perfectly until the end. Thank you sincerely!
@Tartan Rambo - Massive apologies...I just discovered this response has been sitting in my "held for review" bucket for 6 days! Gah! Not sure if you still need some help with this, but check out the two options in this article: dev.to/idrisrampurawala/share-your-aws-s3-private-content-with-others-without-making-it-public-4k59. The presigned URL should get you around the IAM issue. I haven't personally tried the CloudFront option, but it might be another good solution for what you're doing. Let me know what you end up doing!
Hey @Tartan Rambo - Just published a video about sharing S3 files using presigned URLs. Hope it helps! ruclips.net/video/DVc9VRt-7IQ/видео.html
Excellent video
Can you please upload a video regarding AWS SDK and how to use Java code there
Thanks so much, Protons! :) I've added this to my list for future videos, but in the meantime (if you haven't already found it), this should hopefully get you started: aws.amazon.com/sdk-for-java/
@@TinyTechnicalTutorials
Waiting 😃
Question, is it true you can have an entire site running in an S3 bucket? If so, is that bad practice, and why?
Hi Gil - Yes! You can host a static website from an S3 bucket. If you have content that doesn't change much (maybe a personal portfolio/resume, for example), this can be an inexpensive way to go. However, as far as good/bad, it kind of depends on your use case. If you need support for HTTPS/SSL, you won't be able to do that with S3 alone (you'll need to integrate with AWS's content delivery network, CloudFront; this will also give you caching, so you'll get better performance). You "pay as you go," so if you get TONS of traffic to the site, it could get expensive (caching with CloudFront can help with this). So lots of things to think about! :) Here's some more info to get you started: docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
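As a rough illustration of turning on static hosting programmatically, here's a minimal, hypothetical boto3 sketch (the bucket name and document names are placeholders); the bucket would still need public read access, or a CloudFront distribution in front of it, before anyone can see the site:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-static-site-bucket"  # placeholder bucket name

# Enable static website hosting on the bucket.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload the home page with the right content type so browsers render it.
s3.upload_file(
    "index.html", bucket, "index.html",
    ExtraArgs={"ContentType": "text/html"},
)
```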
Hello, thanks for a clear demonstration. However, I have one question about S3: how do you set up access permissions so that it's possible to move data from S3 to Redshift?
Hi Mwanthi! Thanks for watching! 😊 Generally speaking, the way to grant access from one service to another is through an IAM role. Here's a walk-through specific to Redshift and S3 that might help: www.dataliftoff.com/iam-roles-for-loading-data-from-s3-into-redshift/.
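To show where that IAM role actually gets used, here's a small, hypothetical sketch (the cluster, database, table, bucket, and role ARN are all placeholders) that runs a Redshift COPY from S3 via the Redshift Data API:

```python
import boto3

redshift_data = boto3.client("redshift-data")

# The COPY command names the IAM role that grants Redshift read access to S3.
copy_sql = """
    COPY public.sales
    FROM 's3://my-example-bucket/sales/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftS3ReadRole'
    FORMAT AS CSV
    IGNOREHEADER 1;
"""

redshift_data.execute_statement(
    ClusterIdentifier="my-redshift-cluster",  # placeholder cluster
    Database="dev",                           # placeholder database
    DbUser="awsuser",                         # placeholder database user
    Sql=copy_sql,
)
```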
@@TinyTechnicalTutorials thanks a lot, I will read it
Can u make a course on how to clear the Cloud Practitioner certificate?
Hi Abhishek! 👋 I actually have a Cloud Practitioner course hosted on Zero to Mastery. Check out the link in the video description if you're interested (there's also a discount code in there). 🤓
Nice tutorial, thanks! PLEASE explain a way to share a FOLDER containing (public) videos, for example. All via a browser. The "other side" should be able to see the list of videos/files in that folder and open and/or download each one if needed.
Thanks so much, skegen! Glad you enjoyed it. :)
As far as sharing an entire S3 bucket, there's not really a way to share/display it as a "folder" (like a Dropbox or Google Drive or something). You'd need to open up the bucket for public read access, which will let people access any of the objects in the bucket using the object's URL (here's a good video for that: ruclips.net/video/s1Tu0yKmDKU/видео.html. It uses an older version of the Portal, but hopefully you can still follow). If you wanted to give them a view of everything in the bucket (like a list of files), you'd need to do that programmatically (using the SDK to get a list of all objects, then adding the links to a web page; there's a rough sketch of this below).
Hope that helps! It does seem like there should be a simpler way, but I've yet to find it!
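Here's a rough, hypothetical boto3 sketch of that "list the objects and build links" idea (the bucket name and region are placeholders, and it assumes the bucket allows public reads):

```python
import boto3

bucket = "my-example-bucket"  # placeholder bucket name
region = "us-east-1"          # placeholder region

s3 = boto3.client("s3", region_name=region)

# Collect a link for every object, paging through large buckets.
links = []
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        url = f"https://{bucket}.s3.{region}.amazonaws.com/{obj['Key']}"
        links.append(f'<li><a href="{url}">{obj["Key"]}</a></li>')

# Write a bare-bones HTML listing you could embed in a web page.
with open("listing.html", "w") as f:
    f.write("<ul>\n" + "\n".join(links) + "\n</ul>\n")
```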
@@TinyTechnicalTutorials Hi and thanks again for your kind and pro response. Really appreciate it. 1. I tried to do it with a public bucket (as in the link you provided), but the other side triggers a download operation with zero file size. 2. I'm now trying a probably better approach (since the target is a specific user): create a link/user/password to give to "the other side" so he/she can log on to S3, BUT see only that particular folder with the files, read-only, with NO way to go up in the tree structure and no way to delete or add more files. Just to view the content of the folder and be able to download a selected file. CAN YOU HELP ME WITH THIS?
Hi again skegen! Generally, if you want to block public access on the bucket (a best practice), but still allow someone to download files, you'll need to use a presigned URL. You may have already figured this out (and maybe that's what you're talking about with the "link/user/password" comment). The URL grants temporary access to the file. I have a short video for that if it's helpful: ruclips.net/video/DVc9VRt-7IQ/видео.html.
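For reference, generating one with the SDK is nearly a one-liner; here's a minimal, hypothetical boto3 sketch (bucket, key, and expiry are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Create a link that grants read access to one object for one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-example-bucket", "Key": "reports/latest.pdf"},  # placeholders
    ExpiresIn=3600,  # seconds
)
print(url)  # share this URL; it stops working after it expires
```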
@@TinyTechnicalTutorials THANKS!
Can you create one with the current, updated Amazon console?
Hi Melvin! Yes, there have been some UI updates since this video was made (it's a never-ending challenge! 😊). I'll add this one to my list to update. Thanks for watching!
@@TinyTechnicalTutorials do you have a tutorial on getting your files off AWS?
Hi again Melvin! 😊 Not really. Probably the closest I have is how to find which resources are being used and then delete them: ruclips.net/video/8BwDrzeHOks/видео.html. And here's the documentation from AWS on how to delete things from S3 specifically: docs.aws.amazon.com/AmazonS3/latest/userguide/delete-objects.html
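If it helps in the meantime, here's a rough, hypothetical boto3 sketch (the bucket name and local folder are placeholders) that downloads everything locally and then deletes the objects; be careful, the deletes are permanent unless versioning is enabled:

```python
import os

import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("my-example-bucket")  # placeholder bucket name

# Download every object into a local "backup" folder, preserving keys as paths.
for obj in bucket.objects.all():
    if obj.key.endswith("/"):
        continue  # skip zero-byte "folder" placeholder objects
    local_path = os.path.join("backup", obj.key)
    os.makedirs(os.path.dirname(local_path), exist_ok=True)
    bucket.download_file(obj.key, local_path)

# Once everything is safely downloaded, remove the objects (permanent!).
bucket.objects.all().delete()
```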
Much better than Neil’s content
Wow!!! I'll take that as the highest compliment!! THANK YOU! 🙏🥰🌟
@@TinyTechnicalTutorials I wish you had everything covered that I needed. Your voice is relaxing and clear😅. Great work tho
Good video! Does Amazon replicate/back up S3 buckets automatically to provide high durability in case there's an Availability Zone or regional failure and we have to activate a DR site?
Hi moe! 😊 You can set up cross-region replication for S3, which will create a copy of your data in a second region. But it doesn't happen by default (since costs will obviously be higher). Here's more info if you need it: docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html. Thanks for watching!
@@TinyTechnicalTutorials Thanks for the reply. I knew about cross-region replication, but I wanted to confirm that we need to set this up as part of a DR plan for regional failure. Does this mean that in case of an AZ failure, AWS will recover the S3 bucket automatically and we don't have to worry about setting up our own S3 or other storage backups (EBS, EFS, FSx)?
Apologies...I read your question too quickly! 😊 By default, S3 replicates data across three AZs (with the exception of the One Zone-Infrequent Access storage class). So within one region, you've got good redundancy by default, and depending on your use case, that might be sufficient for DR. But if the entire region goes down, then you'd be in trouble (so would want to activate cross-region replication). Hope that helps! Here's a good blog post that talks more about DR and S3: aws.amazon.com/blogs/storage/architecting-for-high-availability-on-amazon-s3/
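If you do go the cross-region replication route, here's a rough, hypothetical boto3 sketch of the setup (the bucket names and role ARN are placeholders; both buckets need versioning enabled, and the role needs the replication permissions described in the docs):

```python
import boto3

s3 = boto3.client("s3")
source_bucket = "my-primary-bucket"            # placeholder source bucket
destination_arn = "arn:aws:s3:::my-dr-bucket"  # placeholder bucket in another region
role_arn = "arn:aws:iam::123456789012:role/MyS3ReplicationRole"  # placeholder role

# Versioning must be enabled on both buckets before replication will work.
s3.put_bucket_versioning(
    Bucket=source_bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Replicate every new object in the source bucket to the DR bucket.
s3.put_bucket_replication(
    Bucket=source_bucket,
    ReplicationConfiguration={
        "Role": role_arn,
        "Rules": [
            {
                "ID": "dr-copy",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {},  # empty filter = replicate all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": destination_arn},
            }
        ],
    },
)
```

Note that replication only applies to objects uploaded after the rule is in place (existing objects need S3 Batch Replication or a manual copy).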
really great content!
Thank you so much! 🙏
Good one, thank you
Glad you liked it! Thanks for watching!
Question, if I may? If you have an EC2 instance with, say, a web server and a MySQL system, would you store the SQL database in an S3 bucket or directly on the EC2 server? Is data stored on EC2 retained in the event of a reboot, etc.? You also mentioned Elastic File System. Thanks
Hi arch1e! Yes, it's possible to install MySQL on the same EC2 instance that you're using as your web server. (S3 is only for objects/files, so it wouldn't be appropriate for the database). The "hard drive" of an EC2 instance is called Elastic Block Store, and that's where your actual data would be stored. And yes, it survives reboots, just like a hard drive would on any other server. But for the database, you can also use the Relational Database Service (RDS), which will put your database on an entirely separate server that Amazon manages (so you don't have to install it, update it, patch it, etc.). I have a short video on RDS if it's helpful: ruclips.net/video/vp_uulb5phM/видео.html.
@@TinyTechnicalTutorials hi, thanks for the reply, that makes sense. Yep, I'd seen RDS as an option. I guess easier maintenance (performance not really an issue for the environments I'm considering for AWS) but at a slightly higher cost than putting it on an EC2 server that you'd already need for the web server. The sheer number of options is frightening! Appreciate your reply. Thanks
Sure thing! And yes, the number of options can be overwhelming sometimes! :)
How many GBs can I upload on Amazon S3?
Hi there! 😊 Here's some good info from the FAQs (aws.amazon.com/s3/faqs/): The total volume of data and number of objects you can store in Amazon S3 are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 TB. The largest object that can be uploaded in a single PUT is 5 GB. For objects larger than 100 MB, customers should consider using the multipart upload capability: docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html.
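For what the multipart piece looks like in practice, here's a small, hypothetical boto3 sketch (the file, bucket, and key are placeholders) where the SDK switches to multipart upload for anything over 100 MB:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Use multipart upload for files larger than 100 MB, with 4 parallel threads.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    max_concurrency=4,
)

s3.upload_file(
    "big-video.mp4",         # placeholder local file
    "my-example-bucket",     # placeholder bucket name
    "videos/big-video.mp4",  # placeholder object key
    Config=config,
)
```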
I'm new to S3 but not new to EC2. Thanks for this simplified tutorial. I have a question. How long will the uploaded files be retained in the folders?
Hi Trivacker-Zatie! Thanks for watching! :) There are several different storage classes available for S3, and you can transition data between them to save money. By default, things are uploaded to S3 Standard storage, and will stay there until you delete them. Here's more info if you're interested: docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html
Is it free?
Thanks!
Thank YOU for watching! 🙏🌟
What are the costs?
Hi youtube007! Pricing can vary quite a bit, depending on how much you're storing and in what "tier." But generally speaking, it's the cheapest storage option in AWS. Here's the official pricing page: aws.amazon.com/s3/pricing/
So good.
Thank you so much, Ahmad! :)
Great video, but you didn't show how to make the objects publicly accessible, why you don't use the object URL, and what that error was about.
Thanks for the feedback, Vaibhav! I'll add this topic to my list of videos to make...the various ways to access objects in S3. I appreciate the comment! :)
Hey @Vaibhav Kamble - Just published a video about sharing S3 files using presigned URLs. Hope it helps! ruclips.net/video/DVc9VRt-7IQ/видео.html
Hi, I'm Ariful Islam Leeton. I'm a software engineer and a member of international organizations in the telecommunications and investment sectors, public and private.
👋