A Tale of an S3 Bucket Misconfiguration By Nikhil Sahoo

 

Introduction

Back with a new blog! This time it won’t be an HTB machine writeup; instead, this post covers a methodology for testing a target client’s AWS S3 buckets for misconfigurations, along with a proof of concept for a vulnerability I found recently. So without further ado, let’s begin…

What Are Amazon S3 Buckets?

In simple words, buckets are storage containers provided by Amazon S3 (Amazon Simple Storage Service) in which registered customers can store their data/objects using the Amazon S3 API, the S3 console, or the AWS CLI. With the right permissions, a bucket is accessible globally.

Naming Conventions

Now, coming to the naming scheme: all bucket names are globally unique, and the same name cannot be reused by another account unless the original bucket is deleted by its owner. Buckets are also associated with regions; the user must specify the region in which he or she would like to create the bucket.

Accessing a Bucket

A bucket can be accessed in a browser by either of these URL formats:

1) http://bucket_name.s3-aws-region.amazonaws.com

2) http://bucket_name.s3.amazonaws.com

3) http://s3.aws-region.amazonaws.com/bucket_name

4) http://s3.amazonaws.com/bucket_name
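These formats also make it easy to check whether a given bucket exists at all. A minimal sketch using curl (the bucket name here is hypothetical):

curl -I https://some-target-bucket.s3.amazonaws.com/

# HTTP 200 -> bucket exists and its listing is publicly readable
# HTTP 403 -> bucket exists but access is denied to anonymous users
# HTTP 404 -> no bucket with this name exists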

Permissions

1) Specific users: The bucket owner can grant access to specific users by using their AWS ID (canonical user ID).

2) Access for authenticated users: Any user with a valid AWS account can access the bucket.

3) All users: All users, whether authenticated or not, can access the bucket.

Access Control List

READ
On a bucket, this allows the grantee to list the objects it contains; on an object, it allows the grantee to read the object’s data.

WRITE
This allows the grantee to create, upload, modify and delete objects in a bucket.

READ_ACP
This allows the grantee to read the access control list of the bucket or object.

WRITE_ACP
This allows the grantee to set/modify an ACL for a bucket or an object.

FULL_CONTROL
This allows the grantee all of the above permissions on a bucket: READ, WRITE, READ_ACP, and WRITE_ACP. (On an individual object, FULL_CONTROL covers READ, READ_ACP, and WRITE_ACP; WRITE does not apply, since objects cannot contain other objects.)

A grantee can be either a single user or a group. Mentioned below are a few predefined groups:

1) The AllUsers group represents all users, including non-AWS users, and is identified by the following URI: http://acs.amazonaws.com/groups/global/AllUsers

2) The AuthenticatedUsers group represents all AWS users and is identified by the following URI: http://acs.amazonaws.com/groups/global/AuthenticatedUsers

3) The LogDelivery group has permission to write server access log objects to the bucket and is identified by the following URI: http://acs.amazonaws.com/groups/s3/LogDelivery
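For reference, each of these ACL permissions maps to a grant flag in the AWS CLI. A sketch of how an owner would hand them out (the bucket name and canonical ID are placeholders; note that put-bucket-acl replaces the existing ACL rather than appending to it):

# Grant READ (object listing) to everyone via the AllUsers group
aws s3api put-bucket-acl --bucket bucket_name --grant-read uri=http://acs.amazonaws.com/groups/global/AllUsers

# Grant FULL_CONTROL to a single user identified by canonical ID
aws s3api put-bucket-acl --bucket bucket_name --grant-full-control id=<canonical-user-id>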

 

Finding the Bucket Name

So, enough of the introduction; now let’s see the different possible ways to enumerate bucket names:

1) There are a few tools available that can be used to brute-force bucket names.

2) Try using the domain name or some similar keywords to guess the bucket name.

3) There is an amazing free tool created by GrayHatWarfare that lists open S3 buckets and helps you search them for interesting files. It can be accessed at https://buckets.grayhatwarfare.com/

4) We can also check for any CNAME records using nslookup and try using the canonical name as the bucket name (see the sketch after this list).

5) Try breaking or abusing some web functionality to generate errors, for example by removing parameter values or supplying special characters as input. An error message may reveal the full S3 URL, bucket name included.

6) Try placing %C0 at the end of any object URL; the resulting error might reveal the bucket name. This trick comes from a tweet by Daniel Matviyiv (also sketched below).
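To illustrate points 4 and 6, a quick sketch (the subdomain and object path are hypothetical):

# 4) Look for a CNAME record pointing at an S3 endpoint
nslookup -type=CNAME assets.redacted.com
# e.g. assets.redacted.com  canonical name = bucket_name.s3.amazonaws.com

# 6) Append %C0 to an object URL to provoke an S3 error response
curl "https://redacted.com/static/logo.png%C0"
# the XML error body may disclose the bucket name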

For a more in-depth look at enumerating bucket names, refer to this blog:

https://medium.com/@localh0t/unveiling-amazon-s3-bucket-names-e1420ceaf4fa

 

Proof Of Concept

Now, moving on to the testing part: the website referred to below was not a bug bounty target but a private engagement, so I won’t be able to reveal its name. Let’s assume the URL to be https://redacted.com.

Before starting my testing on any application, I always run a quick dirb scan and an Intruder brute-force for common backup files, and this is how I came across a file named file_upload.php.

This brought a smile to my face, because it clearly showed that the developers had forgotten to remove the file before moving the application to production, so I thought it would be a good way to upload my shell.

But I soon realized that this functionality was broken and wouldn’t accept even valid files such as images, PDFs, or XLS documents. Every upload attempt resulted in an error, and along with that error the page displayed the full S3 URL, bucket name included.

Next, I visited the link in my browser and saw that directory listing was enabled, which showed that everyone had read access to the bucket’s contents.
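With listing enabled, visiting the bucket URL returns an XML index of the bucket’s contents, roughly of this shape (the key shown here is made up):

<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>bucket_name</Name>
  <Contents>
    <Key>uploads/report.pdf</Key>
    <Size>48213</Size>
  </Contents>
</ListBucketResult>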

 

Now I had to check whether write access was enabled so that I could upload my own files. To do this, I quickly installed the AWS CLI on my Kali machine and configured it.

To install the AWS CLI:

apt-get install awscli

Then we need to configure it by typing:

aws configure

It will ask for four parameters: access key ID, secret access key, region, and output format. We can get all of this information from our AWS dashboard:

1) Log in to your AWS dashboard first.

2) On the dashboard, find the section named “Delete your root access keys” and click on “Manage Security Credentials”.

3) Then, under the access keys tab, you can create a new access key, which gives you both the access key ID and the secret access key.

4) For the region and output format, you can accept the defaults offered by the command itself.
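The configuration session looks roughly like this (the key values below are placeholders):

$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]: json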

After the configuration was done, I quickly typed the command below to view the bucket’s ACL:

aws s3api get-bucket-acl --bucket bucket_name
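The output looked roughly like this (a representative sketch; the owner details are redacted placeholders):

{
    "Owner": {
        "DisplayName": "bucket-owner",
        "ID": "<canonical-user-id>"
    },
    "Grants": [
        {
            "Grantee": {
                "Type": "Group",
                "URI": "http://acs.amazonaws.com/groups/global/AllUsers"
            },
            "Permission": "FULL_CONTROL"
        }
    ]
}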

As you can see from the output above, the AllUsers group was assigned FULL_CONTROL on the bucket, which was really strange.

So, without wasting any more time, I tried uploading a text file using the command:

aws s3 cp random.txt s3://bucket_name

We can list all the files in the bucket using the ls command:

aws s3 ls s3://bucket_name

OK, so the file was uploaded successfully. Next, I tried deleting it. For deleting a file, we can use the same rm command that is used on Linux systems:

aws s3 rm s3://bucket_name/random.txt

The text file was deleted successfully.
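Put together, the whole write-access check looks like this (the bucket and file names are placeholders, and the timestamp is illustrative):

$ aws s3 cp random.txt s3://bucket_name
upload: ./random.txt to s3://bucket_name/random.txt
$ aws s3 ls s3://bucket_name
2020-01-15 10:15:32         12 random.txt
$ aws s3 rm s3://bucket_name/random.txt
delete: s3://bucket_name/random.txt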

For the final PoC, I made a quick, good-looking HTML page, uploaded it to the bucket, and granted it public read permission.
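One way to do this in a single step (poc.html is a placeholder name) is the --acl flag of aws s3 cp, which grants everyone read access to that one object:

aws s3 cp poc.html s3://bucket_name --acl public-read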

Burp Plugin

A Burp Suite plugin named AWS Extender, created by Virtue Security, can also be used to test for these misconfigurations. The configuration steps are pretty much the same: access key, secret key, etc.

Follow this link for more information:

https://github.com/VirtueSecurity/aws-extender

Impact

The potential impact was huge: the bucket contained a great deal of sensitive information about a conference due to be held a few months later, and any anonymous user had full control of the bucket, able to delete or modify any object in it. On top of that, the company has a huge customer base and handles data for around a million customers, so any mishap could have cost it dearly.

 

Mitigation

As you might have already guessed, the bucket’s ACL must be properly configured, and the AllUsers group should never be assigned FULL_CONTROL.
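As a concrete starting point, the bucket owner could reset the ACL and block public access outright. A sketch (the bucket name is a placeholder):

# Reset the bucket ACL so that only the owner has access
aws s3api put-bucket-acl --bucket bucket_name --acl private

# Block all public ACLs and bucket policies going forward
aws s3api put-public-access-block --bucket bucket_name --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true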

 


I feel that explains it all.

So that’s all for now. See you next time. Goodbye!

You can also have a look at my previous article, a walkthrough of the Hack The Box machine Networked.

Loved what you read?

If so, then kindly comment, follow, and share our website for much more interesting stuff!

For any queries, you can send a hi to me on LinkedIn.
