keyWatcher scans for exposed AWS keys


AWS Trusted Advisor recently added a new check, ‘Exposed Access Keys’, in the Security category. It checks popular code repositories for access keys that have been exposed to the public and for irregular Amazon Elastic Compute Cloud (Amazon EC2) usage that could be the result of a compromised access key.

By default, Trusted Advisor runs its checks every 24 hours. For such a critical check, we probably want to run it more frequently, say every 30 minutes. Currently, Trusted Advisor does not support custom schedules. Per a conversation I had with AWS support, they are working on an event-triggered notification feature for Trusted Advisor.

While waiting for that feature to become available, I have added this capability to AWS keyWatcher v0.3 and created a cron job to run it every 30 minutes.
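
Here is a rough sketch of that setup, assuming keyWatcher is installed as a command at /usr/local/bin/keywatcher (a placeholder path, not necessarily the real v0.3 interface) and CHECK_ID holds the id of the ‘Exposed Access Keys’ check:

# crontab entry: run the scan every 30 minutes
*/30 * * * * /usr/local/bin/keywatcher >> /var/log/keywatcher.log 2>&1

# the Trusted Advisor side of it can be driven with the Support API like this:
# refresh the check, then read the result
# (the Support API only works in us-east-1, as covered in the next post)
$ aws --region us-east-1 support refresh-trusted-advisor-check --check-id "$CHECK_ID"
$ aws --region us-east-1 support describe-trusted-advisor-check-result --check-id "$CHECK_ID"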

Here is how it works:

Screen Shot 2016-09-20 at 2.27.31 PM.png

 

‘aws support describe-trusted-advisor-checks’ is us-east-1 only?


Just found this out: you have to hard-code ‘--region us-east-1’ when running the aws support Trusted Advisor commands. I guess this has the same cause that I explained in my previous post, AWS IAM Dependency.

Here is my conclusion: whenever AWS says a service is global and does not require a region selection, it generally means the service is most likely hosted in Northern Virginia (the us-east-1 region).

$ aws support describe-trusted-advisor-checks --language en 

You must specify a region. You can also configure your region by running "aws configure".

$ aws --region ap-southeast-2 support describe-trusted-advisor-checks --language en

Could not connect to the endpoint URL: "https://support.ap-southeast-2.amazonaws.com/"

$ aws --region us-east-1 support describe-trusted-advisor-checks --language en

{
    "checks": [
        {
            "category": "cost_optimizing", 
            "description": "Checks the Amazon Elastic Compute Cloud (Amazon EC2) instances that were running at any time during the last 14 days and alerts you if the daily CPU utilization ...

CloudTrail bug


I found this bug in CloudTrail while working on the AWS keyWatcher project: I noticed that some CloudTrail log events do not have the accessKeyId field. I opened a ticket with AWS support, and they forwarded it to the CloudTrail service team. Here is the response, which confirms it is a bug:

Briefly speaking, they've confirmed this being a bug. In fact, we do expect accessKeyId to be present in this case. We were also able to replicate the issue that you observed - called CreateBucket and GetBucketTagging from the console but did not find the accessKeyId field in the log events.

We apologize for any trouble or confusion that this might have caused to you. At this stage, we are not able to give an ETA of when exactly this bug will be fixed. But we are already investigating the issue with high priority.
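
If you want to see whether your own trails contain affected events, a quick sketch with jq over a downloaded CloudTrail log file (the file name is a placeholder) looks like this:

$ gunzip -c 123456789012_CloudTrail_ap-southeast-2_20160920T0000Z_example.json.gz \
    | jq -r '.Records[] | select(.userIdentity.accessKeyId == null) | [.eventTime, .eventSource, .eventName] | @tsv'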

AWS keyWatcher


We have seen, multiple times, users accidentally exposing their AWS access key and secret key on the Internet, e.g. on GitHub. This is really dangerous, as whoever gets that key can do whatever you can do to your AWS account. Here are two examples where the exposed key was used by someone unknown to create a large number of EC2 instances for Bitcoin mining:

Dev put AWS keys on Github. Then BAD THINGS happened
Ryan Hellyer’s AWS Nightmare: Leaked Access Keys Result in a $6,000 Bill Overnight

If it only costs you a fortune, then you are lucky! The worst case is that they can permanently remove everything you have built. Code Spaces closed its doors because of exactly this.

Check out Best Practices for Managing AWS Access Keys to secure your AWS keys. If, unfortunately, it has already happened, then follow the guide What to Do If You Inadvertently Expose an AWS Access Key as soon as possible.

Proactive prevention is necessary, and passive monitoring is also needed. CloudTrail keeps records of all AWS API calls, so it should always be enabled.
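
If it is not already on, here is a minimal sketch for enabling it across all regions (the trail and bucket names are placeholders, and the S3 bucket needs a CloudTrail bucket policy in place):

$ aws cloudtrail create-trail --name all-api-calls --s3-bucket-name my-cloudtrail-bucket --is-multi-region-trail
$ aws cloudtrail start-logging --name all-api-calls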

AWS keyWatcher is a tool that I wrote to monitor the AWS API calls logged by CloudTrail and score them against an established key profile to detect suspicious traffic. Check it out on my GitHub if you are interested.

AWS IAM Dependency


I did not know that there is a cross-region dependency in the AWS IAM service until the day IAM had an outage; I have never seen any mention of it in the AWS documentation.

On 23 August, Sydney time, I noticed that the IAM console was not fully functional when I tried to make a change to a role. The console showed errors when I tried to list the roles.

role.png

Also, one of my teammates was not able to log in; he either got a 500 error or timed out, but I could log in without problems.

Here is what the AWS status page was showing.

syd.png

us.png

It says the IAM service is operating normally in the Sydney region, but actually it was not. My guess was that it was most likely caused by the issue (Increased Error Rates and Latencies) that was happening in the Virginia region, and I later confirmed this with our AWS TAM. According to him, the Virginia region is where all IAM metadata is stored, therefore any changes have to be made in that region. That explains why I could not even list the roles while Virginia was having IAM service issues.

As for the login issue my teammate had, my guess is that he had not logged in for quite a while (a couple of months), so his login was not cached and his authentication request had to be sent to the backend where the metadata is stored. As I had logged in recently, my login info was cached. That is why I could log in, but he could not.

Akamai: add basic auth to incoming requests


In some cases, Akamai may need to add basic auth to an incoming request before sending it on to the origin. Here is how:

1) Encode the username and password in the format username:password. This can be done either with a bash one-liner or with an online tool.

echo -n username:password | base64
dXNlcm5hbWU6cGFzc3dvcmQ=

https://www.base64encode.org/

Screen Shot 2016-08-30 at 8.06.35 PM

2) Add a behavior in Akamai that modifies the incoming request headers to add the Authorization header; you may also need to remove the Authorization header from the outgoing response headers. You can sanity-check the header against the origin with the curl sketch below.

Screen Shot 2016-08-30 at 4.04.37 PM.png
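
To verify that the origin accepts the header Akamai will inject, a quick curl test works (origin.example.com and the encoded value are placeholders):

$ curl -i -H "Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=" https://origin.example.com/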

AWS API Gateway behind Nginx


If you happen to have an Nginx upstream pointing at AWS API Gateway, you may get this error: ‘SSL_do_handshake() failed (SSL: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure) while SSL handshaking to upstream’.

Here is the fix: add ‘proxy_ssl_server_name on;’ to your nginx.conf, so that Nginx sends the server name via SNI during the TLS handshake (the API Gateway endpoint needs it to present the right certificate). The directive is only available since version 1.7.0.
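
As a minimal sketch, assuming a hypothetical API Gateway endpoint, the relevant part of the config looks something like this:

location /api/ {
    # API Gateway requires SNI to serve the correct certificate,
    # so tell Nginx to pass the server name during the TLS handshake
    proxy_ssl_server_name on;
    proxy_pass https://abc123.execute-api.ap-southeast-2.amazonaws.com/prod/;
}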

Reference: proxy_ssl_server_name

Syntax: proxy_ssl_server_name on | off;
Default: proxy_ssl_server_name off;
Context: http, server, location

This directive appeared in version 1.7.0.

Enables or disables passing of the server name through TLS Server Name Indication extension (SNI, RFC 6066) when establishing a connection with the proxied HTTPS server.