Ask HN: Our AWS account got compromised after their outage

kinj28 | 392 points

I would normally say "that must be a coincidence," but I had a client account compromised as well, and it was very strange:

The client was a small org, and two very old IAM users suddenly had recent (yesterday) console logins and password changes.

I'm investigating the extent of the compromise, but so far it seems all they did was open a ticket to turn on SES production access and increase the daily email limit to 50k.

These were basically dormant IAM users from more than 5 years ago, and it's certainly odd timing that they'd suddenly pop on this particular day.
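
For anyone wanting to run the same check against their own account, here's a rough boto3 sketch (assuming the caller has iam:GenerateCredentialReport; nothing here is specific to our setup) that pulls the IAM credential report and shows recent password activity on supposedly dormant users:

    import csv
    import io
    import time

    import boto3

    # Rough sketch: pull the IAM credential report and print when each user's
    # console password was last used and last changed.
    iam = boto3.client("iam")

    # The report is generated asynchronously; poll until it's ready.
    while iam.generate_credential_report()["State"] != "COMPLETE":
        time.sleep(2)

    report = iam.get_credential_report()["Content"].decode("utf-8")
    for row in csv.DictReader(io.StringIO(report)):
        print(row["user"], row["password_last_used"], row["password_last_changed"])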

timdev2 | 3 days ago

Is it possible that people who already managed to get access (and confirmed it) have been waiting for any hiccup in AWS infrastructure so they could hide among the chaos when it happens? Maybe the access token was exposed weeks or months ago, but instead of acting on it right away, they sat idle until something big was going on.

Certainly feels like a strategy I'd explore if I were on that side of the aisle.

CaptainOfCoit | 3 days ago

A couple of folks on Reddit said that while they were refreshing during the outage, they were briefly logged in as an entirely different user.

sousastep | 3 days ago

CloudTrail events should be able to demonstrate WHAT created the EC2 instances. Off the top of my head, I think it's the RunInstances event.
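
For example, a minimal boto3 sketch (region and time window are assumptions) that lists recent RunInstances calls along with the identity and access key that made them:

    from datetime import datetime, timedelta, timezone

    import boto3

    # Minimal sketch: who/what called RunInstances in the last 24 hours.
    cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

    resp = cloudtrail.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "EventName", "AttributeValue": "RunInstances"}
        ],
        StartTime=datetime.now(timezone.utc) - timedelta(days=1),
        EndTime=datetime.now(timezone.utc),
    )
    for event in resp["Events"]:
        print(event["EventTime"], event.get("Username"), event.get("AccessKeyId"))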

ThreatSystems | 3 days ago

If I were an attacker, I would choose when to attack, and a major disruption that leaves your logging in chaos seems like it could be a good time. Is it possible you had been compromised for a while and they took that moment to take advantage of it? Or, similarly, that they took that moment to use your resources for a different attack that was spurred by the outage?

jmward01 | 2 days ago

Weird, can you send me your API key so I can verify it's not in the list of compromised credentials?

defraudbah | 2 days ago

Highly likely to be a coincidence. Typically it's an exposed access key. An exposed password for non-MFA-protected console access happens, but it's less common.
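
If you suspect a specific key, here's a quick boto3 sketch (the user name is a placeholder) that shows when and where each of a user's access keys was last used, with deactivation left commented out:

    import boto3

    # Sketch: list a user's access keys and when/where each was last used.
    iam = boto3.client("iam")

    for key in iam.list_access_keys(UserName="example-user")["AccessKeyMetadata"]:
        last_used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
        print(key["AccessKeyId"], key["Status"], last_used["AccessKeyLastUsed"])
        # If it looks compromised, deactivate it:
        # iam.update_access_key(UserName="example-user",
        #                       AccessKeyId=key["AccessKeyId"], Status="Inactive")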

yfiapo | 3 days ago

It's not uncommon for machines to get exposed during troubleshooting. Just look at the CrowdStrike incident the other year: people enabled RDP on a lot of machines to "implement the fix," and now many of those machines are more vulnerable than if they had never installed that garbage security software in the first place.

AtNightWeCode | 3 days ago

Times of panic are when people are most vulnerable to phishing attacks.

Do a total password reset and tell your AWS representative. They usually let it slide on good faith.

didip | 2 days ago

us-east-1 is unimaginably large. The last public info I saw said it had 159 datacenters. I wouldn't be surprised if many millions of accounts are primarily located there.

While this could possibly be related to the downtime, I think this is probably an unfortunate case of coincidence.

kondro | 2 days ago

I can't imagine it's related. If it is related, hello Bloomberg News or whoever else will be reading this thread, because that would be a catastrophic breach of customer trust that would likely never fully return.

itsnowandnever | 3 days ago

If I were a burglar holding a stolen key to a house, waiting to pick a good day, a city-wide blackout would probably feel like a good day.

geor9e | 3 days ago

The AWS issue was related to DNS entries, and IAM doesn't use DynamoDB. It wasn't related, other than that an outage gives a good way to obfuscate TTPs.

whoknew1122 | 19 hours ago

Lots of keys and passwords were panic-entered on insecure laptops yesterday.

Do not discount the possibility of regular malware.

brador | 3 days ago

Our Alexa had a random person "drop in" yesterday. We could hear a child talking on the other end but had no idea who it was. It may just be a coincidence, but it's never happened before, so it's easy to imagine it might be related to the AWS issues.

WesleyJohnson | 2 days ago

Any chance you did something crazy while troubleshooting the downtime (before you knew it was an AWS issue)? I've had to deal with a similar situation, and in my case I was lazy and pushed a key to a public repo. (Not saying you did; just saying that in my case it was a leaked API key.)
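
If anyone wants a quick way to check for that, here's a rough sketch that greps a repo's history for strings shaped like AWS access key IDs (dedicated scanners such as trufflehog or git-secrets do this far more thoroughly; this just shows the idea):

    import re
    import subprocess

    # Quick-and-dirty sketch: scan the full git history of the current repo
    # for strings shaped like AWS access key IDs (AKIA + 16 characters).
    log = subprocess.run(
        ["git", "log", "-p", "--all"],
        capture_output=True, text=True, errors="replace", check=True,
    ).stdout

    for match in sorted(set(re.findall(r"AKIA[0-9A-Z]{16}", log))):
        print("possible leaked access key ID:", match)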

bdcravens | 3 days ago

How can you not know what credentials were used? A simple CloudTrail search on the affected infrastructure will tell you.
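
For instance, a minimal boto3 sketch (instance ID and region are placeholders) that asks CloudTrail which identity and access key touched a specific rogue resource:

    import boto3

    # Sketch: look up CloudTrail events for one affected resource.
    cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

    resp = cloudtrail.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "ResourceName", "AttributeValue": "i-0123456789abcdef0"}
        ]
    )
    for event in resp["Events"]:
        print(event["EventTime"], event["EventName"],
              event.get("Username"), event.get("AccessKeyId"))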

more_corn | 15 hours ago

It makes me very uncomfortable to know that my credit card is on file with GCP, AWS, and Oracle Cloud, and that I have access to three corporate AWS accounts with bills in the tens of millions per month.

Why don't cloud providers offer IP restrictions?

I can only access GitHub from my corporate account if I'm on the VPN, and it should be like that for every one of these services that has the capability to destroy lives.
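
For AWS specifically, an IAM policy with an aws:SourceIp condition can approximate that kind of restriction; a minimal sketch, with the CIDR, user name, and policy name as placeholders:

    import json

    import boto3

    # Minimal sketch: deny everything for this IAM user unless the request
    # comes from the corporate VPN range. Note that a blanket SourceIp deny
    # can also block calls that AWS services make on the user's behalf.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}},
        }],
    }

    boto3.client("iam").put_user_policy(
        UserName="example-user",
        PolicyName="deny-outside-vpn",
        PolicyDocument=json.dumps(policy),
    )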

Traubenfuchs | 2 days ago

Sounds like a coincidence to me

klysm | 3 days ago

Considering AWS's position as the No. 1 cloud provider worldwide, their operational standards are extremely high. Even if something like this happened right after an outage, coincidence is a more plausible explanation than incompetence.

mr_windfrog | 2 days ago