Episode 2: Top Cloud Attacks - Tenacity’s Hackerman Bad Podcast
The threats seem legion in the cloud and cybersecurity space. What can you do to stop them before a risk turns into a headline? Check out the second episode of our Hackerman Bad podcast to learn more.
In the last episode of our Hackerman Bad podcast, we spoke with AJ Yawn from ByteChek about the growing Russian cyber threats that surfaced over the last few months. And in this episode, our Tenacity hosts, Steve and Jason, are joined by Senior Architect Aaron Lake to sound off on what those threats might actually look like. Here’s a recap of our conversation where we talk about the most popular attack vectors that bad actors might use to compromise your cloud environment.
Jason: Some may say that you haven’t lived until you’ve been the victim of some sort of data or identity theft breach in 2022, right?
Steve: True. And it could just be something really small, like one of your passwords being leaked, or a company you work with falling victim to a breach. But I think at this point, nearly everybody’s been a victim of something. Aaron, can you talk about data breaches and how they relate to lack of visibility and unauthorized access?
Aaron: Yeah, that’s an excellent starting place. Lack of visibility is absolutely the primary issue. You don’t know what you have out there, you don’t know what your stuff looks like, and you’ve probably got a bunch of different hands in the pot creating things. This is the cloud era that we live in. If you don’t know what’s out there, it’s going to be really difficult to fix anything. But you’ve got to start somewhere.
Jason: Even Tenacity is not a super large company, but things change in our environment a lot. Every single day things are added and things are changed… and we’re not a Fortune 1000 company. Just imagine the scale of that.
Steve: What would that have looked like before cloud environments? Take the hundreds of changes happening within our Tenacity environment… is that number pretty standard? What’s the before and after look like?
Aaron: It’s a lot different. Before, you just had a bunch of servers and now you have a bunch of services. In the old dinosaur era, you would have a network team and a security team, but that’s not the case anymore. Now, you’ve got developers and people in there creating things using automation. There are more artifacts and more things to get control and visibility over.
Jason: Yeah, I’m not even sure that the problem is any different now. It’s just that the cloud exposes the problem. With legacy IT, you had all of your stuff in your data center and you had a bunch of servers that were running VMware. You had a bunch of virtual machines running on them, you had some storage arrays, and then maybe you had one ingress and egress out of that facility. You could be really bad at securing each one of those internal machines and not be as exposed, because at the end of the day, things were controlled at one point of entry and exit. Now, with the cloud, almost every single one of those resources can be made public by itself. It’s just a click of a button. A developer can make something insecure without even knowing it in some cases.
Aaron: Exactly. You used to have one person or a network team that was your gatekeeper. And they would have never said, “Yeah, that’s a great idea. Let’s make our really important database public.” The buck would have stopped with them. But that’s not the case anymore. We’ve done installs in the past with Tenacity where the organization’s public database is just sitting out there and they didn’t have any idea about it. We’ve told them to go fix it right away.
Jason: Yeah, they didn’t know that their databases were public. In fact, they thought they were private. With every single customer that we’ve installed, we’ve found something out there where the user has said, “Oh my gosh, that should not have been public” or “That user shouldn’t have had that level of access.” The list goes on and on.
Steve: So, lack of visibility seems to be pretty overarching, right? It sounds like there are a lot more humans creating a lot more stuff than before. There may have been a process around it in the past, but now it’s “Boom - I need a new bucket. I need a new EC2 instance.”
Aaron: Right. And it doesn’t even have to be a whole lot of actors. It could be a single actor - a one-man shop. If you just don’t know which buttons not to click, or if you don’t have observability into your environment saying “that’s a bad thing you just did,” the number of people doesn’t matter. And it’s not just AWS that is like this… all cloud providers make it really easy to do bad things. They don’t protect you from yourself.
Steve: Why not? If something should be private or not public 99% of the time, why is the default setting public?
Aaron: The default isn’t always public, and I will say that they’re all getting better. It used to be really easy to create S3 buckets with public access. I’m sure we’ve all heard horror stories about that from five or six years ago. Now they give you warnings… but not always for every single service. They should do a better job of auditing, but ultimately the onus is really on you. That’s part of the agreement - they control the infrastructure, you control the configuration.
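To make the “the onus is on you” point concrete, here’s a minimal sketch of the kind of check a configuration audit performs. It scans a bucket policy document (mirroring AWS’s JSON policy format; the sample policy itself is hypothetical, not from a real account) for Allow statements granted to everyone:

```python
def public_statements(policy: dict) -> list:
    """Return Allow statements whose principal is the wildcard '*'."""
    flagged = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if stmt.get("Effect") == "Allow" and is_public:
            flagged.append(stmt)
    return flagged

# Hypothetical policy: one statement open to the world, one scoped to an account.
sample_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Principal": "*",
         "Action": "s3:GetObject", "Resource": "arn:aws:s3:::my-bucket/*"},
        {"Effect": "Allow",
         "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
         "Action": "s3:*", "Resource": "arn:aws:s3:::my-bucket"},
    ],
}

print(len(public_statements(sample_policy)))  # prints 1 - only the first statement is public
```

A real tool would pull the live policy for every bucket and run checks like this continuously, but the core test - “is this Allow statement granted to `*`?” - is exactly this simple.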
Jason: They also got a little bit better at it because of the exponential growth in use. It became such a bad problem because the use of these products and services grew exponentially. Five years ago, maybe I’m using 100 S3 buckets… but that’s more like 1,000 S3 buckets today. The pool of risk grew by 10 times for most organizations, so the providers had to get a little bit better. That being said, it’s still really difficult to figure out whether 1,000 things are secure or not, no matter how easy each one is to configure. At the end of the day, it is literally a widget. It’s a little toggle button that asks when you deploy something, “Should it be public or not?” It’s very easy for these things to happen this way.
Steve: What other things might accidentally be left public that shouldn’t be?
Aaron: Databases. Public IP addresses on services are generally a pretty bad thing to leave exposed. You want to throw that stuff behind a firewall or a WAF or something. Almost every service can be created public, and generally none of them should have that selected as the default. If you choose to make them public, you should have to fill out a questionnaire where it asks you, “Why are you making that decision? That’s a terrible idea.”
Steve: What steps should people be taking to ensure proper configuration and better visibility? Is it possible with exponential growth?
Aaron: You either have to do it manually or you can leverage a tool. One of those is going to be a lot more effective than the other, so I would say leverage some sort of tool like Tenacity. Otherwise it would be a full time job for someone to babysit, see when new things are created, and make sure that they’re matching your configuration settings for compliance. It’s impossible without a tool.
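To show what that “babysitting” job looks like under the hood, here’s a toy sketch of the two things a monitoring tool does on every pass: diff inventory snapshots to catch newly created resources, and flag anything whose configuration breaks a rule. The resource names and the `public` flag are illustrative, not a real cloud API:

```python
def audit(previous: dict, current: dict) -> dict:
    """Compare two inventory snapshots (resource id -> config)."""
    new_ids = set(current) - set(previous)          # resources created since last pass
    violations = [rid for rid, cfg in current.items() if cfg.get("public")]
    return {"new": sorted(new_ids), "public": sorted(violations)}

# Hypothetical snapshots, one day apart.
yesterday = {"db-1": {"public": False}}
today = {
    "db-1": {"public": False},
    "bucket-7": {"public": True},   # created overnight and left public
}

report = audit(yesterday, today)
print(report)  # {'new': ['bucket-7'], 'public': ['bucket-7']}
```

Doing this by hand means re-inventorying the whole environment every day and eyeballing every config - which is exactly why Aaron calls it a full-time job without tooling.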
Steve: Is the same thing true with too many user permissions? Or what if you’re a one-person shop and you have all the permissions?
Aaron: I would say that, even if you are a one-man shop, you still shouldn’t have all of the permissions. You should only give yourself the permissions that you need and lock that root account away somewhere else. Otherwise, it’s the same thing. Let’s say, for example, you’re onboarding a new developer and you don’t want to mess with giving them the right stuff, so you just give them access to everything. That’s incredibly common. But giving people least-privilege access is the way AWS recommends it.
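The difference between “give them everything” and least privilege is easiest to see side by side. This sketch contrasts the two and includes a simple check for wildcard grants; the policy documents mirror AWS’s JSON policy format but are hypothetical examples, not real account policies:

```python
def overly_broad(policy: dict) -> list:
    """Return Allow statements with wildcard actions or resources."""
    flagged = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        wildcard = ("*" in actions
                    or any(a.endswith(":*") for a in actions)
                    or "*" in resources)
        if stmt.get("Effect") == "Allow" and wildcard:
            flagged.append(stmt)
    return flagged

# "Just give them access to everything" - the common onboarding shortcut.
admin_everything = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}

# Least privilege: only the actions the job needs, on one hypothetical bucket.
scoped = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Action": ["s3:GetObject", "s3:PutObject"],
                   "Resource": "arn:aws:s3:::app-uploads/*"}],
}

print(len(overly_broad(admin_everything)), len(overly_broad(scoped)))  # prints: 1 0
```

The shortcut policy takes seconds to write, which is why it’s so common - and why a periodic wildcard check like this is worth automating.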
Jason: And with most of these examples, maliciousness is not intended. You just think, “Oh, I’ve got to get this working for them because things need to happen right away. So, let me just give them full access right now so they have it.” And then you get 100 more requests and you move onto those, then all of a sudden you’ve lost sight of the fact that a user has too much access.
Steve: Right. There’s no lack of things to do. You’ve got enough time to get 10 things done and 100 requests. So, what I heard is that at the end of the day, lack of visibility, unauthorized access, and not having a proper monitoring tool are the main culprits when it comes to bad actors compromising your environment.
Be sure to check out our next episode of Hackerman Bad where we talk about the internal threats coming from inside the house and one big, looming threat specifically that can be mitigated with processes that don't have anything to do with IT infrastructure. Subscribe on Audible, Amazon Music, Spotify, or Player FM so you don’t miss out.