Ten Things I’ve Learned About Cloud Security

Tuesday, July 17, 2012

Bill Mathews


This is not a Top 10 list - it is a list of 10 things I’ve learned along the way.

Top 10 lists imply some sort of universal knowledge of the "top" things possible in a given field. Top 10 attractive women, top 10 guitar players, top 10 whatever - they all have one thing in common: they are the ten things the author thinks are the best.

I don’t really like to think I know everything, so this list is in no particular order. This particular list is on cloud security and, well, it is a big topic that interests me greatly and there is no way I can cover it all in a blog post. As a result I will be doing a presentation around this topic in a few places, including BSides Cleveland.

Anyway, cloud security is tough for a lot of reasons, not least of which is that you, like me, probably only understand the parts of the cloud you interface with - the controls the cloud provider allows you to see. That lack of visibility into the underlying management layer introduces many security challenges.

Having said that, let’s explore:

1) Control Panels

Control panels are simultaneously the best and worst aspect of a given cloud provider’s offerings.

They can enable you to do really great things or handicap you by not allowing enough fine-grained control. They can enhance the security of your slice of the cloud infrastructure and then cut it off at the knees, sometimes with both in the same feature.

If a control panel is very granular and allows for heavy customization, you can make spectacular infrastructure decisions - while just as easily forgetting to make some necessary security adjustments.

If the controls aren't granular enough, i.e. the provider made those decisions for you, then your abilities are limited. In general, control panels are a double-edged sword... and a balancing act... usually performed while juggling razor-sharp ninja stars. Not an easy job.

2) Uptime/Downtime

This is a problem, but not necessarily a problem specific to the cloud. It is a problem specific to computers. You will have downtime no matter where you host your services or what you do to prevent it. (Author's Note: I have spent a large portion of my company's overall budget to avoid downtime. It still happens; it's just mitigated better.)

Some will argue uptime is worse in the cloud than if you hosted it yourself, but depending on who you are this may or may not be true. It just depends on how much trouble you want to go through to deal with the uptime of critical assets - or rather how much you want to spend to achieve a good uptime ratio.

In the public cloud, the cost is spread around so it is naturally a bit cheaper. If you are doing it yourself then you are footing the entire cost. Simple equation really: how much downtime can you afford? Be careful here: the cloud is not always cheaper than doing it yourself - check out the Cloud Is Cheap section.

Side note: While I was editing this post and getting its accompanying presentation ready, Amazon Web Services had their big storm-related outage, and one of our apps was in the wrong zone at the wrong time, bringing it down for about 30 hours total. Luckily, it was a weekend, so no one was using it.

But still, there is no greater feeling of helplessness than when your service is down and completely out of your control. I'm like this whenever my phone or data center provider has problems too, so I've gotten used to it. A bottle of Pepto and a lot of patience are required for any sort of cloud endeavor.

3) Access Control

There is a "myth" that you have no concept of access control in the cloud. In most cases, at least with the reputable providers, you do have a decent ACL system. In Amazon you can set up roles and assign folks to groups, not half bad.

The problem comes in when you actually MEAN access control. With very few exceptions you are running on shared resources in the cloud, not dedicated equipment. If you were under the impression it wasn't shared, perhaps we need to revisit the definitions of cloud computing (see cheatsheet). In theory, this sharing could cause some problems.

All cloud providers use some sort of virtualization - which vendor or technology is largely irrelevant - and there is at least some risk of someone breaking out of the virtualized jail and seeing your data or performing some other malicious activity. This is an important risk, and one you can at least mitigate with encryption both in transit and at rest.

Honestly though, you should be doing this in any virtualized environment, it just makes for very good practice. Dare I say, it should be a best practice.

4) API (Good and Evil)

I have a love/hate relationship with APIs (Application Programming Interfaces). I love them because the good ones make so many things easy to do. I hate them because they can change without notice (depending on the provider) and they give providers yet another avenue for charging "micro payments". Micro payments sound good in theory, but they do add up.

Amazon, for instance, wants you to send email through their messaging API and charges you per message. I haven't paid per message to send email since... well, never. They claim it increases reliability and makes it better than sending directly from your EC2 instance. I find that claim a little suspect, but it's their jail and their rules. And if you buy the theory that the cloud is a jail for your apps, then APIs are the bars. They can really lock you into a provider. I despise vendor lock-in almost more than anything.
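That per-message email API (SES) looks roughly like the sketch below with boto3; the addresses are hypothetical, and a real call needs credentials and a verified sender. Note how much of the code is Amazon-specific structure - exactly the lock-in problem:

```python
# Sketch of sending mail through Amazon's messaging API (SES).
# Every send_email call like this is billed per message.


def build_message(subject, body):
    """SES wants its own nested message structure rather than raw SMTP -
    a provider-specific shape that ties your code to their API."""
    return {
        "Subject": {"Data": subject},
        "Body": {"Text": {"Data": body}},
    }


def send_alert(ses, subject, body):
    """`ses` is a boto3 SES client; requires AWS credentials to run."""
    ses.send_email(
        Source="alerts@example.com",        # must be an SES-verified sender
        Destination={"ToAddresses": ["ops@example.com"]},
        Message=build_message(subject, body),
    )
```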

There are cloud abstraction layers (such as Delta Cloud), but honestly I've never used them, and really they just add another layer of complexity. Deploying your cloud app is not like dating; it's more akin to marriage, and divorcing it is hard, so remember to do your homework.

Of course there is also the whole security angle of APIs that you have to consider. Is the transport encrypted? Is the data reliable and untainted? Are you sure you are pulling the correct data? These considerations cannot be overlooked, even in a cloud environment where you are encouraged to “trust the system.” Buyer should always beware.

5) Firewalls Are Dead... Well Sorta

Real firewalls in the cloud are a great idea; most reputable providers at least have basic packet filtering available. But wouldn't it be great to have a full-on firewall up there protecting your data? It is possible! Check Point, Cisco, and probably many others have full firewall instances (some with IPS) available for you to deploy.

I think it's a good idea and all, but I struggle to see many people actually using it. I mean, people hate firewalls as it is for some strange reason (I blame willful ignorance). And now not only do you have to pay for the firewall license, but you will also have to pay for the CPU time to actually run it.

Obviously we're talking about a public cloud here; if you have your own private cloud already, you just need the license. Regardless of where you have your cloud, you should probably have a firewall to give you tighter control.
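The "basic packet filtering" most providers expose looks something like an EC2 security group rule. A sketch with boto3 (the CIDR block and group ID are hypothetical; the actual API call needs credentials):

```python
# Sketch: restrict inbound HTTPS on an EC2 security group to one admin
# network, instead of leaving it open to the world.


def https_ingress_rule(cidr):
    """Build the IpPermissions entry boto3 expects for inbound TCP/443."""
    return [{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": cidr}],
    }]


def allow_admin_https(ec2, group_id):
    """`ec2` is a boto3 EC2 client; requires AWS credentials to run."""
    ec2.authorize_security_group_ingress(
        GroupId=group_id,                       # e.g. "sg-0123456789abcdef0"
        IpPermissions=https_ingress_rule("203.0.113.0/24"),
    )
```

It isn't a full firewall - no deep inspection, no IPS - but it is the tighter control you should be using either way.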

6) Redundancy

One of the ways the cloud sells itself is on its instant super-redundancy and availability. As we've learned, even the large cloud providers are susceptible to downtime. As I discussed above in the uptime/downtime section, downtime just happens.

The more or less instant redundancy marketing line is somewhat true: you can absolutely load balance your apps across multiple Amazon EC2 instances in multiple availability zones. But this isn't some magic feature you just get - it costs extra. Don't be fooled by those sorts of marketing tricks.

As I wrote this section I began thinking about the abstraction layers discussed in the API section and started to wonder: is it possible to build an application hosted and load balanced across multiple cloud providers? I bet it is, but now my brain hurts (and I suspect if I did that, my wallet would be hurting too). Anyone doing that out there?

7) Encrypt Early, Encrypt Often

Before Amazon introduced the ability to encrypt in their storage offering (S3), I wrote a tool called logsup that would automatically rotate (through logrotate), encrypt (through GPG), and upload (to S3) old log files. It takes some metadata and writes it to Amazon's SimpleDB service so I can easily search and figure out what data was in the encrypted log files.

Of course, I thought I was really clever when I wrote it, but then four days later Amazon introduced their encryption feature that has better key management than GPG. Eventually I'll rewrite logsup to take advantage of that, but until then I will keep stubbornly using it.

There are two primary lessons to take away from my logsup adventure. First, you should always encrypt sensitive data before it leaves your control. Second, you should always write a receipt for that data so you know where it came from and, at least abstractly, what type of data it contains. This will allow some peace of mind that your data is safe and that you will be able to find it later when you need it most.
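A minimal sketch of that encrypt-then-receipt flow (this is not logsup itself; the GPG recipient, bucket name, and field names are hypothetical, and the upload half needs boto3 credentials and a `gpg` binary):

```python
# Sketch: encrypt a log file before it leaves our control, upload it,
# and keep a searchable "receipt" describing what it was.
import hashlib
import os
import socket
import subprocess
import time


def make_receipt(path, description):
    """Record where the data came from and abstractly what it contains,
    so the encrypted blob can be found and identified later."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "source_host": socket.gethostname(),
        "filename": os.path.basename(path),
        "sha256": digest,
        "description": description,       # e.g. "web server access logs"
        "uploaded_at": int(time.time()),
    }


def encrypt_and_upload(s3, bucket, path, recipient):
    """Encrypt with GPG first, then upload the ciphertext to S3.

    `s3` is a boto3 S3 client; needs credentials and the recipient's
    public key in the local GPG keyring.
    """
    encrypted = path + ".gpg"
    subprocess.check_call(
        ["gpg", "--batch", "--recipient", recipient,
         "--output", encrypted, "--encrypt", path])
    with open(encrypted, "rb") as f:
        s3.put_object(Bucket=bucket, Key=os.path.basename(encrypted), Body=f)
```

The receipt would then go to SimpleDB (or any small datastore) so you can answer "what is in that blob?" without downloading and decrypting it.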

Depending on the deployment, encryption also offers some protection against snooping tenants when you’re using cloud storage or other less private storage. It is not a replacement for strong access control or larger security precautions but it can provide a decent layer of protection against basic prying eyes.

8) Cloud Is Cheap!

There are a number of different types of cloud service (see cheatsheet), and the whole "cloud is cheap" myth only holds up for a few of them. Cloud can be very cheap when you're discussing Software as a Service (SaaS); e.g. Google's Apps for Business is only around $5 per user per month, or $50 per user per year on the annual plan.

You as an independent person or company cannot run a mail server for any number of users for less than that cost per user. The hardware alone would set you back more, so it makes very good financial sense to run your email in the cloud. Whether it makes good common sense is a different story, but I think it is becoming more generally accepted as a best practice to outsource your email, even if only for the cost benefit.
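A back-of-the-envelope check on that math - the self-hosted figures below are hypothetical placeholders, not a real quote, but they show why the SaaS side usually wins for email:

```python
# Rough cost comparison for hosted vs. self-hosted email.


def saas_cost(users, per_user_per_year=50):
    """Annual SaaS cost at the $50/user/year rate mentioned above."""
    return users * per_user_per_year


def self_hosted_cost(hardware, admin_hours_per_year, hourly_rate):
    """Annual self-hosted cost: hardware amortized over one year plus
    admin time. All inputs are hypothetical estimates."""
    return hardware + admin_hours_per_year * hourly_rate


# For a 25-person shop: saas_cost(25) is $1,250/year, while even modest
# hardware plus roughly an hour of admin time a week, e.g.
# self_hosted_cost(3000, 50, 75), comes to $6,750.
```

For IaaS and PaaS the same exercise can come out the other way, which is the point of this whole section: run your own numbers.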

The story gets a lot murkier when you move away from software into infrastructure or platforms as services. Depending on your needs and usage, this can be far more expensive than running your own equipment - or much cheaper; again, it just depends on those needs. If you want to build a redundant platform or infrastructure with off-the-shelf hardware and Linux, prepare to pay for the privilege. It really depends, though; I've seen analyses where it is cheaper to do it yourself, so as with all advice, your mileage may vary.

9) Logs In The Cloud

There is a very persistent myth that you can't get proper logging for your cloud applications, and this is patently untrue. An EC2 instance is just an operating system tweaked a little bit to run on Amazon's infrastructure. There is nothing magical about it; it is the same as if you were running it on a VMware cluster, and you can get your logs from there just fine, right? Right?

Of course you can, your application and OS will log the same as if you were hosting it locally. You could even put a log collection server in the cloud if you were so inclined or use something like Loggly or Splunk Storm and have your log analysis up there too.
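As a sketch of that "log collection server in the cloud" idea, the Python standard library can ship application logs off-instance over syslog; the collector address below is hypothetical:

```python
# Sketch: send application logs to a central syslog collector so they
# survive even if the instance itself is terminated.
import logging
import logging.handlers


def cloud_logger(collector=("logs.example.com", 514)):
    """Return a logger that forwards records to a remote syslog
    collector over UDP. The default collector address is a placeholder."""
    logger = logging.getLogger("myapp")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(address=collector)
    handler.setFormatter(
        logging.Formatter("myapp: %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger
```

The same handler pointed at a hosted analysis service's syslog endpoint is essentially what the Loggly/Splunk Storm approach boils down to.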

When you start discussing SaaS or PaaS, the story gets a little darker: you are not necessarily buying access to the logs - you are outsourcing the service completely, and providers simply do not provide that same level of visibility. I guess that is their call; you just need to be prepared. As we discussed in the control panels section, the type of visibility you get will depend on how well the control panel is architected.

A lot of providers will give you access to logs for your specific instance (if only to cut down on support calls), but others do not. It is simply a matter of asking the right questions and, again, doing your homework.

10) Service Level Agreements (SLA)

When you are choosing a cloud provider, be sure you actually read their SLA. This is the agreement that spells out your interactions and expectations when dealing with your provider. It is the document that tells you how much uptime to expect (they all say 99.999% uptime; they are almost all deceitful) and, more importantly, what sort of compensation you will get if they violate their SLA.
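It is worth converting those uptime percentages into actual minutes before you sign anything. A quick sketch of the arithmetic:

```python
# What an uptime promise actually allows: convert the SLA's percentage
# into minutes of permitted downtime per year.


def allowed_downtime_minutes(uptime_percent, minutes_per_year=365 * 24 * 60):
    """Minutes of downtime per year that still satisfy the stated uptime."""
    return minutes_per_year * (1 - uptime_percent / 100.0)
```

"Five nines" sounds ironclad but still permits a little over five minutes a year, and a 99.9% SLA permits nearly nine hours - compare either figure to that 30-hour weekend outage.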

Expect a lot of lawyer-speak here, so if you are putting something really critical in the cloud have your lawyer read it over. You won’t have a lot of negotiation room usually, but at least you’ll be able to plan for the possible risks with a clear head.

Typically an SLA will link out to a document describing security precautions taken by the provider to protect your data. This can be crucially important to have so you can effectively add in tech to fill the gaps, though sometimes these documents tend to be a bit vague.

While this list wasn't entirely security focused, the intent was to guide folks looking into cloud deployments for their organizations and to help them better prepare for the differences in securing those environments.

Hopefully it met those goals and more. Please send any feedback on this list to blog@hurricanelabs.com.
