There is no denying that the cloud is everywhere. From small startups to large enterprises, companies have reaped the benefits of the cloud, focusing on solving business problems rather than worrying about infrastructure, all while keeping costs under control. Surveys suggest that one of the prime reasons companies migrate to the cloud is to transition from a CapEx to an OpEx model and to become more agile and operations oriented.
With all the hype in the IT industry around the advantages of cloud computing, a migration that is not thought through and executed correctly can land you in trouble. In this blog I will try to explain some of the points one should consider while migrating to the cloud.
In this blog I will refer to AWS as my primary cloud provider, but the points hold true for the rest of them.
Even though cloud providers like Amazon, Google, and Microsoft follow industry-standard security practices and certifications, one must be careful to understand the shared responsibility model of cloud systems. Cloud providers are primarily responsible for the security of their core services and infrastructure; this is sometimes called "security of the cloud". But the security of the data, and the access and role management (IAM) around it, is a responsibility that individual customers need to set up properly (referred to as "security in the cloud").
A prime example of this was the 2019 Capital One breach, in which personal information of over 100 million customers was exposed. The attacker, a former employee of the cloud provider, exploited a misconfigured firewall to obtain credentials for an over-privileged role and pull data out of cloud storage. Properly scoping those permissions, and turning on client-side encryption for sensitive fields in your data store, could have limited the damage.
Now, having said this, the onus is on the customer and its design team to set up the right data security and encryption (including client-side encryption) for the services they use. Cloud providers will not take responsibility for an incorrect setup of your cloud infrastructure.
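As a sketch of what "security in the cloud" looks like in practice, the snippet below builds the default-encryption configuration you would pass to S3's `put_bucket_encryption` API via boto3. The actual API call is commented out, since it needs credentials and a real bucket; the bucket name is a made-up placeholder.

```python
import json

# Hypothetical bucket name, for illustration only.
BUCKET = "my-customer-data"

# Default server-side encryption rule for the bucket: every new object
# is encrypted at rest with an AWS KMS key (SSE-KMS).
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
            },
            # Cache data keys to cut KMS request costs on busy buckets.
            "BucketKeyEnabled": True,
        }
    ]
}

# With boto3 and valid credentials, this would apply the rule:
# import boto3
# boto3.client("s3").put_bucket_encryption(
#     Bucket=BUCKET,
#     ServerSideEncryptionConfiguration=encryption_config,
# )

print(json.dumps(encryption_config, indent=2))
```

This only covers encryption at rest on the provider side; for truly sensitive fields you would still encrypt client-side before the data ever leaves your application.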
One security design principle to follow is least privilege: a user should have only the minimal access needed for the cloud services they are working on. Using the right access policies is key to securing applications, especially in big enterprise systems where managing user access to services can get tough.
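To make least privilege concrete, here is a minimal IAM policy sketch, expressed as the JSON document you would attach to a user or role. It grants read-only access to a single bucket and nothing else; the bucket name and ARNs are hypothetical.

```python
import json

# Least-privilege policy: read objects from one specific bucket only.
# The bucket name "app-reports" is made up for this example.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyAppReports",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::app-reports",
                "arn:aws:s3:::app-reports/*",
            ],
        }
    ],
}

print(json.dumps(least_privilege_policy, indent=2))
```

Note what is absent: no `"Action": "*"`, no `"Resource": "*"`. Broad wildcard grants are exactly what turns a single leaked credential into a company-wide incident.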
One incident, reported in 2014 as "Murder in the cloud", concerned a code-hosting company called Code Spaces. While distracting the company with a DDoS attack, the attacker gained access to its AWS control panel, including the EC2 console. The attacker demanded a ransom and, when it was not paid, started deleting the company's data, AMIs, and EBS snapshots, eventually forcing the company to shut down.
So even though cloud providers claim to be highly secure, the customers using the services are the ones who really need to think through cloud security best practices. As a customer, you must be clear about your share of the responsibility in the cloud.
“Cloud is not for cost saving. It is for cost control.”
Overall IT cost is something we look to control when migrating to the cloud. This works because the cloud follows a "pay-as-you-go" model: we pay only for the services we use and for the time we use them. But cost can spiral out of control if you are not using the right services.
Provisioning instances up front can be costly. Auto-scaling can significantly help keep cost in check, but the savings disappear if you don't have a mechanism to cool down and terminate the scaled-up instances once load drops.
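One way to guarantee that scaled-up capacity cools back down is a target-tracking policy with an explicit scale-in cooldown. The parameters below follow the shape of Application Auto Scaling's `put_scaling_policy`; the service, resource IDs, and numbers are illustrative assumptions, and the API call itself is commented out.

```python
# Target-tracking policy parameters in the shape expected by
# Application Auto Scaling's put_scaling_policy (illustrative values).
scaling_policy = {
    "PolicyName": "keep-cpu-at-50",          # hypothetical policy name
    "ServiceNamespace": "ecs",               # scaling an ECS service here
    "ResourceId": "service/prod/web",        # hypothetical cluster/service
    "ScalableDimension": "ecs:service:DesiredCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 50.0,  # aim for 50% average CPU utilization
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,   # add capacity quickly under load
        "ScaleInCooldown": 300,   # remove it slowly, to avoid flapping
    },
}

# With boto3 and valid credentials, this would register the policy:
# import boto3
# boto3.client("application-autoscaling").put_scaling_policy(**scaling_policy)

print(scaling_policy["TargetTrackingScalingPolicyConfiguration"])
```

The asymmetry is deliberate: scaling out fast protects users, while scaling in slowly protects you from thrashing, and the scale-in rule is what actually stops the billing meter.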
The cloud also provides serverless computing, which customers can leverage instead of running expensive EC2 instances. Serverless services like AWS Lambda auto-scale on AWS-managed infrastructure, often at a lower cost per request than a normal EC2 instance.
Make use of Reserved Instances. AWS lets you book instances on a fixed-term contract at a cost significantly lower than on-demand instances. Though the fit of Reserved Instances depends on each customer's requirements, for steady, predictable workloads they are easy savings.
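Some back-of-the-envelope arithmetic shows why this matters. The hourly rates below are made-up round numbers (real prices vary by instance type, region, and term), but the shape of the calculation is the point:

```python
HOURS_PER_YEAR = 24 * 365  # 8760

# Hypothetical rates; check current AWS pricing for real figures.
on_demand_rate = 0.10   # $/hour, pay-as-you-go
reserved_rate = 0.06    # $/hour, 1-year commitment

# Yearly cost of one instance running 24/7 under each model.
on_demand_cost = on_demand_rate * HOURS_PER_YEAR
reserved_cost = reserved_rate * HOURS_PER_YEAR
savings_pct = 100 * (1 - reserved_cost / on_demand_cost)

print(f"on-demand: ${on_demand_cost:.2f}/yr")  # $876.00/yr
print(f"reserved:  ${reserved_cost:.2f}/yr")   # $525.60/yr
print(f"savings:   {savings_pct:.0f}%")        # 40%
```

The flip side: if the instance runs only a few hours a day, on-demand or Lambda may beat the reservation, which is exactly why the decision is requirement-specific.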
No system is perfect. Systems are meant to fail.
Service failures can happen to any internet-facing system, and you could potentially lose millions of dollars if your systems are not set up properly. In an on-premise system the risk profile feels lower only because most of the maintenance is in-house.
As per their SLA documents, cloud providers typically commit to 99.9% availability. Read carefully: that means a 0.1% chance of your service being down, which works out to roughly 43 minutes of downtime per month, or about 8.8 hours per year. For a highly critical application, even 43 minutes a month can mean a potential million-dollar loss.
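The downtime allowance falls straight out of the availability figure. This small sketch converts an SLA percentage into the minutes of downtime it permits over a given period:

```python
def allowed_downtime_minutes(availability_pct: float, period_hours: float) -> float:
    """Minutes of downtime permitted by an availability percentage over a period."""
    return (1 - availability_pct / 100) * period_hours * 60

# 99.9% ("three nines") over a 30-day month and a 365-day year:
monthly = allowed_downtime_minutes(99.9, 30 * 24)
yearly = allowed_downtime_minutes(99.9, 365 * 24)

print(f"per month: {monthly:.1f} min")  # 43.2 min
print(f"per year:  {yearly:.1f} min")   # 525.6 min (~8.8 hours)
```

Running the same function with 99.99% shows each extra "nine" cuts the allowance by a factor of ten, which is why those SLA decimals are worth reading closely.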
Having a highly available system should be the primary design principle in such situations. It is always advisable to span your data and instances across multiple Availability Zones (AZs), or even across regions. This reduces the risk of your systems going down and prevents potential service outages.
If you are planning to keep using on-premise datacenters, leveraging dedicated connectivity to the cloud is highly recommended, as it reduces the risk of exposing your services to the public internet. Services like AWS Direct Connect can help in such situations.
To conclude on security, cost control, and high availability: it is essential for a cloud customer to understand the pros and cons before deciding on a cloud-based solution. It may well turn out that you benefit more from on-premise than from the cloud.