Introduction to Amazon EC2
Welcome to issue #8 of “AWS Services Shorts”. In each issue, I present an AWS service, explore its strengths and weaknesses, cover some use cases, and finally discuss the most common mistakes people make with it.
Today’s issue is about Amazon EC2!
If you prefer, you can watch the video on YouTube.
Amazon EC2, which stands for Elastic Compute Cloud, is a web service that offers scalable computing capacity in the cloud. The main purpose of EC2 is to provide a scalable environment for running applications, with the ability to quickly increase or decrease capacity based on demand. It was created to remove the complexities of managing the underlying infrastructure and to give businesses and individuals the flexibility to innovate and scale.
What is cloud computing: https://aws.amazon.com/what-is-cloud-computing/
Features: https://aws.amazon.com/ec2/features/
FAQs: https://aws.amazon.com/ec2/faqs/
Pricing: https://aws.amazon.com/ec2/pricing/
Docs: https://docs.aws.amazon.com/ec2/
Amazon EC2 provides resizable virtual servers in the cloud, allowing developers and businesses to run applications without the overhead of managing physical hardware. This makes it suitable for hosting web applications, websites, and backend systems.
CMS: https://dev.to/aws-builders/deploy-wordpress-on-ec2-by-wordpress-ami-2mog
Basic custom infrastructure: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/TUT_WebAppWithRDS.html
Containerized applications: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_types.html
Kubernetes clusters: https://aws.amazon.com/kubernetes/
Databases: https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-oracle-database/ec2-oracle.html
EC2 instances can be provisioned in large numbers for short durations to handle batch-processing tasks. This is particularly useful for jobs that require significant computational power for a limited time.
EC2 is integral to architectures that need to scale based on demand. With its auto-scaling feature, it can automatically adjust the number of instances based on the workload, scaling out (adding instances) to keep applications performant during spikes and scaling in (removing instances) when the load subsides.
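To make the scaling behavior concrete, here is a simplified sketch of the arithmetic behind a target-tracking scaling policy: capacity is adjusted proportionally so that the per-instance metric (e.g. average CPU utilization) returns to its target. This is an illustration of the idea, not the actual EC2 Auto Scaling implementation, which also applies cooldowns and capacity limits.

```python
import math

def desired_capacity(current_capacity: int, current_metric: float,
                     target_metric: float) -> int:
    """Simplified target-tracking math: scale capacity proportionally
    so the per-instance metric returns to its target value."""
    return math.ceil(current_capacity * current_metric / target_metric)

# 4 instances averaging 90% CPU, targeting 60% -> scale out to 6
print(desired_capacity(4, 90.0, 60.0))  # 6
# Load drops to 20% average CPU -> scale in to 2
print(desired_capacity(4, 20.0, 60.0))  # 2
```

Rounding up is deliberate: when in doubt, the policy prefers slightly more capacity over a breached target.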
WordPress: https://docs.aws.amazon.com/whitepapers/latest/best-practices-wordpress/reference-architecture.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
Researchers and scientists use EC2 for running complex simulations and models. The service offers specialized instance types, such as those optimized for GPU tasks, making it suitable for these high-performance needs.
One of EC2’s strengths is the variety of instance types it offers, catering to different use cases—from compute-optimized and memory-optimized instances to GPU-accelerated ones.
EC2’s ability to quickly scale up or down based on demand ensures that applications remain performant and responsive, even during traffic spikes.
EC2 offers multiple layers of security, including AWS Identity and Access Management (IAM) for controlled access, Virtual Private Cloud (VPC) for network isolation, and data encryption.
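On the IAM side, access to EC2 is controlled with JSON policy documents. The sketch below builds an illustrative least-privilege policy: read-only describe access, plus start/stop limited to instances carrying a specific tag. The actions and the aws:ResourceTag condition key are real IAM constructs; the Project=demo tag is a hypothetical example.

```python
import json

# Illustrative least-privilege IAM policy for EC2 (tag values are hypothetical).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Read-only visibility into instances.
            "Effect": "Allow",
            "Action": ["ec2:DescribeInstances"],
            "Resource": "*",
        },
        {
            # Start/stop only instances tagged Project=demo.
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringEquals": {"aws:ResourceTag/Project": "demo"}},
        },
    ],
}

print(json.dumps(policy, indent=2))
```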
With AWS’s vast global infrastructure, EC2 offers high availability and fault tolerance, ensuring applications remain up and running even in the event of component failures.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/disaster-recovery-resiliency.html
https://aws.amazon.com/blogs/architecture/lets-architect-creating-resilient-architecture/
https://aws.amazon.com/blogs/architecture/lets-architect-resiliency-in-architectures/
Setting up a VPC and managing network access can be complex, especially for those not well-versed in networking.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-networking.html
https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html
For newcomers, the multitude of options and configurations in EC2 can be overwhelming, leading to potential misconfigurations.
Traffic spikes, DDoS attacks, or third-party integrations can cause sudden, unpredictable increases in data transfer costs and, if auto-scaling is enabled, trigger the automatic provisioning of additional EC2 instances to handle the load, inflating the bill further.
https://docs.aws.amazon.com/autoscaling/plans/userguide/best-practices-for-scaling-plans.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-capacity-limits.html
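This is why an Auto Scaling group’s capacity limits matter: desired capacity is always kept within [MinSize, MaxSize], so a sensible MaxSize acts as a cost circuit breaker during a runaway spike. A minimal sketch of that clamping behavior:

```python
def clamp_capacity(requested: int, min_size: int, max_size: int) -> int:
    """An Auto Scaling group keeps desired capacity within
    [MinSize, MaxSize]; MaxSize caps runaway scale-out costs."""
    return max(min_size, min(requested, max_size))

# A spike asks for 40 instances, but MaxSize=10 caps the group.
print(clamp_capacity(40, min_size=2, max_size=10))  # 10
```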
Improperly configured security groups can either expose sensitive resources to the public or overly restrict access, causing application failures.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security.html
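A classic misconfiguration is opening an admin port like SSH (22) or RDP (3389) to the whole internet. As a sketch of the kind of check a review script might run, here is a small detector over a hypothetical list of (port, source CIDR) ingress rules — not a live audit of a real security group:

```python
import ipaddress

INTERNET = ipaddress.ip_network("0.0.0.0/0")

def risky_rules(rules, sensitive_ports=(22, 3389)):
    """Flag ingress rules that open admin ports (SSH, RDP) to the internet."""
    return [
        (port, cidr)
        for port, cidr in rules
        if port in sensitive_ports and ipaddress.ip_network(cidr) == INTERNET
    ]

# Hypothetical rule set: HTTPS to the world is fine; SSH to the world is not.
rules = [(443, "0.0.0.0/0"), (22, "0.0.0.0/0"), (22, "10.0.0.0/24")]
print(risky_rules(rules))  # [(22, '0.0.0.0/0')]
```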
Many users over-provision resources, leading to unnecessary costs. It’s essential to right-size instances based on workload requirements.
https://repost.aws/knowledge-center/ec2-instance-choose-type-for-workload
https://www.freecodecamp.org/news/how-to-select-the-right-ec2-instance/
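The core of right-sizing is simple: among the types that satisfy your workload’s requirements, pick the cheapest. A sketch of that selection, using a hand-written specs table — the vCPU/memory/price figures below are illustrative, so always check current AWS pricing rather than these numbers:

```python
# Illustrative per-type specs: (vCPUs, memory GiB, $/hour on-demand).
INSTANCE_TYPES = {
    "t3.medium":  (2, 4.0, 0.0416),
    "m5.large":   (2, 8.0, 0.096),
    "m5.xlarge":  (4, 16.0, 0.192),
    "c5.2xlarge": (8, 16.0, 0.34),
}

def right_size(vcpus_needed: int, mem_needed_gib: float) -> str:
    """Pick the cheapest instance type that meets the requirements."""
    candidates = [
        (price, name)
        for name, (vcpus, mem, price) in INSTANCE_TYPES.items()
        if vcpus >= vcpus_needed and mem >= mem_needed_gib
    ]
    return min(candidates)[1]

print(right_size(2, 6.0))  # m5.large
```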
Failing to regularly update and patch the operating systems and applications running on EC2 can expose them to security vulnerabilities.
Reserved Instances offer significant cost savings over On-Demand pricing. Not leveraging them for predictable workloads can be a financial misstep.
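The savings math is worth doing explicitly. With hypothetical rates of $0.096/hour On-Demand versus an effective $0.060/hour reserved (real discounts vary by term, payment option, and region), an always-on instance saves over $300 a year:

```python
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_savings(on_demand_hourly: float, reserved_hourly: float) -> float:
    """Yearly saving from a Reserved Instance, assuming 24/7 usage."""
    return (on_demand_hourly - reserved_hourly) * HOURS_PER_YEAR

# Hypothetical rates for one instance running continuously.
print(round(annual_savings(0.096, 0.060), 2))  # 315.36
```

The assumption of continuous usage is the key condition: for spiky or short-lived workloads, On-Demand or Spot pricing often wins.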
Without regular monitoring using tools like AWS Cost Explorer, users can quickly accrue large bills without realizing it.
Not setting up monitoring and alerts can lead to undetected failures or performance issues, impacting the user experience.
Not setting up regular backups using services like Amazon EBS snapshots can lead to data loss.
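Backups also need a retention policy, or snapshots accumulate cost forever (AWS offers Amazon Data Lifecycle Manager to automate this). As a sketch, here is the selection logic of a simple “keep the N most recent” policy over a list of snapshot dates:

```python
from datetime import date, timedelta

def snapshots_to_delete(snapshot_dates, keep_last=7):
    """Keep the N most recent snapshots; return the rest for deletion."""
    ordered = sorted(snapshot_dates, reverse=True)  # newest first
    return ordered[keep_last:]

# Ten daily snapshots; a 7-day retention marks the 3 oldest for deletion.
today = date(2024, 1, 10)
dates = [today - timedelta(days=i) for i in range(10)]
print(snapshots_to_delete(dates, keep_last=7))
```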
For variable workloads, not implementing auto-scaling can lead to either underutilized resources or performance issues during traffic spikes.
I hope you find this overview useful!
Did you like it? Too long? Too short? Is something missing?
Please let me know with a comment! 🙏
Your feedback is truly precious to me 😊
Attributions:
Icons from https://www.freepik.com/
Music by Sergii Pavkin from Pixabay