I’ve taken the courses, I have practical experience, I paid the exam fee and I passed the test. That makes me an “Amazon Web Services (AWS) Certified Solutions Architect – Associate”. Wow, what a mouthful. But what does that mean to you?
Per AWS, I have “experience designing distributed applications and systems on the AWS platform”.
Yes, but what does that mean?
AWS offers somewhere around 100 different services. These range from simple email to virtualized servers to “serverless” computing to big data processing, and everything in between.
As a Solutions Architect, I know how to navigate the roadmap that makes up “AWS town”. When we speak, I strive to understand your existing resources and how they are used, or what your requirements may be for a new project. I take that information and convert those requirements into a plan that utilizes AWS services. This could be an “all-in” approach or a mixed on-premises/AWS approach, depending upon your needs.
I then implement that plan. I have extensive experience moving resources to AWS or creating those resources from scratch. If I lack expertise in what you need, I will either use my resources to learn how to accomplish it or find another specialist who can make it happen.
So how about an example?
A client hosts their website on three web servers behind a load balancer, backed by a Microsoft SQL Server database. Their hardware is starting to get dated and could use some faster systems, but upgrading requires a significant up-front investment in hardware, OS licenses and potentially a new SQL Server license.
The client grants me remote access to their systems, and I look at the average CPU, memory and disk usage metrics along with the peaks. I then translate those metrics into what I believe would be suitable capacity on AWS.
Once the customer agrees to the plan, I go to work. The first item is to set up what I call a “virtual datacenter”. In AWS terms this is a “virtual private cloud”, or VPC. This gives us, essentially, a router, firewall, hacker protection and underlying physical infrastructure such as cooling, redundant electrical, backup electrical, redundant Internet, physical security, fire suppression and a hardened building.
With that in place, I set up IP subnets, an Internet Gateway, NAT gateways and routes, along with security rules, much as I would on an ASA or other firewall appliance. I only allow in traffic that is supposed to get in: normally HTTP/S traffic from the public, and administrative traffic from certain authorized users and/or networks.
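To make those security rules concrete, here is a rough sketch of the kind of ingress rule set I might define with Python and boto3. The ports, CIDR ranges and group ID are hypothetical placeholders for illustration, not a recommendation for any real network:

```python
# Hypothetical ingress rules for the web tier, in the shape that boto3's
# authorize_security_group_ingress expects (all values are placeholders).
ingress_rules = [
    {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
     "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public HTTP"}]},
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public HTTPS"}]},
    # Administrative access only from a known office network.
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
     "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "admin SSH"}]},
]

# With real AWS credentials, applying the rules would look like:
# import boto3
# ec2 = boto3.client("ec2")
# ec2.authorize_security_group_ingress(
#     GroupId="sg-0123456789abcdef0",  # placeholder group ID
#     IpPermissions=ingress_rules,
# )
```

Anything not explicitly listed stays closed by default, which is exactly the behavior you would want from a hardware firewall.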
From there I “spin up” a managed relational database service running Microsoft SQL Server, configured with industry best practices and automated backups. Next I spin up three web servers in different “availability zones”. This gives us high availability and spreads the workload across servers that sit on physically separate power systems and Internet connections, even in entirely different buildings, all connected by high-speed fiber.
To top it off, I enable a load balancer to distribute traffic across the three web servers. In addition, I migrate shared user data to Simple Storage Service (S3), a very low-cost and highly resilient object store that all three servers can access. This takes the place of setting up a virtual NAS or syncing files across all three machines.
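A minimal Python sketch of that shared-storage idea, assuming boto3: any of the three web servers runs the same code against the same bucket. The bucket and function names here are mine, invented for illustration, and not part of any AWS API:

```python
BUCKET = "example-client-shared-assets"  # hypothetical bucket name


def save_upload(s3, key, body: bytes):
    """Any web server writes user data straight to S3..."""
    s3.put_object(Bucket=BUCKET, Key=key, Body=body)


def load_upload(s3, key) -> bytes:
    """...and any other server reads the same object back. No file
    syncing between servers, and no shared NAS to maintain."""
    return s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()


# With real AWS credentials:
# import boto3
# s3 = boto3.client("s3")
# save_upload(s3, "uploads/avatar.png", image_bytes)
```

Because S3 is the single source of truth, a file uploaded through one web server is immediately available to the other two.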
This is the kind of project I do most commonly. However, the work can expand into image or video processing, mobile messaging, reporting, big data ingestion and processing, and much more.
If you were to try to replicate everything in my example on your own, you would almost always find it to be a budget buster. By using a cloud service such as AWS, that infrastructure cost is spread across a large customer base. In addition, buying hardware in bulk allows AWS to reduce costs further.
When using AWS, you pay for what you use. If you only need everything up eight hours a day, you can literally shut it all down the rest of the time and pay only for your up-time.
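The arithmetic behind that is simple. Using a hypothetical hourly rate (the numbers below are illustrative, not current AWS pricing), running three servers eight hours a day costs roughly a third of running them around the clock:

```python
HOURS_PER_MONTH = 730  # common billing approximation for one month
hourly_rate = 0.10     # hypothetical per-instance rate in USD, not real pricing
instances = 3

always_on = instances * hourly_rate * HOURS_PER_MONTH  # running 24/7
eight_hours_a_day = instances * hourly_rate * 8 * 30   # ~8 h/day for 30 days

print(f"24/7:    ${always_on:.2f}/month")          # $219.00
print(f"8 h/day: ${eight_hours_a_day:.2f}/month")  # $72.00
```

With on-premises hardware, by contrast, the capital cost is the same whether the servers run one hour a day or twenty-four.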
If you would like consultation on a project similar to what I described, or even an off-the-wall idea, feel free to contact me.