In this article, we’ll provision a managed Kubernetes cluster on AWS, and we’ll also look at some of the alternatives.
Why Managed Kubernetes?
Can’t you just provision a cluster yourself? Well, yes and no. Provisioning a cluster with kubeadm is relatively easy; however, it’s still a lot of work to maintain, secure, update, and upgrade that cluster. By using a managed Kubernetes solution, you push that maintenance and security burden onto your cloud provider, and you can focus on your worker nodes instead.
Managed Kubernetes Service in the Cloud
When it comes to “Kubernetes as a Service”, you have several options to pick from.
All of these platforms are CNCF-conformant, which means that, in theory, it’s relatively easy to migrate your Kubernetes workloads from one cluster to another. In practice, though, vendor lock-in is real, so it still makes sense to spend time reviewing all of the offerings and picking the cluster that suits your needs best.
For example, if you want your control plane to be upgraded automatically, you might like to choose GKE; on the other hand, if you want more control over your control plane’s upgrade cycles, you might choose either AKS or EKS.
If you want the most advanced Kubernetes version, then GKE would be a better option. On the other hand, if cost is your primary concern, then AKS would be preferable since both EKS and GKE charge for the control plane, while AKS doesn’t.
In short, although all of the big players offer “more or less” the same feature set, the devil is in the details, and you’d be better off if you do your research beforehand.
That leads to the follow-up question: “Why are we choosing EKS for FizzBuzz Pro?”. Well, it’s mostly for convenience. We’ll be using other AWS services, so keeping our cluster in AWS means better integration with less operational headache. Yes, that also means we will likely be vendor-locked-in; however, that’s a calculated risk we’re willing to take.
Installing an EKS Cluster
There are two ways to install an EKS cluster.
- Using the AWS Console Web UI.
- Using the eksctl command-line tool.
In this article, I will cover the second option, as it is the route most people take, and it works out of the box. Setting things up through the AWS Console, by contrast, is a more tedious and error-prone process.
Installing AWS CLI
Before moving any further, make sure that you have AWS CLI installed and configured on your system.
To test that, you can type aws --version in your terminal.
Here is what I get when I do it:
$ aws --version
aws-cli/2.2.9 Python/3.8.8 Darwin/20.5.0 exe/x86_64 prompt/off
If this command fails for you, follow these instructions to install AWS CLI to your system.
In my case, that meant fetching the latest macOS binary and following the on-screen instructions.
This one is important:
Make sure that the user installing EKS is the same user you configure to use the AWS CLI.
Why? Because initially, only the user who created the EKS cluster can access and modify it. So if your AWS CLI user and the user you use to provision your EKS cluster are not the same, you’ll have to do some RBAC gymnastics that you wouldn’t otherwise have to do.
You can always change this later and assign different users with different roles to your cluster. That said, I’ve found it hassle-free to first configure the cluster with an admin-level user and then set up fine-grained access control later.
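For reference, once the cluster exists, eksctl can map additional IAM users into the cluster’s RBAC for you. A sketch, assuming a cluster named fizz-cluster in us-west-2; the account ID and user name below are placeholders:

```shell
# Map a second IAM user (placeholder ARN) into the cluster's RBAC.
# "system:masters" grants full admin; use a narrower group in production.
eksctl create iamidentitymapping \
  --cluster fizz-cluster \
  --region us-west-2 \
  --arn arn:aws:iam::111122223333:user/teammate \
  --group system:masters \
  --username teammate
```

This edits the aws-auth ConfigMap under the hood, which is the same thing you’d otherwise be editing by hand with kubectl.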
Configuring IAM Policies
After installing AWS CLI, you need to configure it to authenticate you with AWS services.
If you don’t have an access key yet, navigate to the IAM console for the user that you are going to use, create an access key, and download a CSV export of it.
For my user, I provide at least AdministratorAccess as a permission because, when provisioning Kubernetes resources, the user will have to touch a lot of AWS infrastructure: CloudFormation templates, network ACLs, subnet mappings, security groups, and the actual provisioning and configuration of EC2 instances.
If you want to be more granular, or if you are not comfortable with giving admin access to your user, then the following policies can be a starting point:
If you see any errors during EKS provisioning, you may want to add more policies to this chain, but those should be good enough for most cases.
Configuring AWS CLI
Once you have set up your policies, downloaded your IAM credentials, and ensured that AWS CLI is installed on your system, it’s time to configure the AWS CLI. You can do that by executing aws configure in the terminal:
➜ ~ aws configure
AWS Access Key ID [****************ABCD]:
AWS Secret Access Key [****************DEFG]:
Default region name [us-west-2]:
Default output format [yaml]:
Verifying AWS CLI Configuration
After you enter your Access Key ID and Secret Access Key and answer a few questions, the installer will create a ~/.aws folder for you:
$ pwd
/Users/volkan/.aws
$ tree .
├── config
└── credentials
$ cat config
[default]
region = us-west-2
output = yaml
$ cat credentials
[default]
aws_access_key_id = your_access_key_id
aws_secret_access_key = your_secret
If you see all of these, your AWS client is all set. Next up is installing eksctl.
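As a final sanity check, you can ask AWS’s Security Token Service which identity your CLI calls are signed with; if this succeeds, your credentials are valid. The output fields shown are, of course, account-specific:

```shell
# Confirms the CLI can authenticate and shows which
# IAM identity (UserId, Account, Arn) it resolves to.
aws sts get-caller-identity
```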
Installing eksctl
eksctl is a command-line utility for creating and managing clusters on EKS. It’s written in Go and uses AWS CloudFormation under the hood.
You can follow the official eksctl installation instructions on AWS to set it up on your system.
In my case, that meant installing Homebrew first:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
Then adding the necessary tap to HomeBrew:
brew update
brew tap weaveworks/tap
Followed by installing eksctl itself:
brew install weaveworks/tap/eksctl
If you get a successful response to eksctl version, then you have set it up correctly, and we can move on to the next step.
$ eksctl version
0.57.0
Provisioning Our Cluster
Provisioning an EKS cluster costs a non-trivial amount of money. You not only pay for the control plane, but also for any resources your cluster consumes.
As of this writing, aside from provisioning the control plane, eksctl spins up m5.large EC2 instances by default. And those instances are not cheap.
So if you are experimenting with things, make sure to tear down your cluster once you are done to avoid additional costs.
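Tearing a cluster down is, thankfully, also a one-liner. A sketch, assuming the cluster name and region used later in this article:

```shell
# Deletes the cluster, its node groups, and the
# CloudFormation stacks eksctl created along the way.
eksctl delete cluster --name fizz-cluster --region us-west-2
```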
Okay, since you have been warned, let’s move to the fun part.
You can follow the detailed instructions here. In this article, I’ll just summarize what I did to provision the FizzBuzz Pro EKS cluster.
Create a Key Pair
Let’s create a key pair first and note it down.
aws ec2 create-key-pair --region us-west-2 --key-name fizz-kp
I am using us-west-2 as the region because it’s closer to where I live, and it’s also one of the AWS regions where new and experimental AWS services and features are typically introduced first.
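Note that create-key-pair prints the private key material in its response; if you want to SSH into your nodes later, you need to save it locally. A sketch, assuming the same key name and region (the .pem path is my own choice):

```shell
# Extract just the private key from the response and save it.
aws ec2 create-key-pair \
  --region us-west-2 \
  --key-name fizz-kp \
  --query 'KeyMaterial' \
  --output text > fizz-kp.pem

# Restrict permissions so ssh will accept the key file.
chmod 400 fizz-kp.pem
```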
Create the Kubernetes Cluster
After all this prep work, creating the Kubernetes cluster is a one-liner:
eksctl create cluster \
  --name fizz-cluster \
  --region us-west-2 \
  --with-oidc \
  --ssh-access \
  --ssh-public-key fizz-kp \
  --managed
This process can take up to half an hour; at the end of it, you’ll have a shiny new EKS cluster in all its glory.
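Once it finishes, you can verify that everything is in place. eksctl writes the cluster’s credentials to your kubeconfig by default, so kubectl should work right away:

```shell
# The nodes should show up in "Ready" state.
kubectl get nodes -o wide

# You can also list the cluster through eksctl itself.
eksctl get cluster --region us-west-2
```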
Cluster add-ons such as vpc-cni are NOT enabled by default, and you’ll most likely need them.
Navigate to “Configuration » Add-ons” and manually enable them.
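If you prefer to stay in the terminal, the managed add-ons can also be enabled through the AWS CLI. A sketch, using the cluster name and region from above:

```shell
# Enable the vpc-cni managed add-on without touching the console.
aws eks create-addon \
  --cluster-name fizz-cluster \
  --addon-name vpc-cni \
  --region us-west-2

# Check that the add-on reaches the ACTIVE status.
aws eks describe-addon \
  --cluster-name fizz-cluster \
  --addon-name vpc-cni \
  --region us-west-2
```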
And, just like that, we’ve created a Kubernetes cluster using eksctl.
Resources and Additional Reading
In this article, we have seen how to create a managed Kubernetes cluster on AWS using eksctl.
Coming up next:
- We’ll see how we can containerize our microservices and register them with AWS ECR as container images;
- Create Deployment manifests for our microservices;
- And deploy our microservices to our cluster using the manifests we’ve created.
After all of these are done, I plan to talk about each service, one by one.
So, as always, there’s a lot to cover.
Stay tuned… And, may the source be with you 🦄.