This post describes different ways to deploy Kubernetes locally, emulating on your own machine what you would do in a real cloud. Running Kubernetes locally can help you save costs, support parts of your CI/CD pipeline for cloud-native applications, and let you do “real” work in situations where you don’t have reliable internet access.
This blog post walks through the following technologies:
Note that these approaches worked in spring 2019 when I wrote this post, but this stuff moves fast! By the time you read my advice, there might be additional options for running Kubernetes locally.
Before you go any further, I strongly suggest a machine with at least 8 GB of RAM. Some of the following options are based on virtual machines (VMs), and a modern operating system needs at least 4 GB of RAM for itself before issues arise. Having 8 GB of RAM allows options like VirtualBox to get enough resources to do more than the bare minimum.
Then, you might as well install kubectl on your local machine. It’s the canonical way to interface with any Kubernetes cluster, and your local instance is no different. You could call the API directly, but I’d only suggest doing that if you have a specific reason to.
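However you end up standing up a local cluster, a few read-only kubectl commands are a quick way to confirm it’s reachable. A minimal sketch, assuming a cluster is already running and your kubeconfig points at it:

```shell
# confirm the client is installed and note its version
kubectl version --client

# show the API server address of the cluster kubectl currently targets
kubectl cluster-info

# list the nodes in the cluster, a good first smoke test
kubectl get nodes
```

These work the same against every option described below, which is part of the appeal of standardizing on kubectl.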
Minikube is the de facto entry point for most people learning or figuring out how to run Kubernetes on a local machine. I’ve even heard of some people using Minikube to run Dev or QA environments on shared VMs in clouds. It feels a lot like the old devstack option back in the OpenStack days. Honestly, Minikube has echoes of it, and I feel it’s probably the best analogy.
Just like any seasoned veteran of the OpenStack ecosystem would say, devstack is designed for
a specific use case, and that’s how Minikube is targeted, too. Minikube was and is the answer
to the question “What is the fastest way I can get Kubernetes running on my laptop?” and it
succeeds at this goal.
However, you should note that it’s only one worker node on one machine, it relies on VirtualBox or the like for virtualization, and it has only 2 GB of RAM by default. This is little more than a toy. You can run your typical commands against it, but you can’t run anything beyond stock Kubernetes.
So how do you change this? You can run Minikube with a couple of other options to give it more resources at start time. Luckily, you have some flags you can pass!
minikube start --cpus 2 --memory 4096
The previous command gives your Minikube instance two virtual CPUs and 4 GB of RAM. This change should allow you to do more with it, like install Istio, but not much more. If you want to create or test something closer to your production instance, you should give it more resources; how much is up to your restrictions and specific situation.
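If you find yourself passing the same flags every time, Minikube can also persist these defaults so future starts pick them up. A quick sketch:

```shell
# persist resource defaults for future clusters
minikube config set cpus 2
minikube config set memory 4096

# subsequent starts use the saved values
minikube start
```

This saves you from retyping the flags and keeps teammates’ clusters sized consistently.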
After you are done working with the instance, I suggest either shutting it down or deleting it completely. If you plan on walking away from your local computer, or you don’t need the pods you are running long term, now is a good time to free up those resources so the VM running your Kubernetes cluster doesn’t keep claiming them.
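The relevant Minikube subcommands are stop and delete; a quick sketch:

```shell
# stop the VM but keep the cluster state for next time
minikube stop

# or tear everything down, including the VM and its disk
minikube delete
```

Stopping is fine for a lunch break; deleting is the right call when you want a clean slate.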
If you installed kubectl, you didn’t have to export anything or change any configuration. From what I understand, that behavior is by design: Minikube points kubectl at the local cluster it runs, which makes for a small but valuable UX win.
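You can see this for yourself: Minikube writes a context into your kubeconfig and makes it the active one. A quick check, assuming Minikube is running:

```shell
# the active context should be "minikube"
kubectl config current-context

# inspect the cluster and context entries Minikube created
kubectl config get-contexts
```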
Docker in Docker
Personally, Docker in Docker is the choice I make. It is from the Kubernetes-sigs project called kubeadm-dind-cluster. It creates three Docker containers on your local machine, one master and two worker nodes. It’s as close as you can get to what you would really run in the real world. And because they are just containers, they spin up extremely quickly.
All in all, you need just a handful of commands, like the following example:
wget https://github.com/kubernetes-sigs/kubeadm-dind-cluster/releases/download/v0.1.0/dind-cluster-v1.13.sh
chmod +x dind-cluster-v1.13.sh
As you can see, at the time of writing the version is 1.13, but the steps should be the same with whatever version is available when you follow along at home:
# start the cluster
./dind-cluster-v1.13.sh up

# add kubectl directory to PATH
export PATH="$HOME/.kubeadm-dind-cluster:$PATH"

kubectl get nodes
NAME          STATUS    ROLES     AGE       VERSION
kube-master   Ready     master    4m        v1.13.0
kube-node-1   Ready     <none>    2m        v1.13.0
kube-node-2   Ready     <none>    2m        v1.13.0
And that’s it! You now have a fully working multinode Kubernetes cluster running inside Docker containers on your machine. A few other useful commands:
# k8s dashboard available at http://localhost:8080/api/v1/namespaces/kube-system/services/kubernetes-dashboard:/proxy

# restart the cluster, this should happen much quicker than initial startup
./dind-cluster-v1.13.sh up

# stop the cluster
./dind-cluster-v1.13.sh down

# remove DIND containers and volumes
./dind-cluster-v1.13.sh clean
When I wrote this blog post, microk8s was a relative newcomer to the space. It’s supported by Canonical, the company behind Ubuntu. Microk8s is a simple snap installation, like the following command:
snap install microk8s --classic
Obviously, if you don’t have snap you’re out of luck, but if you can swing it, snap is a way to get a Kubernetes cluster up and running extremely quickly. One oddity with the system is that every command is prefixed with microk8s. Take a look at the following example:
microk8s.kubectl get nodes
The huge advantage is that you don’t have to mess with your KUBECONFIG to talk to your local cluster, but if you are a longtime user of Kubernetes, you must retrain your muscle memory. Take a look at the microk8s docs and play around with it.
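Part of playing around is turning on add-ons, which microk8s bundles behind an enable subcommand. A sketch, with add-on names as they were when I wrote this:

```shell
# check cluster status and which add-ons are enabled
microk8s.status

# enable DNS and the Kubernetes dashboard
microk8s.enable dns dashboard

# turn an add-on back off
microk8s.disable dashboard
```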
k3s is the newest way to get a Kubernetes cluster on a local machine. Supported by Rancher Labs, it has huge promise. To quote the GitHub page: “Lightweight Kubernetes. Easy to install, half the memory, all in a binary less than 40mb.”
That’s extremely impressive. But it’s designed for a specific use case, not our original question of running a Kubernetes cluster on your laptop: it’s aimed at small environments like IoT or edge computing. If you have a smaller environment, it’s worth checking out.
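If you do want to give it a spin, the install is a one-liner per the k3s README; a sketch, assuming a systemd-based Linux host:

```shell
# install k3s via the official convenience script (runs as a systemd service)
curl -sfL https://get.k3s.io | sh -

# k3s bundles its own kubectl
sudo k3s kubectl get nodes
```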
IBM Cloud Private
As the IBM Cloud Private website says, “IBM Cloud Private is a reliable and scalable cloud platform that runs on your infrastructure. It’s built on open source frameworks, like containers, Kubernetes and Cloud Foundry. In addition, it offers common services for self-service deployment, monitoring, logging and security, and a portfolio of middleware, data and analytics.”
It is an on-premises offering of the IBM Cloud software suite with a Kubernetes backing. A community edition allows you to run the complete stack (minus the IBM proprietary applications) locally on your laptop through Vagrant. I will show how to leverage the Vagrant edition here.
First, pull down everything from github.com/IBM/deploy-ibm-cloud-private:
git clone https://github.com/IBM/deploy-ibm-cloud-private.git
cd deploy-ibm-cloud-private
The following Vagrant commands are the lion’s share of what you need to know:
- start the cluster: vagrant up
- log in to the master node: vagrant ssh
- stop the cluster: vagrant halt
- destroy the cluster: vagrant destroy
So, kick it off with a vagrant up. You’ll notice quite a few things scroll by. On my machine it took about 15 minutes to complete. When it was done, I saw something like this:
icp: ###############################################################################
icp: # IBM Cloud Private community edition installation complete!                  #
icp: # The web console is now available at:                                        #
icp: #                                                                             #
icp: #   https://192.168.27.100:8443                                               #
icp: #   username/password is admin/S3cure-icp-admin-passw0rd-default              #
icp: #                                                                             #
icp: # Documentation available at:                                                 #
icp: #   https://www.ibm.com/support/knowledgecenter/SSBS6K                        #
icp: #                                                                             #
icp: # Request access to the ICP-ce Public Slack!:                                 #
icp: #   http://ibm.biz/BdsHmN                                                     #
icp: ###############################################################################
~/deploy-ibm-cloud-private $
And now you are ready to play with it! Go ahead and open up the web console. As shown previously, mine is at https://192.168.27.100:8443. Log in, and you should see the initial dashboard.
This cluster is a complete enterprise-grade local Kubernetes cluster. There are a ton of things
you can do, but I’ll focus on showing how to get you to log in initially to Kubernetes.
In the bottom right corner there is a terminal-looking button. Click it, and a terminal opens that is already set up to talk to your cluster. Run the following command, and you should see something close to this example:
admin:~$ kubectl get nodes
NAME             STATUS    ROLES                          AGE       VERSION
192.168.27.100   Ready     etcd,management,master,proxy   9h        v1.12.4+icp
192.168.27.101   Ready     worker                         9h        v1.12.4+icp
192.168.27.102   Ready     worker                         9h        v1.12.4+icp
Now you can run all of your typical kubectl commands there, but first you want to figure out how to get your local machine to talk to the instance.
Click the picture in the top right corner, and click Configure client. It should show something like the following example:
kubectl config set-cluster mycluster --server=https://192.168.27.100:8001 --insecure-skip-tls-verify=true
kubectl config set-context mycluster-context --cluster=mycluster
kubectl config set-credentials admin --token=SOMELARGETOKEN
kubectl config set-context mycluster-context --user=admin --namespace=cert-manager
kubectl config use-context mycluster-context
Copy and paste that into your local terminal, and you should now have access through
kubectl from your local machine.
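A quick way to confirm the pasted configuration took effect from your local terminal:

```shell
# the active context should now be mycluster-context
kubectl config current-context

# and the node list should match what you saw in the web terminal
kubectl get nodes
```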
From scratch on a local machine
OK, so you’ve arrived at this point. Maybe nothing fit your use case, or maybe you’re entertained by my writing style and can’t stop reading.
Installing Kubernetes from scratch on a local machine is a very specific use case. If you came here to spin up Kubernetes from scratch on your local machine, first of all, I salute you. Second of all…wow. There are many packaged ways to do what you’re trying to do, and this approach will only cause heartache.
I’m assuming you have experience going through and editing Kubernetes the Hard Way to work on your cloud of choice. Then you ran it locally on a VM, and now you want to bring it to your local development environment.
You might want to run the most recent releases of Kubernetes, and have no trouble figuring out what’s going wrong when your application has trouble. You, yes you, are unique. This entire post was not aimed at you, and maybe it opened your eyes to something you haven’t thought of. But in general, you are a pioneer on your own boat. May the wind always be at your back. Have fun!
Hopefully you can see that there is a wide variety of options available for running Kubernetes locally. The crazy part is that this blog post is only as accurate as my note at the top of the document. Kubernetes is moving fast and becoming the way to run cloud-native applications, but running it without a cloud can be tricky.
The options I mentioned here cover most use cases, but you need to do your homework to see what fits your situation best. Then standardize on something: your teams will be more successful when they all use the same base wrapper around this amazing technology.