Deploying a High-Availability K3s Cluster with KubeVIP
Note
This feature is experimental! Do not run it on production servers. Feedback and bug reports are welcome, as we are improving the p2p aspects of Kairos.

K3s is a lightweight Kubernetes distribution that is easy to install and operate. It's a great choice for small and edge deployments, and it can also be used to create a high-availability (HA) cluster with the help of KubeVIP. In this guide, we'll walk through the process of deploying a highly available K3s cluster with KubeVIP, which provides a highly available IP for the control plane.
The first step is to set up the cluster. Kairos automatically deploys an HA K3s cluster with KubeVIP to provide a highly available IP for the control plane. KubeVIP lets you set up an Elastic IP that is advertised on the node's network, and since it is managed as a DaemonSet in Kubernetes, it already runs in HA.
What distinguishes this setup is that the p2p network is used only to automatically coordinate nodes, while cluster traffic is not routed over a VPN. The p2p network handles coordination, self-management, and adding nodes on day 2.
To deploy this setup you need to write a cloud-config file; an example is shown below. Configure the hostname, user, and ssh_authorized_keys, then configure kubevip with the Elastic IP and the p2p network with the options you prefer.
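Below is a minimal sketch of such a cloud-config. The hostname, SSH key, IP, and token values are placeholders; replace them with your own:

```yaml
#cloud-config

hostname: my-ha-node
users:
  - name: kairos
    ssh_authorized_keys:
      # Replace with your own public key (or a github:<user> reference)
      - ssh-ed25519 AAAA... user@example.com

kubevip:
  # A free IP on your network; KubeVIP advertises it for the control plane
  eip: "192.168.1.110"

p2p:
  # Shared secret used by nodes to discover and coordinate with each other
  network_token: "YOUR_NETWORK_TOKEN"
  vpn:
    # Coordination only: don't create or route traffic through a VPN
    create: false
    use: false
```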
When configuring the p2p section, start by adding your desired network_token under the p2p configuration in the cloud-config file. To generate a network token, see the documentation.
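As one hedged example, the Kairos documentation generates a token with the EdgeVPN image. This assumes Docker is available and that the quay.io/mudler/edgevpn image and its flags are current:

```bash
# Prints a base64-encoded network token to paste into p2p.network_token
docker run -ti --rm quay.io/mudler/edgevpn -b -g
```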
Next, set up an Elastic IP (kubevip.eip) with a free IP in your network. KubeVIP will advertise this IP, so make sure to select one that is available for use on your network.
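For example, assuming 192.168.1.110 is unassigned on your network:

```yaml
kubevip:
  eip: "192.168.1.110"
```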
In the VPN configuration, the create and use options are disabled, so the VPN setup is skipped and no traffic is routed through it, as shown below.
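The corresponding snippet of the cloud-config, matching the example above, looks like this:

```yaml
p2p:
  vpn:
    create: false # don't create a VPN interface
    use: false    # don't route cluster traffic through the VPN
```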