Kubernetes® is a software tool that helps you orchestrate and manage your cloud, simplifies cloud operations, and lowers the cost of cloud computing. While running a simple Kubernetes cluster for testing purposes is a relatively easy task, configuring a fully operational production Kubernetes cluster is challenging and requires specific expertise in container management, networking, storage, security, and other aspects of a production cloud. Many organizations decide to outsource this work to cloud providers who specifically focus on complex cloud systems.
The Rackspace KaaS solution enables you to run Kubernetes workloads on top of Rackspace Private Cloud Powered by OpenStack (RPCO). You can request that Rackspace Kubernetes-as-a-Service be installed on an existing or new RPCO environment starting with RPCO v14.
KaaS Control Panel
The KaaS Control Panel is an integral part of the KaaS experience for administrators and regular users alike. The control panel allows users to create tokens, which are used to authenticate with the Kubernetes API and some managed services, to easily access and authenticate with managed services, and to perform other administrative actions on a cluster.
Authentication with the KaaS Control Panel uses the username and password of any OpenStack user. The control panel validates the provided credentials against OpenStack and allows the user to access the control panel if the credentials are valid.
Any OpenStack user can log in to the control panel; no role memberships or other whitelists are necessary. However, an OpenStack user's role memberships affect what they can do in the control panel and on a Kubernetes cluster.
Authentication with the control panel is separate from authentication with a Kubernetes cluster. Control panel authentication uses OpenStack credentials, while Kubernetes authentication uses a token that is generated in the control panel. While the authentication mechanisms are different, users’ identifying information still comes from OpenStack in each case.
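As a sketch, a token generated in the control panel can be placed in a local kubeconfig file to authenticate with the Kubernetes API. The cluster name, server URL, and token value below are placeholders, not values from your environment:

```yaml
# Minimal kubeconfig sketch. The cluster name, server URL, and token
# are placeholders -- substitute the values for your environment.
apiVersion: v1
kind: Config
clusters:
- name: example-cluster                  # hypothetical cluster name
  cluster:
    server: https://kubernetes.example.com:6443
users:
- name: example-user
  user:
    token: <token-from-control-panel>    # paste the generated token here
contexts:
- name: example-context
  context:
    cluster: example-cluster
    user: example-user
current-context: example-context
```

With this file in place, kubectl sends the token as a bearer credential, and the user's identifying information is still resolved through OpenStack on the server side.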
In OpenStack environments with a single Kubernetes cluster, cluster selection does not apply. The environment’s one and only cluster is implicitly selected as the control panel’s active cluster.
At the top of the control panel’s main navigation menu is a blue dropdown showing the name of the active cluster. Any cluster-specific actions in the control panel are executed against the active cluster. To change the active cluster, click or tap the dropdown and select the name of the desired cluster from the list.
If you see unexpected data in the control panel after changing the active cluster, refresh the page in your browser to resolve the issue.
In addition to the basic Kubernetes functionality, your Kubernetes cluster comes with managed services that provide extra features.
Note: The kube-system namespace in Kubernetes is used for managed services. Do not modify any resources in this namespace.
Rackspace KaaS provides the following managed services:
- Monitoring: Any production cloud requires performance and uptime monitoring so that cloud administrators can take steps to address issues. The Managed Kubernetes solution leverages tools such as Prometheus and Grafana® integrated with the internal Monitoring as a Service (MaaS) system to enable Rackspace operators to track the health of your cloud. Rackspace KaaS deploys two instances of Prometheus: one for internal use and the other for monitoring Kubernetes applications. For more information, see Logging and monitoring.
- Logging: Implemented by using tools such as Elasticsearch™, Fluentd, and Kibana, the logging managed service provides real-time data analytics and system resource utilization statistics for your cloud.
- Private Docker® image registry: In addition to a public Docker image registry, you can store and manage your own Docker images in a private registry implemented with VMware Harbor.
- Networking: Rackspace KaaS uses Calico for communication between the Kubernetes Pods. Calico enables advanced networking features, such as network policies and overlay networking. You can define network policies as required for your cloud or request that the Rackspace Support team define them for you.
- Storage: Rackspace KaaS uses a highly available Ceph storage cluster provisioned through the OpenStack Block Storage (cinder) driver for Kubernetes PersistentVolumes (PVs). Ceph provides persistent block storage for Kubernetes users and for managed services' internal data.
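To illustrate how applications request persistent block storage, here is a sketch of a PersistentVolumeClaim. The storageClassName is a placeholder assumption; use the storage class actually configured in your cluster:

```yaml
# Sketch of a PersistentVolumeClaim backed by cinder/Ceph.
# The storageClassName is a placeholder -- check your cluster's
# available classes before using it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard       # hypothetical cinder-backed class
  resources:
    requests:
      storage: 10Gi
```

When a Pod references this claim, Kubernetes provisions a Ceph-backed volume through the cinder driver and mounts it into the container.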
Kubernetes supports the concept of virtual hosting by using Ingress resources. The Ingress controller provided with your cluster enables Ingress resources for your cluster and is highly available (HA) with two replicas. Sites available through the Ingress controller are routed to by subdomain. You can read more about Ingress resources in the official Kubernetes documentation.
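As a sketch of subdomain-based routing, the following Ingress resource maps a hostname to a backend Service. The host, Service name, and port are placeholders for illustration only:

```yaml
# Sketch of an Ingress resource. The host and the backend Service
# name/port are hypothetical -- substitute your own values.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: myapp.example.com        # hypothetical subdomain
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp       # hypothetical Service in the same namespace
          servicePort: 80
```

Requests arriving at the Ingress controller with the Host header myapp.example.com are forwarded to the myapp Service.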
The Kubernetes Authentication service enables access to Kubernetes clusters for users who provide valid tokens. However, to authorize a user to perform a specific set of operations, you must create a Kubernetes role binding. The subject of the role binding must reference such information as user and role IDs collected from the OpenStack Identity service.
Your Kubernetes cluster is pre-configured with a ClusterRoleBinding that grants cluster-wide administrative access to all users with a specific Keystone role. For example, if you created the kubernetes-admins role with the ID f6120fe6406a473682c3b25cdea4510a, your Kubernetes cluster has the following ClusterRoleBinding:
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: "rackspace:default-cluster-admin"
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: f6120fe6406a473682c3b25cdea4510a
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```
Note the difference between Kubernetes and Keystone terminology: what Kubernetes calls a group is referred to as a role in Keystone. Do not confuse Keystone's concept of groups with the Kubernetes concept of groups.
You might want to create additional Keystone roles and role bindings to provision and control Kubernetes access for users without administrative privileges.
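As a sketch of such a non-administrative binding, the following RoleBinding grants read-only access within a single namespace to users holding a hypothetical Keystone role. The binding name, namespace, and role ID are placeholders:

```yaml
# Sketch of a namespaced RoleBinding for non-admin users.
# The name, namespace, and Keystone role ID are placeholders.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: "example:developers-view"
  namespace: default
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: 0123456789abcdef0123456789abcdef   # hypothetical Keystone role ID
roleRef:
  kind: ClusterRole
  name: view                               # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io
```

Because this is a RoleBinding rather than a ClusterRoleBinding, the granted permissions apply only within the named namespace.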
Disaster recovery is an integral part of any robust production system. In a Kubernetes environment, you have multiple master, worker, and etcd nodes. Each of these servers runs multiple stateful and stateless components. Stateless components that run on Kubernetes master and worker nodes restore themselves automatically by using the KaaS automation tools. However, stateful components, such as etcd, which stores the cluster's data, and persistent volumes, which store persistent data for your applications, are backed up by Heptio™ Velero.
Rackspace KaaS uses Heptio Velero to automatically create snapshots of persistent volumes and back up the cluster data that can be used later to recover from a cluster failure.
Your Kubernetes cluster is configured to be highly available. The Kubernetes components are replicated behind a load balancer and distributed on multiple compute resources.
By default, your Kubernetes cluster has three worker nodes. The default configuration of your cluster node includes:
- Kubernetes worker nodes:
- vCPU: 4
- RAM: 8 GB
- Local storage: 40 GB
- Private Docker registry:
- Database: 10 GB
The private Docker registry database stores Docker image metadata. The actual Docker images are stored in an object storage system, such as Ceph RGW, OpenStack Swift, or another compatible object store deployed as part of your cloud.
If you need to resize your cluster, contact your Rackspace representative and request additional resources.
To aid Rackspace's monitoring of cluster health, anonymous access to the healthz endpoint of the Kubernetes API server is enabled. You are free to integrate that endpoint with your own (external) monitoring solution.