Last updated: Jan 06, 2020
This documentation is intended as a quick reference for Rackspace customers who have questions about Rackspace Kubernetes-as-a-Service.
What is Rackspace Kubernetes-as-a-Service?
Rackspace Kubernetes-as-a-Service (KaaS) is a managed service that enables Rackspace deployment engineers to provision Kubernetes® clusters in supported cloud provider environments. Kubernetes is an open source container orchestration tool that enables system administrators to manage containerized applications in an automated manner. Running containerized applications efficiently is a complex task that typically requires a team of experts to architect, deploy, and maintain your cluster in your specific environment. Rackspace KaaS does these things for you, so you can focus on what is important for your business.
The Rackspace KaaS product includes the following features:
A recent conformant open-source version of Kubernetes
Your Kubernetes cluster runs the latest stable community Kubernetes software and is compatible with all Kubernetes tools. The default configuration creates three Kubernetes worker nodes to provide high availability and fault tolerance for your cluster.
Logging and monitoring
Based on popular monitoring tools Prometheus®, Elasticsearch™, Grafana®, and others, the Rackspace KaaS solution ensures real-time analytics and statistics across your cluster.
Private container image registry
While using public container image registries, such as Docker Hub and Quay®, is still an option, some of your images might require an additional level of security. A private container image registry enables you to store and manage your own container images in a protected location that restricts public access.
Advanced network configuration
Rackspace KaaS uses Calico® for network policies to enable you to configure a flexible networking architecture. Many cloud environments require a complex networking configuration that isolates one type of network traffic from another. Network policies address many of these traffic isolation requirements.
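As an illustration of how network policies express such isolation, the following is a minimal sketch of a standard Kubernetes NetworkPolicy (enforced by Calico). The namespace, labels, and port are hypothetical:

```yaml
# Hypothetical policy: only pods labeled app=frontend may reach
# pods labeled app=api on TCP port 8080; all other ingress is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: api          # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Apply the manifest with kubectl apply, and Calico enforces it immediately for all selected pods.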
Backup and recovery
KaaS integrates Heptio® Velero to automatically create snapshots of your data and to restore your persistent volumes and cluster resources with minimal downtime in the event of an emergency. Velero enables you to move cluster resources between cloud providers, as well as create replicas of your production environment for testing purposes.
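For example, recurring backups in Velero are declared with a Schedule resource. The following is a minimal sketch; the namespace to back up, the cron expression, and the retention period are assumptions for illustration:

```yaml
# Hypothetical Velero schedule: back up the "production" namespace,
# including volume snapshots, every day at 02:00 and keep each
# backup for 30 days.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup
  namespace: velero
spec:
  schedule: "0 2 * * *"
  template:
    includedNamespaces:
      - production
    snapshotVolumes: true
    ttl: 720h
```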
The following diagram provides a high-level overview of the Rackspace KaaS solution:
My company has an open-source initiative to avoid vendor lock-in. Is your installation tool open-sourced?
The kaasctl tool is a wrapper around several open-source projects, such as Kubespray, Terraform®, and others. This tool will change in future releases. The installer should have no impact on any initiatives to avoid vendor lock-in because it exists outside the scope of what a user accesses or operates.
What happens to my Kubernetes cluster if we discontinue Rackspace KaaS?
The current offering does not account for a Build, Operate, and Transfer (BOT) model. This feature is on the current roadmap. If this is a requirement or concern, contact your Rackspace Account Manager to discuss available options.
What OpenStack components are consumed by the Rackspace KaaS offering?
The current Rackspace KaaS offering consumes the following OpenStack components:
- OpenStack® Compute service (nova)
- OpenStack Networking service (neutron)
- OpenStack Load Balancing service (octavia)
- OpenStack Identity service (keystone)
- OpenStack Block Storage service (cinder)
- OpenStack DNS service (designate)
- OpenStack Object Storage service (swift)
Are there any extra components from the ones listed above?
The Rackspace KaaS offering deploys an authentication bridge and a user interface on the physical servers that are also known as the OpenStack Infrastructure nodes.
In addition, Rackspace KaaS on OpenStack requires Cinder backed by Ceph®. We use Ceph because Cinder's default Logical Volume Manager (LVM) backend does not support data replication, which is a requirement for data volume failover and Kubernetes worker node resiliency.
What load balancer software is used in the Rackspace KaaS offering?
The choice of load balancers and Ingress Controllers depends on the underlying cloud platform capabilities. For example, Rackspace KaaS on OpenStack leverages a highly available instance of OpenStack Octavia Load Balancing as a service (LBaaS) with preconfigured NGINX® Ingress Controllers. This configuration enables Day 1 support for application developers to deploy Kubernetes applications with native type: LoadBalancer support for their Services.
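To sketch what that native support looks like, the following is a minimal Service manifest; the service name, pod label, and ports are hypothetical:

```yaml
# Hypothetical Service: requesting type LoadBalancer causes the cloud
# provider integration (Octavia on OpenStack) to provision an external
# load balancer that forwards traffic to the selected pods.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web             # pods labeled app=web receive the traffic
  ports:
    - port: 80           # externally exposed port
      targetPort: 8080   # container port inside the pods
      protocol: TCP
```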
What do you use as your backend object store for KaaS?
Rackspace KaaS leverages several types of storage depending on the use case. At the object store level, we require access to a Swift object API or Ceph Rados Gateway (RGW). The object store stores backups, snapshots, and container images pushed to the private container registry (Harbor). These object store APIs are also exposed for application developers to use within their applications.
Why is an object store a requirement for OpenStack-based deployments?
Using an object store for backups, snapshots, container image storage, and versioning is the Kubernetes community standard. By using the object store native features, as opposed to writing those features into a specific file system, Rackspace KaaS enables support for storage and versioning of disaster recovery binary large objects (blobs) over a period of months and years.
Why are you choosing to use Ceph in RPC?
To support OpenStack, Rackspace KaaS requires an end-to-end, highly available architecture.
By default, Cinder does not support volume replication. If a single Cinder host fails, the data stored on that block device is lost. In turn, Kubernetes cannot fail over data volumes to Kubernetes nodes. By using Ceph's volume replication, we ensure that all failure scenarios result in a volume or block device that can fulfill Kubernetes failover semantics.
Will I have the ability to use other public cloud Infrastructure as a Service (IaaS) platforms?
Support for other IaaS platforms is something that we are currently examining and scoping on our product roadmap. If you have an urgent requirement to support a specific IaaS platform, such as a hybrid/burst scenario, contact your Rackspace Account Manager to raise the priority to our product team.
Will there be a single pane of glass user interface (UI) where I can manage all of my clusters across IaaS platforms and OpenStack installations?
This functionality is on our product roadmap and we are actively working to make a unified user experience regardless of infrastructure choice.
What is the reference architecture and base requirements for an RPCO environment?
The minimum requirements for a highly available Kubernetes cluster include the following items:
- 3 x Kubernetes Master nodes (VMs): Per the OpenStack Compute service anti-affinity policy, each Kubernetes Master node is located on a separate nova host.
- 5 x etcd nodes (VMs): Per the OpenStack Compute service anti-affinity policy, each etcd node is located on a different nova host.
The number of Kubernetes worker nodes depends on your specific deployment needs and workloads. Ideally, no two Kubernetes worker nodes should be hosted on the same OpenStack compute node. However, this placement is not enforced because it might dramatically increase the total count of OpenStack compute nodes in some deployments.
Therefore, you need a minimum of five OpenStack compute hosts to set affinity rules correctly for the etcd cluster.
By default, Ceph requires a minimum of three nodes for data replication and resiliency.
Can a Rackspace KaaS cluster use the same control plane as my OpenStack private cloud?
In a typical OpenStack deployment, you have a control plane and a data plane. The control plane consists of the nodes that serve the OpenStack services. The data plane consists of the aggregated physical hosts where your workloads (VMs) run.
Because Rackspace KaaS runs within the context of OpenStack Compute nodes, it runs within the data plane of your OpenStack deployment. However, supporting services, such as authentication and others, run on the same control plane nodes as other OpenStack services.
How does the Rackspace KaaS solution work? Is it correct that Kubernetes communicates with OpenStack and tells it to spin VMs, and then Kubernetes deploys containers inside of those OpenStack VMs?
When Kubernetes needs a new node, it issues a nova API call to create a VM, the Kubernetes provisioner configures the VM and installs Kubernetes and supporting software, and the new node is then added to the cluster.
Kubernetes supports multiple container formats. After you deploy a Kubernetes cluster or add a node, Kubernetes schedules work onto the worker nodes of the cluster. In a Kubernetes environment, these workloads are pods, deployments, and services, not specifically Docker containers.
For information about how Kubernetes schedules work, see the official documentation for Kubernetes at https://kubernetes.io/docs.
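As a concrete illustration of the kind of workload Kubernetes schedules, the following is a minimal Deployment manifest; the name, label, and image are hypothetical:

```yaml
# Hypothetical Deployment: Kubernetes schedules two replica pods of
# this workload across the cluster's worker nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.17   # any OCI-compatible image works here
          ports:
            - containerPort: 80
```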
Do you have any details on the EFK that is deployed on our Managed Kubernetes cluster?
Each Rackspace KaaS deployment includes a fully configured Elasticsearch®, Fluentd®, and Kibana® (EFK) stack. This installation is meant for your application developers to use for centralized application logging for the services and applications that you deploy.
These services are open-source, upstream software with a rational default configuration for application development use.
How do I view the Kubernetes cluster logs?
When running applications in a Kubernetes cluster, you can use the kubectl logs command to collect the entire output of an application. Your Kubernetes cluster is preconfigured to aggregate all of your application logs and make them searchable by using Elasticsearch and Kibana. You can access these logs by using the Ingress Controller that is provided with your cluster.
To view the logs, complete the following steps:
- In your browser, navigate to
- When prompted, use
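For direct access from the command line, the kubectl logs command works as usual on a KaaS cluster. The pod name and label below are hypothetical placeholders for your own workload:

```shell
# Print the output of a single pod (hypothetical pod name):
kubectl logs my-app-5f7d8-abcde

# Follow logs from all pods that match a label selector:
kubectl logs -l app=my-app --follow

# Include logs from a previous, crashed container instance:
kubectl logs my-app-5f7d8-abcde --previous
```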
Will the Elasticsearch database clustering be configured during the deployment?
By default, Rackspace KaaS is deployed with three Elasticsearch containers. If more Elasticsearch instances are required, you can request that your Account Manager increase the replica count of the Elasticsearch containers to a required number.
Is the logrotate process configured to run on all Kubernetes worker nodes?
Yes. Every Rackspace Kubernetes-as-a-Service installation uses nova VMs that run a hardened Linux® OS (Container Linux) with log rotation enabled. However, this functionality is not exposed to end users.
How do I customize Fluentd logs?
Currently, the EFK stack includes Fluentd as a fully managed service. If you need to customize your deployment, contact your Account Manager to provide your use case and work with Support to enable the required customization.
Can you help me create and troubleshoot my YAML files and troubleshoot my Kubernetes deployments?
Yes. Rackspace offers best practices and assistance with creating the various YAML files that are used by the Kubernetes primitives. Rackspace employees do not replace a team with Kubernetes knowledge, but augment it.
How do I manage my users?
Rackspace has integrated Kubernetes authentication with the OpenStack Identity service (keystone). Therefore, you can use the OpenStack Dashboard and other standard OpenStack tools to manage your users and groups. Rackspace can configure keystone to use a Lightweight Directory Access Protocol (LDAP) provider, such as Microsoft® Active Directory® (AD), so that you can manage keystone users directly in the AD user interface.
How do I scale a Kubernetes cluster?
Currently, node addition and recovery are performed by our Support personnel. To request the addition of worker nodes to your Kubernetes cluster, submit a ticket in your account control panel.