A Guide to Red Hat Hypershift
Red Hat Hypershift is a new platform that enables enterprises to run Kubernetes clusters on any cloud or edge environment. It is based on the OpenShift Container Platform, which provides a consistent and secure way to manage containerized applications across hybrid and multi-cloud deployments. Hypershift simplifies the installation, configuration, and operation of Kubernetes clusters by automating the provisioning and management of the control plane and worker nodes. It also integrates with Red Hat Advanced Cluster Management for Kubernetes, which offers centralized governance and policy enforcement for clusters across different environments.
One of the main benefits of Hypershift is that it enables edge computing scenarios, where applications can run closer to the data sources and users, reducing latency and bandwidth consumption. Edge computing is especially useful for industries such as manufacturing, retail, healthcare, and telecommunications, where real-time data processing and decision making are critical. Hypershift allows enterprises to deploy and manage Kubernetes clusters at the edge with minimal overhead and complexity, while maintaining security and compliance standards. Hypershift also supports distributed workloads that span across multiple edge locations, enabling high availability and resilience.
Red Hat Hypershift is middleware for hosting OpenShift control planes at scale. It reduces the cost and time needed to provision clusters, enables cross-cloud portability, and enforces a strong separation of concerns between management and workloads. Hosted clusters are fully compliant OpenShift Container Platform (OCP) clusters and are compatible with standard OCP and Kubernetes toolchains.
In this guide, we will explain the architecture, use cases and benefits of Red Hat Hypershift, especially in the context of edge computing in telco environments and very large-scale projects. We will also provide a step-by-step tutorial on how to enable and provision a Hypershift-based cluster on bare metal.
Architecture
Red Hat Hypershift consists of two main components: the host cluster and the guest cluster.
The host cluster is where the Hypershift operator runs and also where the control plane of the guest cluster is deployed. The guest cluster is where the user workloads run and is managed by the host cluster. The host cluster can be any OCP cluster running on any platform, such as AWS, Azure, GCP, VMware or bare metal.
The guest cluster can be provisioned on any platform that supports Ignition, such as AWS, Azure, GCP or bare metal. The guest cluster nodes are registered as agent resources in the host cluster and are bootstrapped using Ignition files generated by the host cluster.
The control plane of the guest cluster consists of several pods running in a dedicated namespace on the host cluster. These pods include the API server, etcd, controller manager, scheduler and networking control plane components. The control plane pods communicate with the guest cluster nodes over a secure tunnel established from the host cluster (Konnectivity in recent HyperShift releases). The following diagram illustrates the architecture of Red Hat Hypershift:

[Hypershift Architecture diagram: docs/content/images/high-level-overview.png in the openshift/hypershift repository on GitHub]
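To make the separation between management and workloads concrete, the hosted control plane can be inspected directly on the host cluster. The commands below are a minimal sketch; the control plane namespace name follows the usual <clusters-namespace>-<cluster-name> pattern, and guest-cluster is the example cluster name used later in this guide.

# List the namespaces on the host cluster; each guest cluster gets a
# dedicated namespace for its hosted control plane (name is illustrative).
oc get namespaces | grep clusters

# Inspect the control plane pods (API server, etcd, controller manager,
# scheduler, ...) that run on behalf of the guest cluster.
oc get pods -n clusters-guest-cluster

# Bare metal hosts that boot the discovery ISO register as Agent resources
# on the host cluster.
oc get agents -A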
Use cases

Red Hat Hypershift is designed for use cases that require large numbers of OCP clusters with fast provisioning, low operational overhead and high portability. Some examples of such use cases are:

- Edge computing: Red Hat Hypershift enables telcos and other edge providers to deploy OCP clusters at edge locations with minimal infrastructure requirements and centralized management. The guest clusters can run edge workloads such as IoT, AI/ML or 5G applications with low latency and high availability.
- Multi-tenancy: Red Hat Hypershift enables cloud providers or enterprises to offer OCP clusters as a service to their customers or internal teams with strong isolation and security. The guest clusters can run any workload that is compatible with OCP or Kubernetes, with full control and flexibility.
- Mega-scale projects: Red Hat Hypershift enables organizations to scale their OCP clusters up to thousands or tens of thousands of nodes with ease and efficiency. The guest clusters can run large-scale workloads such as big data analytics, machine learning or scientific computing with high performance and scalability.
Benefits

Red Hat Hypershift offers several benefits over traditional OCP clusters, such as:

- Cost efficiency: Red Hat Hypershift reduces the cost of running OCP clusters by sharing the control plane resources among multiple guest clusters and minimizing the infrastructure footprint at edge locations.
- Time efficiency: Red Hat Hypershift reduces the time to provision OCP clusters by leveraging Ignition files for bootstrapping and automating the cluster lifecycle management.
- Portability: Red Hat Hypershift increases the portability of OCP clusters by decoupling them from the underlying platform and enabling them to run on any platform that supports Ignition.
- Separation of concerns: Red Hat Hypershift enhances the separation of concerns between management and workloads by isolating the control plane from the guest cluster nodes and providing different levels of access and visibility for different roles.
- Compliance: Red Hat Hypershift ensures compliance by keeping clusters fully compatible with standard OCP and Kubernetes toolchains and APIs.
Tutorial
In this tutorial, we will show how to enable and provision a Hypershift-based cluster on bare metal using a host cluster running on AWS. We will assume that you have access to an AWS account and a bare metal server that can boot from an ISO image.
Step 1: Install the host cluster
The first step is to install an OCP 4.10 or higher cluster on AWS using the installer-provisioned infrastructure (IPI) method.
You can follow the official documentation for this step: https://docs.openshift.com/container-platform/4.10/installing/installing_aws/installing-aws-default.html
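If you prefer the command line, the IPI flow comes down to two openshift-install commands. This is only a sketch; the install-config prompts (AWS region, base domain, pull secret, SSH key) depend on your account, and the directory name host-cluster is just an example.

# Generate an install configuration interactively; you will be prompted for
# the AWS region, base domain, pull secret and SSH public key.
openshift-install create install-config --dir=host-cluster

# Create the host cluster on AWS with installer-provisioned infrastructure.
# The installer prints the console URL and kubeadmin password when it finishes.
openshift-install create cluster --dir=host-cluster --log-level=info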
Step 2: Install the required operators
The next step is to install some operators on the host cluster that are required for Red Hat Hypershift.
These operators are:

- Infrastructure Operator for Red Hat OpenShift: This operator provides the agent service that is responsible for generating the Ignition files and serving the OS images for the guest cluster nodes.
- Red Hat Hypershift Operator: This operator provides the Hypershift service that is responsible for creating and managing the guest cluster control plane pods and resources.
You can install these operators from the OperatorHub in the OpenShift console.
For the Infrastructure Operator, you can use the community operator here: https://operatorhub.io/operator/infrastructure-operator-for-red-hat-openshift
For the Red Hat Hypershift Operator, you can use the operator bundle image here: quay.io/openshift-hive/hypershift-operator-bundle:latest
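Operator package and channel names vary between catalog versions, so it is worth confirming them on your host cluster before subscribing through the console, and verifying the installation afterwards. The grep patterns below are only illustrative.

# List the catalog entries available on the host cluster to find the exact
# package names and channels for the two operators.
oc get packagemanifests -n openshift-marketplace | grep -i -e infrastructure -e hypershift -e assisted

# After subscribing, confirm that both operators report a successful install.
oc get csv -A | grep -i -e infrastructure -e hypershift -e assisted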
Step 3: Configure the agent service
The next step is to configure the agent service that will serve the Ignition files and OS images for the guest cluster nodes. You need to create an agent service configuration custom resource that will tell the operator how much storage you need for the various components and what OpenShift versions to maintain. For example, you can use the following YAML file:
apiVersion: hypershift.openshift.io/v1alpha1
kind: AgentServiceConfig
metadata:
  name: agent
spec:
  databaseStorage:
    size: 20Gi
    storageClassName: gp2
  filesystemStorage:
    size: 100Gi
    storageClassName: gp2
  osImages:
    - openshiftVersion: "4.9"
      url: https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.9/latest/rhcos-4.9.0-x86_64-live.x86_64.iso
      rootFSUrl: https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.9/latest/rhcos-live-rootfs.x86_64.img
You can apply this file using the oc command:
oc apply -f agent-service-config.yaml
This will create a persistent volume claim for the database and the filesystem storage, and download the OS images to the filesystem storage.
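Before moving on, it is worth confirming that the agent service is healthy. The commands below are a sketch; the namespace in which the Infrastructure Operator deploys the assisted-service and assisted-image-service pods depends on how the operator was installed, so they search across all namespaces.

# Check the AgentServiceConfig status (the resource is cluster-scoped).
oc get agentserviceconfig agent -o yaml

# The operator creates PVCs for the database and filesystem storage defined
# above; they should be Bound.
oc get pvc -A | grep -i assisted

# The assisted-service and assisted-image-service pods serve the Ignition
# files and OS images for the guest cluster nodes.
oc get pods -A | grep -i assisted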
Step 4: Create a guest cluster
The next step is to create a guest cluster custom resource that will define the configuration and parameters of the guest cluster. For example, you can use the following YAML file:
apiVersion: hypershift.openshift.io/v1alpha1
kind: HostedCluster
metadata:
  name: guest-cluster
  namespace: clusters
spec:
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.9.0-x86_64
  pullSecret:
    name: pull-secret
  auditWebhook:
    name: audit-webhook
  sshKey:
    name: ssh-key
  networking:
    machineCIDR: 10.0.0.0/16
    podCIDR: 10.128.0.0/14
    serviceCIDR: 172.30.0.0/16
    nodePortRange: "30000-32767"
    networkType: OVNKubernetes
  platform:
    type: BareMetal
    bareMetal:
      apiVIP: 10.0.0.5
      ingressVIP: 10.0.0.6
You can apply this file using the oc command:
oc apply -f guest-cluster.yaml
This will create a hosted control plane for the guest cluster, and generate the kubeconfig file for accessing it.
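Note that the HostedCluster above references secrets named pull-secret, ssh-key and audit-webhook in the clusters namespace, which must exist before the resource is applied (the auditWebhook reference is typically optional and can be dropped if you do not forward audit logs). The sketch below assumes common HyperShift conventions for the secret key names (.dockerconfigjson, id_rsa.pub) and for the name of the generated kubeconfig secret; these may differ between versions.

# Namespace that holds the HostedCluster and NodePool resources.
oc create namespace clusters

# Pull secret used to pull the OCP release payload (key name is an assumption).
oc create secret generic pull-secret \
  --from-file=.dockerconfigjson=pull-secret.json \
  -n clusters

# Public SSH key injected into the guest cluster nodes.
oc create secret generic ssh-key \
  --from-file=id_rsa.pub=$HOME/.ssh/id_rsa.pub \
  -n clusters

# Once the hosted control plane is up, HyperShift stores an admin kubeconfig
# in a secret; list the secrets to confirm its exact name, then extract it.
oc get secrets -n clusters | grep kubeconfig
oc extract secret/guest-cluster-admin-kubeconfig -n clusters --to=- > guest-cluster.kubeconfig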
Step 5: Create node pools
The final step is to create NodePool custom resources that define the number and type of nodes for the guest cluster. For example, you can use the following YAML files:
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
  name: master
  namespace: clusters
spec:
  clusterName: guest-cluster
  nodeCount: 3
  platform:
    type: BareMetal
    bareMetal:
      isoURL: http://agent-service.hypershift.svc/rhcos-iso?version=4.9&arch=x86_64&rootfs=http://agent-service.hypershift.svc/rhcos-rootfs?version=4.9&arch=x86_64&ignition=http://agent-service.hypershift.svc/ignition?token=<token>
  role: master

apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
  name: worker
  namespace: clusters
spec:
  clusterName: guest-cluster
  nodeCount: 2
  platform:
    type: BareMetal
    bareMetal:
      isoURL: http://agent-service.hypershift.svc/rhcos-iso?version=4.9&arch=x86_64&rootfs=http://agent-service.hypershift.svc/rhcos-rootfs?version=4.9&arch=x86_64&ignition=http://agent-service.hypershift.svc/ignition?token=<token>
  role: worker
You can apply these files using the oc command:
oc apply -f master-node-pool.yaml
oc apply -f worker-node-pool.yaml
This will create the node pools for the guest cluster and generate a token for each one. You need to replace the <token> placeholder in the isoURL with the actual token value, which you can get from the status of the node pool resource.
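The exact location of the token in the NodePool status can differ between HyperShift versions, so rather than relying on a fixed field path, dump the full status and copy the value from there; the JSONPath expression below is only hypothetical.

# Inspect the node pool status; the ignition token appears here once the
# hosted control plane has generated it.
oc get nodepool master -n clusters -o yaml

# Example of pulling a single field once you know where the token lives in
# your version (this path is an illustration, not the documented API).
oc get nodepool worker -n clusters -o jsonpath='{.status.ignitionConfig.token}{"\n"}'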
To boot the guest cluster nodes, you need to use the isoURL as the boot source for your bare metal server. You can use any tool that can boot from an ISO image, such as iPXE or virt-install. Once the nodes are booted, they will join the guest cluster automatically.
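For a quick lab setup on a virtualization host, virt-install can boot a VM from the discovery ISO. This is only a sketch: the ISO URL, resource sizes, bridge name and OS variant are placeholders that you will need to adapt to your environment.

# Download the ISO built by the agent service for the node pool (substitute
# the real isoURL, including the actual token).
curl -kLo discovery.iso "<isoURL from the node pool manifest>"

# Boot a VM from the ISO; adjust CPU, memory, disk and network for your host.
virt-install \
  --name guest-cluster-worker-0 \
  --memory 16384 \
  --vcpus 8 \
  --disk size=120 \
  --cdrom ./discovery.iso \
  --network bridge=br0 \
  --os-variant rhel8.4 \
  --noautoconsole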
You have successfully created a Hypershift-based cluster on bare metal. You can access the guest cluster using the kubeconfig file that was generated in the previous step. You can also use the oc get hostedclusters command to see the status of your guest cluster.
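As a final check, you can query the guest cluster through the extracted kubeconfig while the host cluster reports the overall HostedCluster status. The kubeconfig file name below matches the one extracted in the earlier sketch and is only an example.

# From the host cluster: overall status of the hosted control plane.
oc get hostedclusters -n clusters

# Against the guest cluster itself: the bare metal nodes joined through the
# node pools should appear once they finish bootstrapping.
oc --kubeconfig=./guest-cluster.kubeconfig get nodes
oc --kubeconfig=./guest-cluster.kubeconfig get clusterversion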
Reference:
Redhat.com. (2022). A Guide to Red Hat Hypershift on Bare Metal. [online] Available at: https://www.redhat.com/en/blog/a-guide-to-red-hat-hypershift-on-bare-metal [Accessed 23 Jan. 2024].