Encountered the below error when trying to connect an EKS cluster to Azure Arc:

Unable to pull helm chart from the registry 'mcr.microsoft.com/azurearck8s/batch1/stable/azure-arc-k8sagents:1.4.0': Error: blob sha256:a695085bbfb24345ad4b68a1cb6e125e909d7a53d75427b11e002aef1a5bf69d expected at C:\Users\username\AppData\Local\Temp\helm\registry\cache\blobs\sha256\a695085bbfb24345ad4b68a1cb6e125e909d7a53d75427b11e002aef1a5bf69d: not found

Based on the error, our first instinct was a registry connectivity issue. But thanks to input from @Aravindh, we cleared the temp cache for Helm (C:\Users\username\AppData\Local\Temp\helm\), after which everything started working again and we were able to connect the EKS cluster to Azure Arc.
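The fix can be sketched as follows; the cache path is taken from the error message above, and the username and shell will differ in your environment:

```shell
# Remove Helm's temporary OCI registry cache so the blobs are re-pulled fresh.
# On Windows PowerShell the equivalent is:
#   Remove-Item -Recurse -Force "$env:LOCALAPPDATA\Temp\helm"
# From a bash shell (path assumed, adjust for your user):
rm -rf "/c/Users/username/AppData/Local/Temp/helm"
```

After clearing the cache, retrying the Azure Arc onboarding pulled the agent chart successfully.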
This post is part of the Azure Arc for Kubernetes series. In this post, we will create Helm-based GitOps configurations on an Amazon Elastic Kubernetes Service (EKS) cluster that is connected as an Azure Arc connected cluster resource.
GitOps for Kubernetes is the practice of declaring the desired state of Kubernetes cluster configurations (deployments, namespaces, etc.) in a Git repo. The Git repository can contain YAML manifests (describing any valid Kubernetes resources, including Namespaces, ConfigMaps, Deployments, DaemonSets, etc.) and Helm charts for deploying applications. This declaration is followed by a polling and pull-based deployment of these cluster configurations using an operator. Flux, a popular open-source tool in the GitOps space, can be deployed on the Kubernetes cluster to ease the flow of configurations from a Git repository to a Kubernetes cluster.
We will deploy and attach two GitOps configurations to your cluster: a cluster-level config to deploy the nginx-ingress controller, and a namespace-level config to deploy the “Hello Arc” web application in your Kubernetes cluster.
Note: For the purpose of this guide, notice how “--git-poll-interval 3s” is set. The 3-second interval is useful for demo purposes, since it makes Flux rapidly track changes in the repository, but a longer interval is recommended in a production environment.
The Helm chart deploys the cluster-level config “nginx-ingress” at cluster scope.
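As a sketch (not necessarily the exact command used in this series), a cluster-scoped configuration can be created with the k8s-configuration CLI extension; the configuration name, repository URL, and resource names below are placeholders:

```shell
# Hypothetical cluster-level GitOps config; substitute your own values.
az k8s-configuration create \
  --name cluster-config \
  --cluster-name <EKS cluster name> \
  --resource-group <resource group> \
  --cluster-type connectedClusters \
  --operator-instance-name cluster-config \
  --operator-namespace cluster-mgmt \
  --repository-url <Git repository URL> \
  --scope cluster \
  --enable-helm-operator \
  --operator-params='--git-poll-interval 3s'
```

Note the `=` form for --operator-params, which is needed because the value itself begins with dashes.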
The cluster-level config initiates the Flux deployment of the nginx-ingress pods and service. Get the details with the command below.
kubectl get pods,svc -n cluster-mgmt
Confirm the config deployed on the Azure portal. It takes a few minutes for the configuration’s “Operator State” to change from “Pending” to “Installed”.
Deploy Namespace-level config to deploy the “Hello Arc” application Helm chart
The “Hello Arc” application (a Namespace-level component) will be deployed with 1 replica to the prod namespace.
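A namespace-scoped variant of the same command can be sketched as follows; again, the configuration name and repository URL are placeholders:

```shell
# Hypothetical namespace-level GitOps config for the "Hello Arc" app.
az k8s-configuration create \
  --name hello-arc \
  --cluster-name <EKS cluster name> \
  --resource-group <resource group> \
  --cluster-type connectedClusters \
  --operator-instance-name hello-arc \
  --operator-namespace prod \
  --repository-url <Git repository URL> \
  --scope namespace \
  --enable-helm-operator \
  --operator-params='--git-poll-interval 3s'
```

With --scope namespace, the operator only watches and applies resources in the prod namespace.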
Configure EKS Nodes to communicate to Control Plane
Add the ConfigMap to the cluster from Terraform. The ConfigMap is a Kubernetes resource, in this case used for granting access to our EKS cluster. This ConfigMap allows the EC2 instances in the cluster to communicate with the EKS control plane, as well as allowing our user account to run commands against the cluster.
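For reference, the same aws-auth ConfigMap that Terraform renders can be applied directly with kubectl; the node role ARN below is a placeholder you must substitute:

```shell
# Sketch of the standard EKS aws-auth ConfigMap, applied by hand.
# <EKS node instance role ARN> is an assumption; use your worker node role.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <EKS node instance role ARN>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOF
```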
az ad sp create-for-rbac -n "<Unique Name>" --role contributor
Example: az ad sp create-for-rbac -n "http://zcarcseries" --role contributor
Verify access to cluster and Azure:
A kubeconfig file pointing to the cluster you want to connect to Azure Arc.
‘Read’ and ‘Write’ permissions for the user or service principal creating the Azure Arc enabled Kubernetes resource type (Microsoft.Kubernetes/connectedClusters).
Install the following Azure Arc enabled Kubernetes CLI extensions:
az extension add --name connectedk8s
az extension add --name k8s-configuration
# To update
az extension update --name connectedk8s
az extension update --name k8s-configuration
Register the two providers for Azure Arc enabled Kubernetes
az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.KubernetesConfiguration
# Monitor the registration process
az provider show -n Microsoft.Kubernetes -o table
az provider show -n Microsoft.KubernetesConfiguration -o table
Log in with the previously created service principal:
export spappId='<Service Principal App ID>'
export spsecret='<Service Principal Client Secret>'
export tenantId='<Tenant ID>'
az login --service-principal --username $spappId --password $spsecret --tenant $tenantId
Set variables and create a resource group:
export resourceGroup='<Resource Group Name>'
az group create --name $resourceGroup --location 'eastus'
Note: After onboarding the cluster, it takes around 5 to 10 minutes for the cluster metadata (cluster version, agent version, number of nodes, etc.) to surface on the overview page of the Azure Arc enabled Kubernetes resource in Azure portal.
Check the namespace and deployments created as part of onboarding:
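The Arc agents land in the azure-arc namespace; a quick check:

```shell
# Verify the Azure Arc agent deployments and pods are running.
kubectl get deployments,pods -n azure-arc
```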