Kubernetes - Installation on Debian 11
Network adjustments
$ sudo hostnamectl set-hostname k-ctrl-pl-01
$ sudo hostnamectl set-hostname k-node-01
$ sudo hostnamectl set-hostname k-node-02
cat <<EOF | sudo tee -a /etc/hosts
192.168.1.200             k-ctrl-pl-01.example.com   k-ctrl-pl-01
192.168.1.201             k-node-01.example.com      k-node-01
192.168.1.202             k-node-02.example.com      k-node-02
fd00::a192:b168:c1:d200   k-ctrl-pl-01.example.com   k-ctrl-pl-01
fd00::a192:b168:c1:d201   k-node-01.example.com      k-node-01
fd00::a192:b168:c1:d202   k-node-02.example.com      k-node-02
fd00::a192:b168:c1:d210   k-nfs-01.example.com       k-nfs-01
EOF
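As an optional sanity check, the new entries can be confirmed with getent on each host (the hostnames below are the ones added above):

$ getent hosts k-ctrl-pl-01 k-node-01 k-node-02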
Additional disk
Disk reserved for pods and containers.
On both servers
$ MOUNT_POINT=/var/lib/containers
$ DISK_DEVICE=/dev/sdb
$ echo -e "n\np\n1\n\n\nw" | sudo fdisk ${DISK_DEVICE}
$ sudo mkfs.ext4 ${DISK_DEVICE}1
$ UUID=`sudo blkid -o export ${DISK_DEVICE}1 | grep UUID | grep -v PARTUUID`
$ sudo mkdir ${MOUNT_POINT}
$ sudo cp -p /etc/fstab{,.dist}
$ echo "${UUID} ${MOUNT_POINT} ext4 defaults 1 2" | sudo tee -a /etc/fstab
$ sudo mount ${MOUNT_POINT}
$ df -hT | grep containers
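Before rebooting, the new /etc/fstab entry can also be validated (optional check; findmnt and lsblk are part of util-linux on Debian 11):

$ sudo findmnt --verify
$ lsblk -f ${DISK_DEVICE}1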
Installing CRI-O
In this installation, CRI-O will be used as the container runtime.
Loading the kernel modules required by CRI-O.
$ cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
overlay
br_netfilter
EOF

$ sudo modprobe overlay
$ sudo modprobe br_netfilter

$ lsmod | grep br_netfilter
br_netfilter           32768  0
bridge                253952  1 br_netfilter

$ lsmod | grep overlay
overlay               143360  0
Sysctl parameters.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.ipv4.ip_forward                 = 1
net.ipv6.conf.all.forwarding        = 1
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
Applying the sysctl parameters without rebooting the system.
$ sudo sysctl --system
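To confirm the values are active after the reload (optional check; sysctl accepts several keys in a single call):

$ sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables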
Setting the OS variable to match your distribution and the VERSION variable to the version of the Kubernetes cluster that will be installed.
$ OS=Debian_11
$ VERSION=1.24
Adding the CRI-O repository.
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
EOF

$ cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
EOF
Adding the GPG key for the CRI-O repository.
$ curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
$ curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
Installing CRI-O.
$ sudo apt update
$ sudo apt info cri-o
Package: cri-o
Version: 1.24.6~0
Priority: optional
Section: devel
Maintainer: Peter Hunt <haircommander@fedoraproject.org>
Installed-Size: 96,0 MB
Depends: libgpgme11, libseccomp2, conmon, containers-common (>= 0.1.27) | golang-github-containers-common, tzdata
Suggests: cri-o-runc | runc (>= 1.0.0), containernetworking-plugins
Replaces: cri-o-1.19, cri-o-1.20, cri-o-1.21
Homepage: https://github.com/cri-o/cri-o
Download-Size: 20,6 MB
APT-Sources: http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/1.24/Debian_11  Packages
Description: OCI-based implementation of Kubernetes Container Runtime Interface.
$ sudo apt install cri-o cri-o-runc cri-tools
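A quick optional check that the packages came from the expected repository and at the expected version (output will vary with the pinned VERSION):

$ crio --version
$ apt policy cri-o cri-o-runc cri-tools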
Installing Kubernetes
$ sudo swapoff -a
$ sudo cp -fp /etc/fstab{,.dist}
$ sudo sed -i '/swap/d' /etc/fstab
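Swap must stay disabled for the kubelet; a quick check that nothing is still active (swapon prints nothing when no swap is in use):

$ swapon --show
$ free -h | grep -i swap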
curl -fsSL "https://packages.cloud.google.com/apt/doc/apt-key.gpg" | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/kubernetes-archive-keyring.gpg
echo "deb https://packages.cloud.google.com/apt kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
To install the latest version
$ sudo apt update
$ sudo apt install kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
To search for and install a specific version
$ apt-cache madison kubeadm
$ sudo apt install kubelet=1.24.15-00 kubeadm=1.24.15-00 kubectl=1.24.15-00
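When pinning a specific version it is also worth holding the packages, as in the latest-version path above, so a later apt upgrade does not move them:

$ sudo apt-mark hold kubelet kubeadm kubectl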
Starting the services
$ sudo systemctl daemon-reload
$ sudo systemctl enable crio --now
$ sudo systemctl status crio
$ sudo systemctl enable kubelet --now
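Until kubeadm init (or kubeadm join) writes /var/lib/kubelet/config.yaml, it is normal for the kubelet to restart in a loop; its state can be followed with:

$ sudo systemctl status kubelet
$ sudo journalctl -u kubelet -f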
Configuring Kubernetes
Run on the master (control plane).
$ sudo kubeadm config images pull
I0625 13:22:03.261171    3725 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.24.15
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.24.15
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.24.15
[config/images] Pulled registry.k8s.io/kube-proxy:v1.24.15
[config/images] Pulled registry.k8s.io/pause:3.7
[config/images] Pulled registry.k8s.io/etcd:3.5.6-0
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.8.6
$ sudo crictl image
IMAGE                                     TAG        IMAGE ID        SIZE
registry.k8s.io/coredns/coredns           v1.8.6     a4ca41631cc7a   47MB
registry.k8s.io/etcd                      3.5.6-0    fce326961ae2d   301MB
registry.k8s.io/kube-apiserver            v1.24.15   04761ffc5bd1d   133MB
registry.k8s.io/kube-controller-manager   v1.24.15   ccb155671979f   122MB
registry.k8s.io/kube-proxy                v1.24.15   3c380d132a526   112MB
registry.k8s.io/kube-scheduler            v1.24.15   c4a0a11ea70a3   53MB
registry.k8s.io/pause                     3.7        221177c6082a8   718kB
$ mkdir -p yamls/config
$ cd yamls/config/
- kubeadm-config.yaml
# vim kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/14,fd01::/48
  serviceSubnet: 10.96.0.0/16,fd02::/112
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "177.75.176.34"
  bindPort: 6443
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 177.75.176.34,2804:694:3000:8000::34
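Before the real initialization, the configuration can be exercised with a dry run, which validates the file and prints what kubeadm would do without changing the host (optional step):

$ sudo kubeadm init --config=kubeadm-config.yaml --dry-run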
$ sudo kubeadm init --config=kubeadm-config.yaml
I0625 13:27:34.181396    3972 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24
[init] Using Kubernetes version: v1.24.15
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k-ctrl-pl-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k-ctrl-pl-01 localhost] and IPs [192.168.1.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k-ctrl-pl-01 localhost] and IPs [192.168.1.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 27.001244 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k-ctrl-pl-01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k-ctrl-pl-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: op8t1y.uffntz0msanhdoza
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.200:6443 --token op8t1y.uffntz0msanhdoza \
        --discovery-token-ca-cert-hash sha256:c352b052ac2b3dd802ab359856b8ea7c26fa929948ee27b9dadef92b5fcfd7cf
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
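Optionally, shell completion for kubectl can be enabled for this user (kubectl ships the completion script; the bash-completion package must be installed):

$ echo 'source <(kubectl completion bash)' >> ~/.bashrc
$ source ~/.bashrc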
$ kubectl get node -o wide
NAME           STATUS   ROLES           AGE   VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION    CONTAINER-RUNTIME
k-ctrl-pl-01   Ready    control-plane   88s   v1.24.15   192.168.1.200   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-23-amd64   cri-o://1.24.6
$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE    IP              NODE           NOMINATED NODE   READINESS GATES
kube-system   coredns-57575c5f89-cgpkn               1/1     Running   0          115s   10.85.0.2       k-ctrl-pl-01   <none>           <none>
kube-system   coredns-57575c5f89-ndsff               1/1     Running   0          115s   10.85.0.3       k-ctrl-pl-01   <none>           <none>
kube-system   etcd-k-ctrl-pl-01                      1/1     Running   0          2m9s   192.168.1.200   k-ctrl-pl-01   <none>           <none>
kube-system   kube-apiserver-k-ctrl-pl-01            1/1     Running   0          2m9s   192.168.1.200   k-ctrl-pl-01   <none>           <none>
kube-system   kube-controller-manager-k-ctrl-pl-01   1/1     Running   0          2m9s   192.168.1.200   k-ctrl-pl-01   <none>           <none>
kube-system   kube-proxy-zph26                       1/1     Running   0          115s   192.168.1.200   k-ctrl-pl-01   <none>           <none>
kube-system   kube-scheduler-k-ctrl-pl-01            1/1     Running   0          2m9s   192.168.1.200   k-ctrl-pl-01   <none>           <none>
Adding the worker nodes
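The join command printed by kubeadm init uses a bootstrap token that expires after 24 hours; if it has expired, a fresh token and the matching join command can be generated on the control plane (run on k-ctrl-pl-01):

$ sudo kubeadm token create --print-join-command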
k-node-01
- kubeadm-config.yaml
$ vim kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: 177.75.176.34:6443
    token: "cv5m0b.aehl2kux0tai4mga"
    caCertHashes:
      - "sha256:9ac25b5e2fffee49faaa4288316fbf208454574c3ab411a9bfaf9afb71a2ab3d"
    # change auth info above to match the actual token and CA certificate hash for your cluster
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 177.75.176.35,2804:694:3000:8000::35
$ sudo kubeadm join --config=kubeadm-config.yaml
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
k-node-02
- kubeadm-config.yaml

$ vim kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: 177.75.176.34:6443
    token: "cv5m0b.aehl2kux0tai4mga"
    caCertHashes:
      - "sha256:9ac25b5e2fffee49faaa4288316fbf208454574c3ab411a9bfaf9afb71a2ab3d"
    # change auth info above to match the actual token and CA certificate hash for your cluster
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 177.75.176.36,2804:694:3000:8000::36
$ sudo kubeadm join --config=kubeadm-config.yaml
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
$ kubectl get node -o wide
NAME                            STATUS   ROLES           AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION    CONTAINER-RUNTIME
k-ctrl-pl-01                    Ready    control-plane   18m     v1.26.1   177.75.176.34   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-21-amd64   cri-o://1.24.4
k-node-01.juntotelecom.com.br   Ready    <none>          3m21s   v1.26.1   177.75.176.35   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-21-amd64   cri-o://1.24.4
k-node-02.juntotelecom.com.br   Ready    <none>          60s     v1.26.1   177.75.176.36   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-21-amd64   cri-o://1.24.4
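The worker nodes show ROLES as <none>; if desired, a role label can be added purely for readability (optional, no functional effect):

$ kubectl label node k-node-01.juntotelecom.com.br node-role.kubernetes.io/worker=
$ kubectl label node k-node-02.juntotelecom.com.br node-role.kubernetes.io/worker=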
Calico network
$ kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/tigera-operator created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
$ curl -L https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml -o custom-resources.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   825  100   825    0     0   1412      0 --:--:-- --:--:-- --:--:--  1410
- custom-resources.yaml

$ cat custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
      - blockSize: 26
        cidr: 10.244.0.0/14
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()
      - blockSize: 122
        cidr: fd01::/48
        encapsulation: None
        natOutgoing: Enabled
        nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
$ kubectl apply -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
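The tigera-operator then rolls Calico out; progress can be followed through the tigerastatus resource created by the CRDs above (all components should eventually report AVAILABLE True):

$ watch kubectl get tigerastatus
$ kubectl get pods -n calico-system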
$ cat /etc/cni/net.d/10-calico.conflist
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "mtu": 0,
      "nodename_file_optional": false,
      "log_level": "Info",
      "log_file_path": "/var/log/calico/cni/cni.log",
      "ipam": {
        "type": "calico-ipam",
        "assign_ipv4": "true",
        "assign_ipv6": "true"
      },
      "container_settings": {
        "allow_ip_forwarding": false
      },
      "policy": {
        "type": "k8s"
      },
      "kubernetes": {
        "k8s_api_root": "https://10.96.0.1:443",
        "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {
      "type": "bandwidth",
      "capabilities": { "bandwidth": true }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": { "portMappings": true }
    }
  ]
}
After rebooting the server, Calico was able to assign the IPs from the configuration to the pods.
$ kubectl get pod --all-namespaces -o wide
NAMESPACE          NAME                                       READY   STATUS    RESTARTS      AGE    IP               NODE           NOMINATED NODE   READINESS GATES
calico-apiserver   calico-apiserver-6688c76778-2jkqx          1/1     Running   0             93m    10.246.220.11    k-ctrl-pl-01   <none>           <none>
calico-apiserver   calico-apiserver-6688c76778-z6m9n          1/1     Running   0             93m    10.246.220.10    k-ctrl-pl-01   <none>           <none>
calico-system      calico-kube-controllers-6b7b9c649d-lnb89   1/1     Running   0             93m    10.246.220.12    k-ctrl-pl-01   <none>           <none>
calico-system      calico-node-h4wc4                          1/1     Running   0             56s    177.75.176.36    k-node-02      <none>           <none>
calico-system      calico-node-jgx8m                          1/1     Running   0             70s    177.75.176.35    k-node-01      <none>           <none>
calico-system      calico-node-l7jj2                          1/1     Running   3             117m   177.75.176.34    k-ctrl-pl-01   <none>           <none>
calico-system      calico-typha-74f5669c89-684dq              1/1     Running   1 (15s ago)   46s    177.75.176.36    k-node-02      <none>           <none>
calico-system      calico-typha-74f5669c89-vrbjw              1/1     Running   0             110s   177.75.176.34    k-ctrl-pl-01   <none>           <none>
calico-system      csi-node-driver-8vs84                      2/2     Running   0             68s    10.246.228.193   k-node-01      <none>           <none>
calico-system      csi-node-driver-f8kgm                      2/2     Running   0             51s    10.245.15.193    k-node-02      <none>           <none>
calico-system      csi-node-driver-j4h87                      2/2     Running   6             117m   10.246.220.9     k-ctrl-pl-01   <none>           <none>
kube-system        coredns-787d4945fb-5w9l7                   1/1     Running   3             144m   10.246.220.8     k-ctrl-pl-01   <none>           <none>
kube-system        coredns-787d4945fb-fjvxj                   1/1     Running   3             144m   10.246.220.7     k-ctrl-pl-01   <none>           <none>
kube-system        etcd-k-ctrl-pl-01                          1/1     Running   3             144m   177.75.176.34    k-ctrl-pl-01   <none>           <none>
kube-system        kube-apiserver-k-ctrl-pl-01                1/1     Running   3             144m   177.75.176.34    k-ctrl-pl-01   <none>           <none>
kube-system        kube-controller-manager-k-ctrl-pl-01       1/1     Running   3             144m   177.75.176.34    k-ctrl-pl-01   <none>           <none>
kube-system        kube-proxy-bjbvk                           1/1     Running   0             56s    177.75.176.36    k-node-02      <none>           <none>
kube-system        kube-proxy-jfb22                           1/1     Running   0             70s    177.75.176.35    k-node-01      <none>           <none>
kube-system        kube-proxy-zbbkg                           1/1     Running   3             144m   177.75.176.34    k-ctrl-pl-01   <none>           <none>
kube-system        kube-scheduler-k-ctrl-pl-01                1/1     Running   3             144m   177.75.176.34    k-ctrl-pl-01   <none>           <none>
tigera-operator    tigera-operator-54b47459dd-mvxdm           1/1     Running   0             109s   177.75.176.34    k-ctrl-pl-01   <none>           <none>
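To confirm that new pods really receive both an IPv4 and an IPv6 address from the Calico pools, a throwaway deployment can be inspected through status.podIPs (nginx-test is just an illustrative name):

$ kubectl create deployment nginx-test --image=nginx
$ kubectl get pod -l app=nginx-test -o jsonpath='{.items[0].status.podIPs}'
$ kubectl delete deployment nginx-test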
