
  

Preface: deploying Kubernetes requires pulling images from the k8s.gcr.io registry, which is unreachable from mainland China because of the firewall. This article describes a workaround so that the deployment can complete normally.

  

Problem description: while deploying a Kubernetes v1.22.1 cluster, kubeadm init needs to pull the required images from k8s.gcr.io, and fails:

  

You can also perform this action in beforehand using 'kubeadm config images pull'
Some fatal errors occurred: failed to pull image: exit status 1 (one such error for each required image)

Solution: the docker.io registry hosts mirrors of the Google container images, so the images can be pulled with the following commands:

  

docker pull docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.22.1
docker pull docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.22.1
docker pull docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.22.1
docker pull docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.22.1
docker pull docker.io/mirrorgooglecontainers/pause:3.5
docker pull docker.io/mirrorgooglecontainers/etcd-amd64:3.5.0
docker pull docker.io/coredns/coredns:1.8.4

Then retag the images to the names kubeadm expects:

docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.22.1 k8s.gcr.io/kube-proxy-amd64:v1.22.1
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.22.1 k8s.gcr.io/kube-scheduler-amd64:v1.22.1
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.22.1 k8s.gcr.io/kube-apiserver-amd64:v1.22.1
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.22.1 k8s.gcr.io/kube-controller-manager-amd64:v1.22.1
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.5.0 k8s.gcr.io/etcd-amd64:3.5.0
docker tag docker.io/mirrorgooglecontainers/pause:3.5 k8s.gcr.io/pause:3.5
docker tag docker.io/coredns/coredns:1.8.4 k8s.gcr.io/coredns:1.8.4

Use docker rmi to delete the now-redundant mirror-named images. Running docker images afterwards shows that the required images are all present (see the listing below), so deployment can continue.
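The pull/tag/rmi sequence above is easy to mistype by hand. As a sketch (assuming only bash; the mirror_cmds helper is a hypothetical name of mine), the same commands can be generated by a small script; review its output, then pipe it to sh to execute:

```shell
#!/usr/bin/env bash
# Sketch: print the docker pull/tag/rmi commands for every image the
# article pulls from the docker.io mirrors. mirror_cmds is a helper
# name invented here, not part of any tool.
set -euo pipefail

VERSION="v1.22.1"
MIRROR="docker.io/mirrorgooglecontainers"

# Emit the three commands that fetch one mirrored image, retag it under
# the k8s.gcr.io name kubeadm expects, and drop the mirror-named copy.
mirror_cmds() {
  local img="$1"
  echo "docker pull ${MIRROR}/${img}"
  echo "docker tag ${MIRROR}/${img} k8s.gcr.io/${img}"
  echo "docker rmi ${MIRROR}/${img}"
}

for img in "kube-apiserver-amd64:${VERSION}" \
           "kube-controller-manager-amd64:${VERSION}" \
           "kube-scheduler-amd64:${VERSION}" \
           "kube-proxy-amd64:${VERSION}" \
           "pause:3.5" \
           "etcd-amd64:3.5.0"; do
  mirror_cmds "${img}"
done

# CoreDNS lives under its own docker.io namespace, not the mirror repo.
echo "docker pull docker.io/coredns/coredns:1.8.4"
echo "docker tag docker.io/coredns/coredns:1.8.4 k8s.gcr.io/coredns:1.8.4"
```

Printing the commands instead of running them directly makes it safe to eyeball the list before executing, e.g. `bash mirror-images.sh | sh`.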

  

# docker images
REPOSITORY                          TAG       IMAGE ID       CREATED      SIZE
k8s.gcr.io/kube-proxy-amd64         v1.22.1   bea694275d97   1 days ago   97.8 MB
k8s.gcr.io/kube-scheduler-amd64     v1.22.1   ca43b177bese   1 days ago   56.8 MB
k8s.gcr.io/kube-apiserver-amd64     v1.22.1   3de571b6587b   1 days ago   187 MB
coredns/coredns                     1.8.4     b3154sdrecfc   1 days ago   45.6 MB
k8s.gcr.io/coredns                  1.8.4     b3b94275d97c   1 days ago   45.6 MB
k8s.gcr.io/etcd-amd64               3.5.0     b8d1f5sa24f7   1 days ago   219 MB
k8s.gcr.io/pause                    3.5       d6csa23rdsa1   1 days ago   742 kB

With the images in place, re-run kubeadm init.
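Before re-running kubeadm init, it can be worth confirming that every image kubeadm wants is present locally. A sketch, assuming kubeadm and docker are on PATH (the report_missing helper and the /tmp file names are hypothetical choices of mine):

```shell
#!/usr/bin/env bash
# Sketch: compare the image list kubeadm wants against what docker has
# loaded, and print anything still missing. report_missing and the /tmp
# paths are names invented here.
set -euo pipefail

# report_missing REQUIRED PRESENT: print images listed in REQUIRED
# (one per line) that do not appear anywhere in PRESENT.
report_missing() {
  comm -23 <(sort -u "$1") <(sort -u "$2")
}

if command -v kubeadm >/dev/null 2>&1 && command -v docker >/dev/null 2>&1; then
  kubeadm config images list --kubernetes-version v1.22.1 > /tmp/required.txt
  docker images --format '{{.Repository}}:{{.Tag}}' > /tmp/present.txt
  report_missing /tmp/required.txt /tmp/present.txt
fi
```

Empty output means nothing is missing; any printed line is an image that would trigger another registry pull during init.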

  

# kubeadm init --kubernetes-version=v1.22.1 --apiserver-advertise-address=192.168.1.18 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.22.1
[preflight] Running pre-flight checks
	[WARNING]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING]: hostname "k8s-master" could not be reached
	[WARNING]: hostname "k8s-master": lookup k8s-master on 192.168.1.1:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names and IPs [192.168.1.18 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names and IPs [192.168.1.18 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[apiclient] All control plane components are healthy after 6.002108 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints
[bootstrap-token] Using token: 9t2nu9.00ieyfqmc50dgub6
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f .yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.18:6443 --token 9t2nu9.00ieyfqmc50dgub6 \
	--discovery-token-ca-cert-hash sha256:183b6c95b4e49f0bd4074c61aeefc56d70215240fbeb7a633afe3526006c4dc9

Initialization succeeded; the problem is solved!
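One practical note on the kubeadm join line above: bootstrap tokens such as 9t2nu9.* expire (24 hours by default), so workers added later need a fresh one. A sketch, assuming the control plane above is running (the digest_only helper is a hypothetical name of mine):

```shell
#!/usr/bin/env bash
# Sketch: regenerate a join command after the original bootstrap token
# has expired. digest_only is a helper name invented here.
set -euo pipefail

# Strip the "SHA2-256(stdin)= " / "(stdin)= " prefix from `openssl dgst`
# output, leaving only the hex digest.
digest_only() { sed 's/^.*= //'; }

if command -v kubeadm >/dev/null 2>&1; then
  # Easiest route: have kubeadm mint a token and print the full command.
  kubeadm token create --print-join-command

  # The --discovery-token-ca-cert-hash value can also be recomputed from
  # the cluster CA certificate by hand:
  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 | digest_only
fi
```

The hash recomputed this way should match the sha256:… value printed at init time, since both are digests of the same CA public key.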

  


  

