How to solve the Kubernetes node NotReady problem
This article walks through troubleshooting a Kubernetes node stuck in NotReady: the symptom, the step-by-step diagnosis, and the fix. The method is simple and practical.
Environment:
[root@k8s-01 ing]# kubectl version
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-01 ing]# kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
k8s-01   Ready    control-plane,master   18d   v1.21.0
k8s-02   Ready    worker                 18d   v1.21.0
k8s-03   Ready    worker                 18d   v1.21.0
Symptom:
The k8s-02 node was in NotReady state. The Terminating pods' timestamps pointed to roughly seven hours earlier, and the errors in the messages log also began about seven hours before.
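A quick way to confirm this picture (node name taken from the cluster listing above) is to look at the node conditions and the stuck pods:

kubectl describe node k8s-02                                                  # node conditions and last heartbeat time
kubectl get pods -A --field-selector spec.nodeName=k8s-02 | grep Terminating  # stuck pods with their age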
Troubleshooting
1. Check network connectivity
The node was in a one-way connectivity state: the master and the other nodes could ping k8s-02, but k8s-02 could not ping any other machine.
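A minimal sketch of that connectivity test, using the master API address (10.170.2.32) and the k8s-02 address (10.170.36.46) that appear later in this article:

# on the master: succeeds
ping -c 3 10.170.36.46
# on k8s-02 toward the master: times out
ping -c 3 10.170.2.32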
The network clearly had a problem, so I checked the Calico pod status: it was fine, and calico-kube-controllers had already failed over to another node.
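The check itself is a one-liner:

kubectl get pods -n kube-system -o wide | grep calico   # all Running; calico-kube-controllers rescheduled off k8s-02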
The describe output was as follows:
Capturing k8s-02's ICMP packets on the master showed the echo requests arriving, but the master never replied:
tcpdump -i eth0 icmp and host 10.170.36.46
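Running a mirror capture on k8s-02 itself (a sketch; the interface name eth0 is assumed to match the command above) shows whether the echo requests actually leave the node and whether any replies ever come back:

tcpdump -ni eth0 icmp and host 10.170.2.32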
2. Check the kubelet
At this point the network investigation had stalled, so I turned to the kubelet, guided by the describe output and some web searching:
The search results all pointed to firewall rules, disabling swap, and so on, none of which applied here. Rebooting the node and restarting the kubelet did not help either!
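For completeness, these are the standard checks those posts suggest; all of them were already clean in this case:

swapoff -a                    # swap must be off for the kubelet
free -m                       # the Swap line should read 0
systemctl status firewalld    # should be inactive, or have the k8s ports open
systemctl restart kubelet
journalctl -u kubelet -f      # watch whether the errors keep recurring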
Checking the messages log:
May 11 21:27:58 k8s-02 kubelet: I0511 21:27:58.469919 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:27:59 k8s-02 kubelet: I0511 21:27:59.469278 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:00 k8s-02 kubelet: I0511 21:28:00.469261 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:00 k8s-02 kubelet: E0511 21:28:00.598812 651 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"k8s-02.167e04da740613a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"k8s-02", UID:"k8s-02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node k8s-02 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"k8s-02"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc01ebe130bad1ba2, ext:12673521163, loc:(*time.Location)(0x74ad9e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc01ebe130bad1ba2, ext:12673521163, loc:(*time.Location)(0x74ad9e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.170.2.32:6443/api/v1/namespaces/default/events": dial tcp 10.170.2.32:6443: i/o timeout' (may retry after sleeping)
May 11 21:28:01 k8s-02 kubelet: I0511 21:28:01.469790 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:02 k8s-02 kubelet: I0511 21:28:02.406629 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:02 k8s-02 kubelet: I0511 21:28:02.406669 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:02 k8s-02 kubelet: I0511 21:28:02.469338 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:03 k8s-02 kubelet: I0511 21:28:03.407443 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:03 k8s-02 kubelet: I0511 21:28:03.469928 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:03 k8s-02 kubelet: I0511 21:28:03.617223 651 trace.go:205] Trace[766683077]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (11-May-2021 21:27:33.615) (total time: 30001ms):
May 11 21:28:03 k8s-02 kubelet: Trace[766683077]: [30.001402015s] [30.001402015s] END
May 11 21:28:03 k8s-02 kubelet: E0511 21:28:03.617257 651 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.170.2.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.170.2.32:6443: i/o timeout
May 11 21:28:04 k8s-02 kubelet: I0511 21:28:04.407120 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:04 k8s-02 kubelet: I0511 21:28:04.469376 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:05 k8s-02 kubelet: I0511 21:28:05.407095 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:05 k8s-02 kubelet: I0511 21:28:05.469475 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:05 k8s-02 kubelet: I0511 21:28:05.769847 651 trace.go:205] Trace[347094812]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (11-May-2021 21:27:35.768) (total time: 30000ms):
May 11 21:28:05 k8s-02 kubelet: Trace[347094812]: [30.000987614s] [30.000987614s] END
May 11 21:28:05 k8s-02 kubelet: E0511 21:28:05.769907 651 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.170.2.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.170.2.32:6443: i/o timeout
May 11 21:28:06 k8s-02 kubelet: I0511 21:28:06.407171 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:06 k8s-02 kubelet: I0511 21:28:06.469821 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:06 k8s-02 kubelet: I0511 21:28:06.469863 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:06 k8s-02 kubelet: E0511 21:28:06.469887 651 kubelet.go:2298] "Error getting node" err="nodes have not yet been read at least once, cannot construct node object"
May 11 21:28:06 k8s-02 kubelet: I0511 21:28:06.570550 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:06 k8s-02 kubelet: I0511 21:28:06.570599 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:07 k8s-02 kubelet: I0511 21:28:07.407416 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:07 k8s-02 kubelet: I0511 21:28:07.571241 651 kubelet.go:461] "Kubelet nodes not sync"
May 11 21:28:08 k8s-02 kubelet: I0511 21:28:08.407052 651 kubelet.go:461] "Kubelet nodes not sync"
I searched these log messages online as well; in short, the node simply could not reach the master...
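To pin down exactly when the failures began (log path per the CentOS default implied by the prompts above):

grep -m1 'i/o timeout' /var/log/messages   # first connection error, i.e. when the trouble started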
3. Work from the timestamp
Check when the pods stopped:
The time the errors began in the messages log matched the time the pods stopped, so the task became finding out what operation was performed at that moment; undo it, and the node should recover.
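A sketch of recovering what was run at that moment, assuming the change was made interactively from a root shell (the timestamps are only reliable if history timestamping was already enabled):

export HISTTIMEFORMAT='%F %T '   # show a timestamp in front of each history entry
history | less                   # scroll to the time the errors began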
4. The "universal" fix
The restart hammer: rebooting the node and restarting the kubelet had no effect!
Solution
The shell history at that timestamp revealed an externalIp operation on a Service, and the externalIp used was identical to k8s-02's own IP. Deleting the externalIp restored the network, and the node went Ready!
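A hedged illustration of locating and removing the offending field; the Service name and namespace here are hypothetical stand-ins, while the address is k8s-02's real IP from the tcpdump step:

kubectl get svc -A | grep 10.170.36.46   # find the Service whose EXTERNAL-IP is the node's own address
kubectl patch svc my-svc -n default --type=json -p='[{"op":"remove","path":"/spec/externalIPs"}]'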
Note: switching kube-proxy from ipvs to iptables mode makes it possible to use such an externalIp normally! In ipvs mode, kube-proxy binds every Service externalIp to the kube-ipvs0 dummy interface on all nodes, so an externalIp equal to a node's real IP makes every other node answer that address locally instead of sending replies back over the network, which is exactly the one-way connectivity seen above; iptables mode only installs NAT rules and never claims the address itself.
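On a kubeadm-built cluster (an assumption; this one appears to be kubeadm-based), the mode switch looks like this:

kubectl -n kube-system edit configmap kube-proxy          # change mode: "ipvs" to mode: "iptables"
kubectl -n kube-system delete pod -l k8s-app=kube-proxy   # recreate the kube-proxy pods to apply it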