Operations on Pod resources in Kubernetes
I. Resource limits:
Resource requests and limits for Pods and containers:
spec.containers[].resources.limits.cpu      # CPU limit
spec.containers[].resources.limits.memory   # memory limit
spec.containers[].resources.requests.cpu    # base CPU allocated at creation
spec.containers[].resources.requests.memory # base memory allocated at creation
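For reference, CPU values are given in whole cores or millicores (500m = 0.5 core) and memory in binary units (128Mi = 128 MiB); a container that exceeds its memory limit is OOM-killed, while CPU usage above the limit is throttled.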
Example (run on master1):
[root@master1 demo]# vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend            # name of the Pod resource
spec:
  containers:
  - name: db                # first container
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: wp                # second container
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
# When you are done, press Esc to leave insert mode, then type :wq to save and quit
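Summed over its two containers, this Pod requests 500m of CPU and 128Mi of memory and is capped at 1000m of CPU and 256Mi of memory; the scheduler will only place it on a node whose unreserved capacity can cover the requests.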
`Create the resource`
[root@master1 demo]# kubectl apply -f pod2.yaml
pod/frontend created
`View detailed information about the resource`
[root@master1 demo]# kubectl describe pod frontend
Name: frontend
Namespace: default
Priority: 0
PriorityClassName:
Node: 192.168.18.148/192.168.18.148 #the Pod was scheduled to node1
......multiple lines omitted here
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 89s default-scheduler Successfully assigned default/frontend to 192.168.18.148
Normal Pulling 88s kubelet, 192.168.18.148 pulling image "mysql"
Normal Pulled 23s kubelet, 192.168.18.148 Successfully pulled image "mysql"
Normal Created 23s kubelet, 192.168.18.148 Created container
Normal Started 22s kubelet, 192.168.18.148 Started container
Normal Pulling 22s kubelet, 192.168.18.148 pulling image "wordpress" #still pulling the wordpress image
[root@master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend 2/2 Running 0 4m26s
#both containers are now in the Running state
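If metrics-server is deployed in the cluster (an assumption; it is not installed as part of this walkthrough), the Pod's live consumption can be compared against the requests above:
kubectl top pod frontend    # shows current CPU/memory usage; requires metrics-server (assumed)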
`Check Pod resource usage on the corresponding node`
[root@master1 demo]# kubectl describe nodes 192.168.18.148
Name: 192.168.18.148
......multiple lines omitted here
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 550m (55%) 1100m (110%) #total CPU requests and limits on this node
memory 228Mi (13%) 556Mi (32%) #total memory requests and limits on this node
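Of these totals, the frontend Pod accounts for 500m of the CPU requests and 1000m of the CPU limits (250m and 500m per container × 2), plus 128Mi/256Mi of memory; the remaining 50m/100m of CPU and 100Mi/300Mi of memory come from Pods that were already running on this node.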
`View the namespaces`
[root@master1 demo]# kubectl get ns
NAME STATUS AGE
default Active 13d
kube-public Active 13d
kube-system Active 13d
#without -n to specify a namespace, only these three default namespaces are shown
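To look at resources in a specific namespace, pass -n, for example:
kubectl get pods -n kube-system    # list the Pods in the kube-system namespace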
II. Restart policies:
1. Always: always restart the container after it terminates and exits; this is the default policy
2. OnFailure: restart the container only when it exits abnormally (non-zero exit code)
3. Never: never restart the container after it exits
Note: Kubernetes cannot restart a Pod resource in place; a Pod can only be deleted and recreated
Example (run on master1):
`The default restart policy is Always`
[root@master1 demo]# kubectl edit deploy
#type /restartPolicy to search
restartPolicy: Always #defaults to Always when no restart policy is set
[root@master1 demo]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:                  # container arguments
    - /bin/sh              # run in a shell
    - -c                   # execute the following command string
    - sleep 30; exit 3     # sleep 30s after startup, then exit abnormally with a non-zero code
# When you are done, press Esc to leave insert mode, then type :wq to save and quit
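Because restartPolicy is not specified in this manifest, it defaults to Always, so the kubelet restarts the container every time it exits with code 3; that is what drives the RESTARTS count in the outputs below.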
[root@master1 demo]# kubectl apply -f pod3.yaml
pod/foo created
[root@master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
foo 0/1 ContainerCreating 0 18s
#the RESTARTS column shows the restart count, currently 0
[root@master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
foo 0/1 Error 0 62s
#the Error status appears because the container was configured to exit abnormally; checking again a little later, the RESTARTS value will have changed to 1
`This behavior follows the restart policy defined for the Pod`
[root@master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
foo 1/1 Running 1 3m13s
`Delete the previously created resources first, since they still consume node resources`
[root@master1 demo]# kubectl delete -f pod3.yaml
pod "foo" deleted
[root@master1 demo]# kubectl delete -f pod2.yaml
pod "frontend" deleted
`Add the restart policy Never`
[root@master1 demo]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 10             # change the sleep time to 10s
  restartPolicy: Never     # add the restart policy
# After editing, press Esc to leave insert mode, then type :wq to save and quit
[root@master1 demo]# kubectl apply -f pod3.yaml
pod/foo created
[root@master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
foo 1/1 Running 0 14s
[root@master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
foo 0/1 Completed 0 65s
#the container finished its 10-second sleep and exited normally; because the restart policy was set to Never, the Pod is not restarted and stays in the Completed state
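To confirm that the container really exited with code 0, the terminated state can be read directly from the Pod's status (the field is populated once the container has terminated):
kubectl get pod foo -o jsonpath='{.status.containerStatuses[0].state.terminated.exitCode}'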
III. Health checks (also called probes)
Note: both probe types can be defined at the same time
livenessProbe: if the check fails, the container is killed and then handled according to the Pod's restartPolicy
readinessProbe: if the check fails, Kubernetes removes the Pod from the Service's endpoints
A probe supports three check methods:
httpGet: sends an HTTP request; a status code of at least 200 and below 400 means success
exec: runs a shell command inside the container; an exit code of 0 means success
tcpSocket: attempts to open a TCP socket; success if the connection is established
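The example below only demonstrates the exec method. For reference, a minimal sketch of the other two methods is shown here (the Pod name, image, port and path are assumptions and are not part of this walkthrough):
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo               # hypothetical name
spec:
  containers:
  - name: web
    image: nginx                 # assumed image that serves HTTP on port 80
    livenessProbe:
      tcpSocket:
        port: 80                 # liveness succeeds if a TCP connection to port 80 can be opened
      initialDelaySeconds: 5
      periodSeconds: 5
    readinessProbe:
      httpGet:
        path: /                  # readiness succeeds if GET / returns a status code of 200-399
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5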
Example using the exec method (run on master1):
[root@master1 demo]# vim pod4.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy   # create an empty file, sleep 30s, then delete the file
    livenessProbe:
      exec:                          # health check via a command
        command:                     # the command to run
        - cat                        # read...
        - /tmp/healthy               # ...the file created above
      initialDelaySeconds: 5         # start checking 5 seconds after the container starts
      periodSeconds: 5               # check every 5 seconds
`While the file still exists, the check returns 0; once the 30-second sleep ends and the file has been removed, the check returns a non-zero value`
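Note that a liveness probe must fail failureThreshold times in a row (3 by default) before the container is killed, so with periodSeconds: 5 the kill happens roughly 15 seconds after /tmp/healthy disappears; combined with the shell command itself exiting after about 30 seconds, this is why the outputs below alternate between Running, Completed/CrashLoopBackOff, and a growing RESTARTS count under the default Always policy.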
`Apply the resource`
[root@master1 demo]# kubectl apply -f pod4.yaml
pod/liveness-exec created
[root@master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 0 24s
[root@master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness-exec 0/1 Completed 0 53s
[root@master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 1 67s
[root@master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness-exec 0/1 CrashLoopBackOff 1 109s
[root@master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 2 2m5s
#the status keeps changing because the probe keeps running and the restart policy keeps being applied, so the RESTARTS value keeps increasing