
k8s Learning Notes

install

Hostname        OS                IP               Role     Installed
k8s-master-01   CentOS 7.6.1810   192.168.152.23   master   docker, kubeadm, kubectl, kubelet
k8s-node-01     CentOS 7.6.1810   192.168.152.26   node     docker, kubeadm, kubectl, kubelet
k8s-node-02     CentOS 7.6.1810   192.168.152.27   node     docker, kubeadm, kubectl, kubelet

Pre-installation preparation (run on every machine)

1. Set the hostname

[root@centos7 ~]# hostnamectl set-hostname k8s-master-01  && bash
[root@k8s-master-01 ~]#

[root@centos7 ~]# hostnamectl set-hostname k8s-node-01  && bash
[root@k8s-node-01 ~]#

[root@centos7 ~]# hostnamectl set-hostname k8s-node-02  && bash
[root@k8s-node-02 ~]#

2. Edit the hosts file

cat >> /etc/hosts << EOF
192.168.152.23    k8s-master-01
192.168.152.26    k8s-node-01
192.168.152.27    k8s-node-02
EOF

3. Disable swap

# Disable temporarily
swapoff -a
# Disable permanently: to keep swap off after a reboot, also comment out the swap entry in /etc/fstab
sed -i.bak '/swap/s/^/#/' /etc/fstab

4. Adjust kernel parameters

# Make bridged traffic visible to iptables
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Load the bridge netfilter module first, otherwise the bridge-nf sysctls below are unavailable
modprobe br_netfilter
# Check that the module is loaded
lsmod | grep br_netfilter
# Apply the sysctl settings
sysctl -p /etc/sysctl.d/k8s.conf

5. Disable SELinux (Linux's security mechanism)

sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

6. Disable the firewall

firewall-cmd --state          # check firewall status
systemctl stop firewalld      # stop the firewalld service
systemctl disable firewalld   # disable it at boot

7. Configure passwordless SSH

# Generate a key pair
ssh-keygen -t rsa
# Copy the public key to the other hosts
ssh-copy-id <hostname>
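
For example, assuming the three hostnames above already resolve through /etc/hosts and root SSH is allowed, the key can be pushed to every node in one loop:

for h in k8s-master-01 k8s-node-01 k8s-node-02; do
  ssh-copy-id root@$h    # prompts once for each host's root password
done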

8. Configure the yum repository

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache

9. Install and configure Docker

# Install dependencies
yum install -y yum-utils   device-mapper-persistent-data   lvm2
# Add the Docker repository
yum-config-manager  --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# List the available Docker versions
yum list docker-ce --showduplicates 
# Install Docker
yum install -y docker-ce docker-ce-cli containerd.io
# Or install a specific version
# yum install docker-ce-18.09.6 docker-ce-cli-18.09.6 containerd.io
# Check the Docker version
docker --version
# Configure the registry mirror; switching the cgroup driver to systemd also silences kubeadm's cgroup-driver warning
mkdir /etc/docker
cat <<EOF> /etc/docker/daemon.json
{
	"exec-opts": ["native.cgroupdriver=systemd"], #修改cgroupdriver是为了消除告警
	"registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
}
EOF
# Start Docker and enable it at boot
systemctl start docker && systemctl enable docker

# Install bash-completion for command completion
yum -y install bash-completion
source /etc/profile.d/bash_completion.sh

# Verify
docker --version
docker run hello-world

Configure the Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Refresh the cache
yum clean all &&  yum -y makecache
  • [] — the repository id in square brackets; must be unique and identifies the repo
  • name — repository name, free-form
  • baseurl — repository URL
  • enabled — whether the repo is enabled; 1 means enabled (the default)
  • gpgcheck — whether to verify the signatures of packages from this repo; 1 means verify
  • repo_gpgcheck — whether to verify the repo metadata (the package list); 1 means verify
  • gpgkey=URL — location of the public key used for signature verification; required when gpgcheck=1, unnecessary when gpgcheck=0

Install the master node

Check available versions

 yum list kubelet --showduplicates | sort -r 

Install kubelet, kubeadm and kubectl

yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
# Check the version
kubelet --version
Kubernetes v1.23.6
  • kubelet runs on every node in the cluster and is responsible for starting Pods and containers
  • kubeadm is the tool used to initialize and bootstrap the cluster
  • kubectl is the command-line client for talking to the cluster; with it you deploy and manage applications, inspect resources, and create, delete and update components

Start kubelet

# It is fine to start this after the node has been initialized; starting it earlier may fail
systemctl start kubelet && systemctl enable kubelet 

kubectl command completion

echo "source <(kubectl completion bash)" >> ~/.bash_profile
source .bash_profile 

Download the images

Almost all of the Kubernetes components and Docker images are hosted on Google's own registry, which may not be reachable directly. The workaround used here is to pull the images from the Aliyun mirror registry and, once they are local, retag them back to the default image names.

# Script that downloads the images
[root@k8s-master-01 ~]# cat image.sh 
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/google_containers
version=v1.23.6
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
  docker pull $url/$imagename
done

# Run the script
[root@k8s-master-01 ~]# sh image.sh
# List the images
[root@k8s-master-01 ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.23.6   d521dd763e2e   3 weeks ago     130MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.23.6   3a5aa3a515f5   3 weeks ago     51MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.23.6   2ae1ba6417cb   3 weeks ago     110MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.23.6   586c112956df   3 weeks ago     119MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   latest    5185b96f0bec   2 months ago    48.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.5.1-0   aebe758cef4c   3 months ago    299MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.6       221177c6082a   4 months ago    711kB

Initialize the master

kubeadm init \
--apiserver-advertise-address=192.168.152.23  \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.23.6 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=all

If initialization succeeds, the end of the output looks like this:

[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.152.23:6443 --token f7iyyt.xiglxt9h53ubkbjs \
        --discovery-token-ca-cert-hash sha256:a0f1320f210030ca756d2ffa4f9ca3188a41f4cffb9a590108e8f40370799650
        
        
# Run the following
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source .bash_profile 
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
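
At this point kubectl can reach the API server, but the master will report NotReady until a network add-on is installed in the next step; a quick check:

kubectl get nodes
kubectl get pods -n kube-system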

Install the network add-on (flannel)

[root@k8s-master-01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
# Check that it is running
[root@k8s-master-01 ~]#  kubectl get pods -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-zl25r   1/1     Running   0          11s

Master node configuration

Taints

A taint marks a node; a Pod will not be scheduled onto a tainted node unless it tolerates the taint.

By default the cluster does not schedule Pods onto the master. If you do want Pods scheduled on the master, run the following:

# Show the taints
[root@k8s-master-01 ~]# kubectl describe node  k8s-master-01 |grep -i taints
Taints:             node-role.kubernetes.io/master:NoSchedule
# Remove the taint
[root@master ~]# kubectl taint nodes  k8s-master-01   node-role.kubernetes.io/master-
node/master untainted
# Taint mechanics
kubectl taint node [node] key=value:[effect]
     where [effect] is one of: [ NoSchedule | PreferNoSchedule | NoExecute ]
      NoSchedule: Pods will never be scheduled onto the node
      PreferNoSchedule: the scheduler tries to avoid the node
      NoExecute: new Pods are not scheduled and existing Pods on the node are evicted
# Add a taint
[root@master ~]# kubectl taint node k8s-master-01 key1=value1:NoSchedule
node/master tainted
[root@master ~]# kubectl describe node k8s-master-01 |grep -i taints
Taints:             key1=value1:NoSchedule
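
The counterpart of a taint is a toleration on the Pod. As a minimal sketch (the Pod name and image are illustrative only, not part of the notes above), a Pod that tolerates the key1=value1:NoSchedule taint just added could be declared like this:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo        # hypothetical name, for illustration
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:                 # allows scheduling despite the key1=value1:NoSchedule taint
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
EOF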

Worker node installation

1. Install kubelet, kubeadm and kubectl

2. Download the images

Same as on the master node.

3. Join the cluster

Run the following on the master:

# List the tokens (the steps below are only needed if the token has expired)
[root@k8s-master-01 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
f7iyyt.xiglxt9h53ubkbjs   23h         2022-08-05T06:48:32Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
# Create a new token
[root@k8s-master-01 ~]# kubeadm token create
1zl3he.fxgz2pvxa3qkwxln
# Compute the new CA certificate hash
[root@k8s-master-01 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'
# Join a node to the cluster
[root@node01 ~]# kubeadm join 172.27.9.131:6443 --token 1zl3he.fxgz2pvxa3qkwxln  --discovery-token-ca-cert-hash sha256:5f656ae26b5e7d4641a979cbfdffeb7845cc5962bbfcd1d5435f00a25c02ea50

Run on each worker node

# If you lost the join command, regenerate it with:
kubeadm token create --print-join-command
# Join the cluster
[root@k8s-node-01 ~]# kubeadm join 192.168.152.23:6443 --token f7iyyt.xiglxt9h53ubkbjs \
>         --discovery-token-ca-cert-hash sha256:a0f1320f210030ca756d2ffa4f9ca3188a41f4cffb9a590108e8f40370799650
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# Reset a node (undo the join)
[root@k8s-node-01 ~]# kubeadm reset

Install the Dashboard

Download the YAML

[root@k8s-master-01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml

Apply the YAML file

[root@k8s-master-01 ~]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
# Check the status
[root@k8s-master-01 ~]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-799d786dbf-5bj68   1/1     Running   0          59s
kubernetes-dashboard-546cbc58cd-7bcnp        1/1     Running   0          59s

Change the Service type to NodePort

# Edit the Service and change its type from ClusterIP to NodePort
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Check the exposed NodePort

[root@k8s-master-01 ~]# kubectl get svc -A | grep kubernetes-dashboard
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.102.216.65   <none>        8000/TCP                 3m5s
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.106.10.85    <none>        443:31872/TCP            3m6s
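
If you prefer not to open an editor, the same change can be made non-interactively; a sketch using kubectl patch:

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'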

Create an admin user

The ServiceAccount that ships with the Dashboard has very limited permissions, so create an admin user and log in with its token.

Create a file named dashboard-serviceaccount.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
# Apply it
[root@k8s-master-01 ~]# kubectl apply -f dashboard-serviceaccount.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Access the Dashboard

Open https://192.168.152.23:31872 in a browser.


Log in with a token
[root@k8s-master-01 ~]# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
eyJhbGciOiJSUzI1NiIsImtpZCI6ImtEa0RsamNGejRSOWdyQ1o4bGtncDNDRXJzZEhqd2o5b1BTZnJraWhnbWMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXFnOHZ4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwNGYwMjc3Yi0wZGNlLTQ4M2EtYWIwMi03NjdlMjQ1OThkYTYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.ObY5Xjbt0iHFj86kjWf6v9qfgt1Xf86YhlW9is1ryiNqxGDKn7icMoozLYHVJEupnUSZF7BFelVHOVfcDL-RPsq_qVj_JiWDM-iWohvNTQHD01enAWk9M2Gh3bUyiPeFpiArkfjnB76eYkPuy5OFrvdUMozcU-eha7bBMc4wa3b5pbxBunxpi5wdWFA7QsnvMSB03epHZ4QK9950I94gSXX9rNQ8cTIzxTn3bJtWvuv5YrNu7-svfemWOYSocqed0dM2auamdOOmkRlhwFwHeS3znTQayiHYGewB7JGzwNg1lJLk13JS-7ZmCQhdSucsQtYmgp8IK7h5qInMJowzlA


Log in with a kubeconfig

Omitted for now...

Deploy metrics-server

metrics-server is a third-party apiserver that extends Kubernetes. Its job is to collect CPU, memory and other resource metrics from Pods and nodes and expose them through an API that the kubectl top command consumes. By default kubectl top does not work, because the built-in apiserver has no endpoint that serves these core metrics. kubectl top shows the resource usage of Pods and nodes and depends on the Metrics API being available.

Without metrics-server deployed, this is what kubectl top pod/node reports when you try to look at Pod or node CPU and memory usage:

[root@master01 ~]# kubectl top
Display Resource (CPU/Memory/Storage) usage.
 
 The top command allows you to see the resource consumption for nodes or pods.
 
 This command requires Metrics Server to be correctly configured and working on the server.
 
Available Commands:
  node        Display Resource (CPU/Memory/Storage) usage of nodes
  pod         Display Resource (CPU/Memory/Storage) usage of pods
 
Usage:
  kubectl top [flags] [options]
 
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
[root@master01 ~]# kubectl top pod
error: Metrics API not available
[root@master01 ~]# kubectl top node
error: Metrics API not available
[root@master01 ~]#

Note: with no metrics-server deployed, kubectl top pod/node tells us that the Metrics API is not available.

Install metrics-server
[root@master01 ~]# mkdir metrics-server
[root@master01 ~]# cd metrics-server
[root@master01 metrics-server]# wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.0/components.yaml
--2021-01-14 23:54:30--  https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.0/components.yaml
Resolving github.com (github.com)... 52.74.223.119
Connecting to github.com (github.com)|52.74.223.119|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/92132038/c700f080-1f7e-11eb-9e30-864a63f442f4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210114%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210114T155432Z&X-Amz-Expires=300&X-Amz-Signature=fc5a6f41ca50ec22e87074a778d2cb35e716ae6c3231afad17dfaf8a02203e35&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=92132038&response-content-disposition=attachment%3B%20filename%3Dcomponents.yaml&response-content-type=application%2Foctet-stream [following]
--2021-01-14 23:54:32--  https://github-production-release-asset-2e65be.s3.amazonaws.com/92132038/c700f080-1f7e-11eb-9e30-864a63f442f4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210114%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210114T155432Z&X-Amz-Expires=300&X-Amz-Signature=fc5a6f41ca50ec22e87074a778d2cb35e716ae6c3231afad17dfaf8a02203e35&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=92132038&response-content-disposition=attachment%3B%20filename%3Dcomponents.yaml&response-content-type=application%2Foctet-stream
Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.217.39.44
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.217.39.44|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3962 (3.9K) [application/octet-stream]
Saving to: ‘components.yaml’
 
100%[===========================================================================================>] 3,962       11.0KB/s   in 0.4s  
 
2021-01-14 23:54:35 (11.0 KB/s) - ‘components.yaml’ saved [3962/3962]
 
[root@master01 metrics-server]# ls
components.yaml
Edit the deployment manifest

Note: in the Deployment, add the --kubelet-insecure-tls flag to spec.template.spec.containers.args so that the kubelets' serving certificates are not verified. The manifest runs metrics-server as a Pod through a Deployment, grants the metrics-server ServiceAccount read-only access to pod and node resources, and registers metrics.k8s.io/v1beta1 with the core apiserver so that client requests for resources under metrics.k8s.io are routed to the metrics-server Service.

    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls    # add this line
        image: k8s.gcr.io/metrics-server/metrics-server:v0.4.0
        imagePullPolicy: IfNotPresent
Apply the manifest
[root@master01 metrics-server]# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
[root@master01 metrics-server]#
Verify

Verify: does the core apiserver now expose metrics.k8s.io/v1beta1?

[root@master01 metrics-server]# kubectl api-versions|grep metrics
metrics.k8s.io/v1beta1
[root@master01 metrics-server]#

Note: the metrics.k8s.io/v1beta1 API group is now registered with the core apiserver.

Is the metrics-server Pod running properly?

[root@k8s-master-01 metrics-server]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS         AGE
coredns-65c54cc984-dj5jg                1/1     Running   5 (5d20h ago)    7d
coredns-65c54cc984-nl5vf                1/1     Running   5 (5d20h ago)    7d
etcd-k8s-master-01                      1/1     Running   6 (5d20h ago)    7d
kube-apiserver-k8s-master-01            1/1     Running   5 (5d20h ago)    7d
kube-controller-manager-k8s-master-01   1/1     Running   9 (4d15h ago)    7d
kube-proxy-kq554                        1/1     Running   4 (5d20h ago)    6d23h
kube-proxy-rjf8m                        1/1     Running   3 (5d20h ago)    6d23h
kube-proxy-xkxpg                        1/1     Running   5 (5d20h ago)    7d
kube-scheduler-k8s-master-01            1/1     Running   10 (4d15h ago)   7d
metrics-server-f88b98cd6-fdv5k          1/1     Running   0                15m

Note: the image k8s.gcr.io/metrics-server/metrics-server:v0.4.0 usually cannot be pulled directly; pull it through a proxy and then import it onto the nodes.

# The image referenced in the Deployment must be pulled from behind a proxy
docker pull k8s.gcr.io/metrics-server/metrics-server:v0.4.0

# I pulled the image locally over a proxy, saved it, and loaded it onto the nodes
docker save -o metrics-server-image.tar  k8s.gcr.io/metrics-server/metrics-server:v0.4.0
docker load -i metrics-server-image.tar 

# Alternatively, pull the copy I pushed to Docker Hub
docker pull beyondyinjl/metrics-server:v0.4.0
# and retag it to the expected name
docker tag beyondyinjl/metrics-server:v0.4.0 k8s.gcr.io/metrics-server/metrics-server:v0.4.0
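
Another option, rather than retagging on every node, is to point components.yaml straight at a mirror before applying it; a sketch assuming the beyondyinjl/metrics-server:v0.4.0 copy mentioned above:

sed -i 's#k8s.gcr.io/metrics-server/metrics-server:v0.4.0#beyondyinjl/metrics-server:v0.4.0#' components.yaml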

Verify: run kubectl top and check whether the CPU and memory usage of the Pods is now reported:

[root@k8s-master-01 metrics-server]# kubectl top  pods -n kube-system
NAME                                    CPU(cores)   MEMORY(bytes)
coredns-65c54cc984-dj5jg                2m           23Mi
coredns-65c54cc984-nl5vf                2m           22Mi
etcd-k8s-master-01                      26m          62Mi
kube-apiserver-k8s-master-01            63m          278Mi
kube-controller-manager-k8s-master-01   36m          50Mi
kube-proxy-kq554                        1m           24Mi
kube-proxy-rjf8m                        1m           24Mi
kube-proxy-xkxpg                        1m           24Mi
kube-scheduler-k8s-master-01            5m           20Mi
metrics-server-f88b98cd6-fdv5k          5m           13Mi

Note: kubectl top now works, so metrics-server was deployed successfully.

That is an example of extending Kubernetes with an APIService resource backed by a custom apiserver. In short, the APIService resource creates the routing information on the aggregator so that requests for the corresponding endpoints are forwarded to the Service behind the custom apiserver.

Install NFS

Server side
# Install
yum install nfs-utils
# Enable at boot
systemctl enable nfs
# Start the service
systemctl start nfs
# Create the shared directory
mkdir /data
# Set directory permissions
chmod -R 755 /data
# Edit the exports file (note: no space between the client address and the options)
vi /etc/exports
/data [client IP or CIDR](rw,sync,no_root_squash)
# Restart
systemctl restart nfs
systemctl restart rpcbind
# List the exports
showmount -e localhost

Access options

  • ro: export the directory read-only
  • rw: export the directory read-write

User mapping options

  • all_squash: map all remote users and groups to the anonymous user/group (nfsnobody);
  • no_all_squash: the opposite of all_squash (the default);
  • root_squash: map the remote root user and its group to the anonymous user/group (the default);
  • no_root_squash: the opposite of root_squash;
  • anonuid=xxx: map all remote users to an anonymous user with the given local UID;
  • anongid=xxx: map all remote groups to an anonymous group with the given local GID;

Other options

  • secure: only accept client connections from TCP/IP ports below 1024 (the default);
  • insecure: allow client connections from ports above 1024;
  • sync: write data to the memory buffer and to disk synchronously; slower, but guarantees consistency;
  • async: keep data in the memory buffer and write it to disk only when necessary;
  • wdelay: check for related writes and commit them together, which improves efficiency (the default);
  • no_wdelay: commit writes immediately; use together with sync;
  • subtree: if the exported directory is a subdirectory, the NFS server checks the parent directory's permissions (the default);
  • no_subtree: do not check the parent directory's permissions even when exporting a subdirectory; improves efficiency;
Client side
# Install the NFS utilities
yum -y install nfs-utils
# Start the RPC service
systemctl start rpcbind
# List the server's exports
showmount -e [server IP]
# Create a mount point
mkdir /data
# Mount the server's export
mount -t nfs [server IP]:/data /data
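
To consume the share from Kubernetes, a statically provisioned PersistentVolume plus a PersistentVolumeClaim is the simplest route. A minimal sketch (the names, the 5Gi size and the use of 192.168.152.23 as the NFS server are assumptions for illustration):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv                 # hypothetical name
spec:
  capacity:
    storage: 5Gi               # assumed size
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.152.23     # assumed to be the NFS server configured above
    path: /data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc                # hypothetical name
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""         # bind to the statically provisioned PV above
  resources:
    requests:
      storage: 5Gi
EOF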

controller

Replication Controller and ReplicaSet

Replication Controller (RC) and ReplicaSet (RS) are two simple ways to deploy Pods. In production, Pods are mostly managed and deployed with higher-level resources such as Deployment.

  • Replication Controller
  • ReplicaSet

Replication Controller

A Replication Controller (RC) makes sure the number of Pod replicas matches the desired count defined in the RC. In other words, it ensures that a Pod, or a homogeneous group of Pods, is always available.

If there are more Pods than the desired count, the Replication Controller terminates the extras; if there are too few, it starts more until the desired count is reached. Unlike manually created Pods, Pods maintained by a Replication Controller are replaced automatically when they fail, get deleted or are terminated, so even an application that only needs a single Pod should still be managed by a Replication Controller or something similar. A Replication Controller is much like a process supervisor, except that instead of watching individual processes on a single node it watches multiple Pods across multiple nodes.

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

ReplicaSet

ReplicaSet is the next-generation Replication Controller with support for set-based label selectors. It is mainly used by Deployment to coordinate Pod creation, deletion and updates; the only difference from Replication Controller is the selector support. Although a ReplicaSet can be used on its own, it is generally recommended to let a Deployment manage ReplicaSets automatically, unless the Pods need no updates or other orchestration.
A sample ReplicaSet definition:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
      - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below.
          # value: env
        ports:
        - containerPort: 80

Take a look at a Deployment automatically managing a ReplicaSet

[root@k8s-master-01 ~]# kubectl get deploy -n kube-system
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
metrics-server            1/1     1            1           2d
[root@k8s-master-01 ~]# kubectl get deploy -n kube-system metrics-server -oyaml
message: ReplicaSet "metrics-server-64c6c494dc" has successfully progressed.

View the ReplicaSet

[root@k8s-master-01 ~]# kubectl get rs -n kube-system
NAME                                DESIRED   CURRENT   READY   AGE
metrics-server-64c6c494dc           1         1         1       2d

If we change a parameter and perform a rolling upgrade, a new RS is generated; that RS can be rolled back to, whereas an RC does not support rollback. In practice we use higher-level resources such as Deployment and DaemonSet to manage the RC/RS, and the RS in turn manages the Pods.

Creating and deleting a Replication Controller or ReplicaSet is not much different from creating and deleting a Pod. Replication Controller is hardly ever used in production any more, and a ReplicaSet is rarely used on its own; Pods are managed through the higher-level resources Deployment, DaemonSet and StatefulSet.

Stateless services: the Deployment concept

Deployment is used to deploy stateless services and is the most commonly used controller. It typically manages an organization's stateless microservices, such as configserver, zuul or Spring Boot applications. It manages multiple Pod replicas and provides seamless migration, automatic scale-up and scale-down, automatic disaster recovery, one-click rollback, and more.

Creating a Deployment

Create a namespace

[root@k8s-master-01 ~]# kubectl create ns test
namespace/test created

Create it manually

[root@k8s-master-01 ~]# kubectl create deploy nginx -n test --image=nginx
deployment.apps/nginx created

View the Pods

[root@k8s-master-01 ~]# kubectl get pods -n test
NAME                     READY   STATUS    RESTARTS   AGE
nginx-85b98978db-g95v8   1/1     Running   0          2m28s

Export it to nginx-deploy.yaml

[root@k8s-master-01 ~]# kubectl get deploy nginx -n test -o yaml > nginx-deploy.yaml
[root@k8s-master-01 ~]# cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2022-08-01T08:03:08Z"
  generation: 1
  labels:
    app: nginx
  name: nginx
  namespace: test
  resourceVersion: "418580"
  uid: c3fbd329-8dcc-46c4-a8f0-6f2c6a3bceee
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2022-08-01T08:03:12Z"
    lastUpdateTime: "2022-08-01T08:03:12Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-08-01T08:03:08Z"
    lastUpdateTime: "2022-08-01T08:03:12Z"
    message: ReplicaSet "nginx-85b98978db" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

Change the replica count

[root@k8s-master-01 ~]# kubectl scale deploy nginx -n test --replicas=2
deployment.apps/nginx scaled

View the replicas

[root@k8s-master-01 ~]# kubectl get pods -n test
NAME                     READY   STATUS    RESTARTS   AGE
nginx-85b98978db-4cktc   1/1     Running   0          92s
nginx-85b98978db-g95v8   1/1     Running   0          5m7s

You can also change the replicas field directly in nginx-deploy.yaml, or use kubectl edit:

[root@k8s-master-01 ~]# vi nginx-deploy.yaml 
# set the replica count back to 1
replicas: 1
[root@k8s-master-01 ~]# kubectl edit deploy nginx
# set the replica count back to 1
replicas: 1

View the replicas

[root@k8s-master-01 ~]# kubectl get pods -n test
NAME                     READY   STATUS    RESTARTS   AGE
nginx-66bbc9fdc5-vtk4n   1/1     Running   0          19m

View the file

[root@k8s-master-01 ~]# cat nginx-deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2021-07-22T08:50:24Z"
  generation: 1
  labels: # labels of the Deployment itself
    app: nginx
  name: nginx
  namespace: default
  resourceVersion: "1439468"
  uid: f6659adb-7b49-48a5-8db6-fbafa6baa1d7
spec:
  progressDeadlineSeconds: 600
  replicas: 2 # number of replicas
  revisionHistoryLimit: 10 # how many old revisions (ReplicaSets) to keep
  selector:
    matchLabels:
      app: nginx # must match the Pod labels below, otherwise the Deployment cannot manage the Pods; it is also what the RS matches. It cannot be changed after creation; a change would produce a new RS that no longer matches the old label
  strategy:
    rollingUpdate: # rolling update strategy
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template: # Pod template
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.2
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

View the Deployment's labels

[root@k8s-master-01 ~]# kubectl get deploy --show-labels -n test
NAME    READY   UP-TO-DATE   AVAILABLE   AGE     LABELS
nginx   1/1     1            1           8m13s   app=nginx

What the columns mean

[root@k8s-master-01 ~]# kubectl get deploy -n test -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES   SELECTOR
nginx   1/1     1            1           8m40s   nginx        nginx    app=nginx
  • NAME: the Deployment name
  • READY: Pod status; the number of Pods that are Ready
  • UP-TO-DATE: the number of replicas that have been updated to the desired state
  • AVAILABLE: the number of replicas already available
  • AGE: how long the application has been running
  • CONTAINERS: container names
  • IMAGES: container images
  • SELECTOR: the label selector for the managed Pods

Updating a Deployment

Only a change to the template section under spec triggers a new rollout.

Check the image version

[root@k8s-master-01 ~]# kubectl get deploy nginx -n test -oyaml |grep image
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"deployment.kubernetes.io/revision":"1"},"creationTimestamp":"2022-08-01T08:03:08Z","generation":2,"labels":{"app":"nginx"},"name":"nginx","namespace":"test","resourceVersion":"418580","uid":"c3fbd329-8dcc-46c4-a8f0-6f2c6a3bceee"},"spec":{"progressDeadlineSeconds":600,"replicas":1,"revisionHistoryLimit":10,"selector":{"matchLabels":{"app":"nginx"}},"strategy":{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"},"type":"RollingUpdate"},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"nginx"}},"spec":{"containers":[{"image":"nginx","imagePullPolicy":"Always","name":"nginx","resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30}}},"status":{"availableReplicas":1,"conditions":[{"lastTransitionTime":"2022-08-01T08:03:12Z","lastUpdateTime":"2022-08-01T08:03:12Z","message":"Deployment has minimum availability.","reason":"MinimumReplicasAvailable","status":"True","type":"Available"},{"lastTransitionTime":"2022-08-01T08:03:08Z","lastUpdateTime":"2022-08-01T08:03:12Z","message":"ReplicaSet \"nginx-85b98978db\" has successfully progressed.","reason":"NewReplicaSetAvailable","status":"True","type":"Progressing"}],"observedGeneration":1,"readyReplicas":1,"replicas":1,"updatedReplicas":1}}
      - image: nginx
        imagePullPolicy: Always

Change the Deployment's image and record the change

[root@k8s-master-01 ~]# kubectl set image deploy nginx -n test nginx=nginx:1.9.8 --record=true
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx image updated

Check the version after the update

[root@k8s-master-01 ~]# kubectl get deploy nginx -n test -oyaml | grep image
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"deployment.kubernetes.io/revision":"1"},"creationTimestamp":"2022-08-01T08:03:08Z","generation":2,"labels":{"app":"nginx"},"name":"nginx","namespace":"test","resourceVersion":"418580","uid":"c3fbd329-8dcc-46c4-a8f0-6f2c6a3bceee"},"spec":{"progressDeadlineSeconds":600,"replicas":1,"revisionHistoryLimit":10,"selector":{"matchLabels":{"app":"nginx"}},"strategy":{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"},"type":"RollingUpdate"},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"nginx"}},"spec":{"containers":[{"image":"nginx","imagePullPolicy":"Always","name":"nginx","resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30}}},"status":{"availableReplicas":1,"conditions":[{"lastTransitionTime":"2022-08-01T08:03:12Z","lastUpdateTime":"2022-08-01T08:03:12Z","message":"Deployment has minimum availability.","reason":"MinimumReplicasAvailable","status":"True","type":"Available"},{"lastTransitionTime":"2022-08-01T08:03:08Z","lastUpdateTime":"2022-08-01T08:03:12Z","message":"ReplicaSet \"nginx-85b98978db\" has successfully progressed.","reason":"NewReplicaSetAvailable","status":"True","type":"Progressing"}],"observedGeneration":1,"readyReplicas":1,"replicas":1,"updatedReplicas":1}}
    kubernetes.io/change-cause: kubectl set image deploy nginx nginx=nginx:1.9.8 --namespace=test
      - image: nginx:1.9.8
        imagePullPolicy: Always

Watch the rollout

[root@k8s-master-01 ~]# kubectl rollout status deploy  nginx -n test
deployment "nginx" successfully rolled out

Or inspect it with describe:

[root@k8s-master-01 ~]# kubectl describe deploy nginx -n test
Name:                   nginx
Namespace:              test
CreationTimestamp:      Mon, 01 Aug 2022 16:03:08 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 3
                        kubernetes.io/change-cause: kubectl set image deploy nginx nginx=nginx:1.9.8 --namespace=test --record=true
Selector:               app=nginx
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.9.8
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-5476577fb (1/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  15m    deployment-controller  Scaled up replica set nginx-85b98978db to 1
  Normal  ScalingReplicaSet  11m    deployment-controller  Scaled up replica set nginx-85b98978db to 2
  Normal  ScalingReplicaSet  7m10s  deployment-controller  Scaled down replica set nginx-85b98978db to 1
  Normal  ScalingReplicaSet  40s    deployment-controller  Scaled up replica set nginx-5476577fb to 1
  Normal  ScalingReplicaSet  27s    deployment-controller  Scaled down replica set nginx-c9b45cff to 0


View the ReplicaSets

[root@k8s-master-01 ~]# kubectl get rs -n test
NAME               DESIRED   CURRENT   READY   AGE
nginx-5476577fb    1         1         1       2m20s
nginx-85b98978db   0         0         0       17m

The rolling update strategy works like this: a new RS is started with its replica count set to 1, then one old Pod is removed, then another new one is started, and so on.

View the rolling update strategy configuration

[root@k8s-master-01 ~]# vim nginx-deploy.yaml 
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
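
As a worked example of the two fields: with replicas: 4, maxSurge: 25% and maxUnavailable: 25%, 25% of 4 is 1 Pod for each, so during a rollout there are at most 5 Pods in total (4 + 1 surging) and at least 3 must stay available (4 - 1 unavailable).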

Rolling back a Deployment

Update the Deployment image

# update to an image tag that does not exist
[root@k8s-master-01 ~]# kubectl set image deploy nginx -n test nginx=nginx:asd1wda --record=true
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx image updated

[root@k8s-master-01 ~]# kubectl get pods -n test
NAME                     READY   STATUS              RESTARTS   AGE
nginx-5476577fb-rbq7z    1/1     Running             0          4m31s
nginx-57bdb7cd9f-s28d9   0/1     ContainerCreating   0          17s
...
# because this tag does not exist, the new Pod ends up in an error state
[root@k8s-master-01 ~]# kubectl get pods -n test
NAME                     READY   STATUS             RESTARTS   AGE
nginx-5476577fb-rbq7z    1/1     Running            0          5m18s
nginx-57bdb7cd9f-s28d9   0/1     ImagePullBackOff   0          64s

View the revision history

[root@k8s-master-01 ~]# kubectl rollout history deploy nginx -n test
deployment.apps/nginx
REVISION  CHANGE-CAUSE
1         <none>
2         kubectl set image deploy nginx nginx=nginx:1.9.8 --namespace=test --record=true
3         kubectl set image deploy nginx nginx=nginx:asd1wda --namespace=test --record=true

Roll back to the previous revision

[root@k8s-master-01 ~]# kubectl rollout undo deploy nginx -n test
deployment.apps/nginx rolled back

View the Pods; only one is left

[root@k8s-master-01 ~]# kubectl get pods -n test
NAME                    READY   STATUS    RESTARTS   AGE
nginx-5476577fb-rbq7z   1/1     Running   0          7m46s

Make several more updates

[root@k8s-master-01 ~]# kubectl set image deploy nginx -n test nginx=nginx:aaa111 --record=true
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx image updated
[root@k8s-master-01 ~]# kubectl set image deploy nginx -n test nginx=nginx:aaa222 --record=true
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx image updated
[root@k8s-master-01 ~]# kubectl set image deploy nginx -n test nginx=nginx:aaa333 --record=true
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx image updated

View the history

[root@k8s-master-01 ~]# kubectl rollout history deploy nginx -n test
deployment.apps/nginx
REVISION  CHANGE-CAUSE
1         <none>
3         kubectl set image deploy nginx nginx=nginx:asd1wda --namespace=test --record=true
4         kubectl set image deploy nginx nginx=nginx:1.9.8 --namespace=test --record=true
5         kubectl set image deploy nginx nginx=nginx:aaa111 --namespace=test --record=true
6         kubectl set image deploy nginx nginx=nginx:aaa222 --namespace=test --record=true
7         kubectl set image deploy nginx nginx=nginx:aaa333 --namespace=test --record=true

View the details of a specific revision

[root@k8s-master-01 ~]# kubectl rollout history deploy nginx -n test --revision=4
deployment.apps/nginx with revision #4
Pod Template:
  Labels:       app=nginx
        pod-template-hash=5476577fb
  Annotations:  kubernetes.io/change-cause: kubectl set image deploy nginx nginx=nginx:1.9.8 --namespace=test --record=true
  Containers:
   nginx:
    Image:      nginx:1.9.8
    Port:       <none>
    Host Port:  <none>
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>

Roll back to a specific revision

[root@k8s-master-01 ~]# kubectl rollout undo deploy nginx -n test --to-revision=4
deployment.apps/nginx rolled back

Check the Deployment status

[root@k8s-master-01 ~]# kubectl get pods -n test
NAME                    READY   STATUS    RESTARTS   AGE
nginx-5476577fb-rbq7z   1/1     Running   0          14m
# dump the Deployment as YAML for full details
[root@k8s-master-01 ~]# kubectl get deploy -oyaml

Scaling a Deployment up and down

Scale up

[root@k8s-master-01 ~]# kubectl scale deploy nginx -n test --replicas=3
deployment.apps/nginx scaled


[root@k8s-master-01 ~]# kubectl get pods -n test -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE   READINESS GATES
nginx-5476577fb-9tlwg   1/1     Running   0          36s   10.244.1.25   k8s-node-01   <none>           <none>
nginx-5476577fb-n8jtk   1/1     Running   0          36s   10.244.2.23   k8s-node-02   <none>           <none>
nginx-5476577fb-rbq7z   1/1     Running   0          17m   10.244.1.23   k8s-node-01   <none>           <none>

Scale down

[root@k8s-master-01 ~]# kubectl scale deploy nginx -n test --replicas=1
deployment.apps/nginx scaled
[root@k8s-master-01 ~]# kubectl get pods -n test -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE   READINESS GATES
nginx-5476577fb-n8jtk   1/1     Running   0          63s   10.244.2.23   k8s-node-02   <none>           <none>

Pausing and resuming Deployment updates

With kubectl edit you can change several settings and roll them out in a single update, but every kubectl set command triggers its own rollout. How do you batch several set changes into one rollout? Use the Deployment pause feature:

[root@k8s-master-01 ~]# kubectl rollout pause deployment nginx -n test
deployment.apps/nginx paused

Change the configuration with set commands

[root@k8s-master-01 ~]# kubectl set image deploy nginx -n test nginx=nginx:1.9.9 --record=true
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx image updated

# second change: add CPU and memory settings

[root@k8s-master-01 ~]# kubectl set resources deploy nginx  -n test -c nginx --limits=cpu=200m,memory=128Mi --requests=cpu=10m,memory=16Mi
deployment.apps/nginx resource requirements updated

View the Deployment

[root@k8s-master-01 ~]# kubectl get deploy nginx -n test -oyaml | grep -8 requests
      containers:
      - image: nginx:1.9.9
        imagePullPolicy: Always
        name: nginx
        resources:
          limits: # maximum CPU and memory the container may use
            cpu: 200m
            memory: 128Mi
          requests: # minimum CPU and memory requested for scheduling
            cpu: 10m
            memory: 16Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
            

Check whether the Pod was updated

[root@k8s-master-01 ~]# kubectl get pods -n test
NAME                    READY   STATUS    RESTARTS   AGE
nginx-5476577fb-n8jtk   1/1     Running   0          6m19s

The Pod has not been updated, because the rollout is still paused.

Resume the rollout

[root@k8s-master-01 ~]# kubectl rollout resume deploy nginx -n test
deployment.apps/nginx resumed

View the ReplicaSets

[root@k8s-master-01 ~]# kubectl get rs -n test
NAME               DESIRED   CURRENT   READY   AGE
nginx-5476577fb    1         1         1       23m
nginx-55d65f6d5b   0         0         0       15m
nginx-57bdb7cd9f   0         0         0       19m
nginx-665bb84465   1         1         0       17s
nginx-6db95d98db   0         0         0       15m
nginx-7fd4b54567   0         0         0       15m
nginx-85b98978db   0         0         0       38m
nginx-c9b45cff     0         0         0       25m

[root@k8s-master-01 ~]# kubectl get pods -n test
NAME                     READY   STATUS    RESTARTS   AGE
nginx-665bb84465-spw2p   1/1     Running   0          48s

A new nginx RS was created 17s ago; once the rollout was resumed, the new Pods could be created.

Notes on Deployment updates
View the Deployment

[root@k8s-master-01 ~]# kubectl get deploy nginx -oyaml -n test
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "11"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"deployment.kubernetes.io/revision":"1"},"creationTimestamp":"2022-08-01T08:03:08Z","generation":2,"labels":{"app":"nginx"},"name":"nginx","namespace":"test","resourceVersion":"418580","uid":"c3fbd329-8dcc-46c4-a8f0-6f2c6a3bceee"},"spec":{"progressDeadlineSeconds":600,"replicas":1,"revisionHistoryLimit":10,"selector":{"matchLabels":{"app":"nginx"}},"strategy":{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"},"type":"RollingUpdate"},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"nginx"}},"spec":{"containers":[{"image":"nginx","imagePullPolicy":"Always","name":"nginx","resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30}}},"status":{"availableReplicas":1,"conditions":[{"lastTransitionTime":"2022-08-01T08:03:12Z","lastUpdateTime":"2022-08-01T08:03:12Z","message":"Deployment has minimum availability.","reason":"MinimumReplicasAvailable","status":"True","type":"Available"},{"lastTransitionTime":"2022-08-01T08:03:08Z","lastUpdateTime":"2022-08-01T08:03:12Z","message":"ReplicaSet \"nginx-85b98978db\" has successfully progressed.","reason":"NewReplicaSetAvailable","status":"True","type":"Progressing"}],"observedGeneration":1,"readyReplicas":1,"replicas":1,"updatedReplicas":1}}
    kubernetes.io/change-cause: kubectl set image deploy nginx nginx=nginx:1.9.9 --namespace=test
      --record=true
  creationTimestamp: "2022-08-01T08:03:08Z"
  generation: 18
  labels:
    app: nginx
  name: nginx
  namespace: test
  resourceVersion: "422165"
  uid: c3fbd329-8dcc-46c4-a8f0-6f2c6a3bceee
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10 # how many old RS revisions to keep; 0 keeps no history
  selector:
    matchLabels:
      app: nginx
  strategy: # update strategy
    rollingUpdate:
      maxSurge: 25% # maximum number of Pods allowed above the desired replica count; optional, default 25%; a number or a percentage; if 0, maxUnavailable must not be 0
      maxUnavailable: 25% # maximum number of Pods that may be unavailable during an update or rollback; optional, default 25%; a number or a percentage; if 0, maxSurge must not be 0
    type: RollingUpdate  # how the Deployment is updated; defaults to RollingUpdate, which supports maxSurge and maxUnavailable
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.9.9
        imagePullPolicy: Always
        name: nginx
        resources:
          limits:
            cpu: 200m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 16Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2022-08-01T08:34:53Z"
    lastUpdateTime: "2022-08-01T08:34:53Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-08-01T08:41:22Z"
    lastUpdateTime: "2022-08-01T08:41:39Z"
    message: ReplicaSet "nginx-665bb84465" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 18
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
  

.spec.minReadySeconds: optional; the minimum number of seconds a newly created Pod must be ready, without any of its containers crashing, before it is considered available. Defaults to 0, i.e. the Pod is considered available as soon as it is ready.

.spec.strategy.type Recreate: recreate; the old Pods are deleted first, then the new Pods are created.
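
If you want to try the Recreate strategy on the test Deployment above, one way (a sketch, not something done in these notes) is to patch it, clearing the rollingUpdate block at the same time since it only applies to RollingUpdate:

kubectl -n test patch deploy nginx -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'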

Stateful application management: the StatefulSet concept

StatefulSet basics
StatefulSet caveats
StatefulSet (sts) is commonly used for stateful applications that must start in order. For example, when containerizing a Spring Cloud project, Eureka is a good fit for a StatefulSet: each Eureka instance gets a unique, stable identifier, no extra Service needs to be configured per instance, and the other Spring Boot applications can register directly through Eureka's headless Service.

  • If the Eureka StatefulSet resource is named eureka, the Pods are eureka-0, eureka-1, eureka-2
  • Service: a headless Service with no ClusterIP, e.g. eureka-svc
  • Eureka-0.eureka-svc.NAMESPACE_NAME eureka-1.eureka-svc …

StatefulSet basics

StatefulSet is the workload API object mainly used to manage stateful applications. In production it can be used to deploy Elasticsearch clusters, MongoDB clusters, or RabbitMQ, Redis, Kafka and ZooKeeper clusters that require persistence.

Like a Deployment, a StatefulSet manages Pods based on the same container spec. The difference is that a StatefulSet maintains a sticky identity for each Pod: the Pods are created from the same spec but are not interchangeable, and each keeps a persistent identifier across rescheduling, normally of the form StatefulSetName-Number. For example, a StatefulSet named Redis-Sentinel with three replicas creates Pods named Redis-Sentinel-0, Redis-Sentinel-1 and Redis-Sentinel-2. Pods created by a StatefulSet usually communicate through a headless Service, which differs from a normal Service in that it has no ClusterIP and relies on Endpoints for communication; the headless DNS name generally has the form:

statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local

  • serviceName is the name of the headless Service; it must be specified when the StatefulSet is created;

  • 0..N-1 is the ordinal of the Pod, from 0 to N-1;

  • statefulSetName is the name of the StatefulSet;

  • namespace is the namespace the Service lives in;

  • cluster.local is the cluster domain.

Suppose a project needs a master/slave Redis deployed in Kubernetes; a StatefulSet is a very good fit, because when a StatefulSet starts, the next container is only scheduled once the previous one is fully up, and every container's identifier is fixed, so the identifier can be used to decide the role of the current Pod.

For example, deploy the master/slave Redis with a StatefulSet named redis-ms. When the first container starts, its identifier is redis-ms-0 and the hostname inside the Pod is also redis-ms-0, so the role can be decided from the hostname: the container whose hostname is redis-ms-0 acts as the Redis master and the others act as slaves. The slaves can then point at the master through its never-changing headless Service entry, so a Redis slave's configuration looks like this:

port 6379
slaveof redis-ms-0.redis-ms.public-service.svc.cluster.local 6379
tcp-backlog 511
timeout 0
tcp-keepalive 0
……

Here redis-ms-0.redis-ms.public-service.svc.cluster.local is the headless Service entry of the Redis master; within the same namespace redis-ms-0.redis-ms is enough, and the public-service.svc.cluster.local suffix can be omitted.

StatefulSet caveats

A StatefulSet is generally used for applications with one or more of the following requirements:

  • a stable, unique network identifier.
  • persistent storage.
  • ordered, graceful deployment and scaling.
  • ordered, automated rolling updates.

If the application needs no stable identifier and no ordered deployment, deletion or scaling, it should be deployed with a stateless controller such as Deployment or ReplicaSet.

StatefulSet was a beta resource before Kubernetes 1.9 and is not available in any Kubernetes version before 1.5.

The storage used by the Pods must either be provisioned by a PersistentVolume provisioner from the requested StorageClass or be pre-provisioned by an administrator; running without storage is also possible.

To keep data safe, deleting or scaling down a StatefulSet does not delete the volumes associated with it; the PVCs and PVs can be removed manually and selectively.

A StatefulSet currently relies on a headless Service for the network identity and communication of its Pods; that Service must be created beforehand.

Deleting a StatefulSet gives no guarantee about orderly Pod termination; to terminate the Pods gracefully and in order, scale the StatefulSet down to 0 before deleting it.

Create a StatefulSet application

-	Define a StatefulSet resource file
-	Create the StatefulSet

Define a StatefulSet resource file

[root@k8s-master-01 ~]# vim nginx-sts.yaml
# add the following content
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx" # StatefulSet必须配置一个serviceName,它指向已经存在的service,上面定义
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.9
        ports:
        - containerPort: 80
          name: web
  • kind: Service defines a headless Service named nginx; the DNS entries created for the Pods have the form web-0.nginx.default.svc.cluster.local, and so on. No namespace is specified, so it lands in default.

  • kind: StatefulSet defines a StatefulSet named web; replicas is the number of Pod replicas, 2 in this example.

A StatefulSet must set a Pod selector (.spec.selector) that matches the labels of its Pod template (.spec.template.metadata.labels). Before 1.8 the field was given a default value when omitted; from 1.8 on, omitting a matching Pod selector makes StatefulSet creation fail.

When the StatefulSet controller creates a Pod, it adds the label statefulset.kubernetes.io/pod-name, whose value is the Pod name; it can be used to match a Service to an individual Pod.

Create the StatefulSet

[root@k8s-master-01 ~]# kubectl create -f nginx-sts.yaml
service/nginx created
statefulset.apps/web created

View the Pods

[root@k8s-master-01 ~]# kubectl get sts,pods -n test -owide
NAME                   READY   AGE     CONTAINERS   IMAGES
statefulset.apps/web   2/2     3m28s   nginx        nginx:1.9.9

NAME                         READY   STATUS    RESTARTS   AGE     IP            NODE          NOMINATED NODE   READINESS GATES
pod/web-0                    1/1     Running   0          3m28s   10.244.2.24   k8s-node-02   <none>           <none>
pod/web-1                    1/1     Running   0          3m25s   10.244.1.27   k8s-node-01   <none>           <none>

View the Service

[root@k8s-master-01 ~]# kubectl get svc -n test
NAME    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   None         <none>        80/TCP    4m44s

Scale the StatefulSet up

[root@k8s-master-01 ~]# kubectl scale sts web -n test --replicas=3
statefulset.apps/web scaled

View the Pods

[root@k8s-master-01 ~]# kubectl get sts,pods -n test -owide
NAME                   READY   AGE     CONTAINERS   IMAGES
statefulset.apps/web   3/3     4m41s   nginx        nginx:1.9.9

NAME                         READY   STATUS    RESTARTS   AGE     IP            NODE          NOMINATED NODE   READINESS GATES
pod/web-0                    1/1     Running   0          4m41s   10.244.2.24   k8s-node-02   <none>           <none>
pod/web-1                    1/1     Running   0          4m38s   10.244.1.27   k8s-node-01   <none>           <none>
pod/web-2                    1/1     Running   0          4s      10.244.2.25   k8s-node-02   <none>           <none>

Create a busybox Pod to resolve the headless Service

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

Verify the StatefulSet

[root@k8s-master-01 ~]# kubectl exec -it busybox -- sh
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ # nslookup web-0.nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'web-0.nginx'
/ # exit

Get the Pod IPs

[root@k8s-master-01 ~]# kubectl get pods -n test -owide
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE          NOMINATED NODE   READINESS GATES
nginx-665bb84465-spw2p   1/1     Running   0          23m     10.244.1.26   k8s-node-01   <none>           <none>
web-0                    1/1     Running   0          7m25s   10.244.2.24   k8s-node-02   <none>           <none>
web-1                    1/1     Running   0          7m22s   10.244.1.27   k8s-node-01   <none>           <none>
web-2                    1/1     Running   0          2m48s   10.244.2.25   k8s-node-02   <none>           <none>
[root@k8s-master-01 ~]# kubectl get pods  -owide
NAME      READY   STATUS    RESTARTS   AGE    IP            NODE          NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          108s   10.244.1.28   k8s-node-01   <none>           <none>

The headless Service resolves the name straight to the Pod IP, so clients talk to the Pod directly instead of going through a Service proxy, which removes one hop and performs better; that is why no ClusterIP is needed. Note that the lookup above failed because busybox runs in the default namespace while the web StatefulSet and its nginx Service live in test; from another namespace the fully qualified name web-0.nginx.test.svc.cluster.local must be used.

Pod

Pod overview

A Pod is the smallest unit that can be created and managed in Kubernetes, the smallest resource object a user creates or deploys in the resource model, and the resource object on which containerized applications run. All other resource objects exist to support or extend Pods: controllers manage Pods, Service and Ingress expose Pods, PersistentVolume provides storage for Pods, and so on. Kubernetes does not deal with containers directly but with Pods, and a Pod consists of one or more containers. The Pod is the most important concept in Kubernetes: every Pod has a special "root container" called the pause container, whose image is part of the Kubernetes platform; besides the pause container, each Pod contains one or more closely related user containers.

Pod basics

  • the smallest deployable unit
  • a Pod is made up of one or more containers (a group of containers)
  • the containers in a Pod share a network namespace
  • Pods are ephemeral
  • each Pod contains one or more closely related user containers

Why Pods exist

  • with Docker alone, one Docker container runs one application process
  • a Pod is a multi-process design: it runs several applications, i.e. one Pod holds several containers and each container runs one application
  • Pods exist for tightly coupled applications, where two applications interact and call each other frequently over the network (via 127.0.0.1 or a socket)

Two Pod mechanisms

  • the Pod network sharing mechanism

  • the (storage) sharing mechanism

Image pull policy

  • IfNotPresent: the default (for non-:latest tags); the image is pulled only if it is not already on the host
  • Always: the image is pulled every time the Pod is created
  • Never: the Pod never pulls the image on its own

Pod resource limits

When a Pod is scheduled, the resources it uses can be constrained. For example, if we say a Pod needs 2C4G, only that amount is reserved on the node it is scheduled to, and nodes that cannot satisfy the request are not considered for scheduling (a minimal example follows the list below).

  • requests: the resources needed for scheduling
  • limits: the maximum resources the Pod may use
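
A minimal sketch of these two fields on a Pod (the name and the concrete numbers are illustrative only):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo          # hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:                # what the scheduler reserves on the chosen node
        cpu: 100m
        memory: 128Mi
      limits:                  # hard cap enforced at runtime
        cpu: 500m
        memory: 256Mi
EOF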

Pod health checks

  • Liveness probe: checks whether the container is running (i.e. whether the Pod is in the Running state). If the liveness probe finds the Pod unhealthy, the kubelet kills it and the restart policy decides whether it is restarted. With no liveness probe configured, the result defaults to success.
  • Readiness probe: checks whether the container can serve traffic and accept requests (i.e. whether the Pod's condition is Ready). If the readiness probe fails, the Pod is removed from the Service's Endpoints until a later probe succeeds, after which it is added back.

The kubelet runs these probes on the container periodically and checks its health by invoking a handler implemented for the container. Kubernetes has three kinds of handlers:

  • exec: run a command inside the container; an exit code of 0 means the container is healthy.
  • httpGet: send an HTTP GET request; a status code in the 200-399 range means the container is healthy.
  • tcpSocket: try to open a TCP connection to the container's IP and port; if the connection can be established, the container is healthy (a sketch of this handler is added at the end of these notes).

A probe can end in one of three results:

  • Success: the container passed the check.
  • Failure: the container failed the check, and something is done about it: a failed Readiness check removes the Pod from the Service layer, while a failed Liveness check restarts or deletes the Pod.
  • Unknown: the check did not run to completion, for example because it timed out or a script did not return in time. Neither probe takes any action; the next scheduled check decides.

Pod container restart policy (pod.spec.restartPolicy):

  • Always: when a container in the Pod terminates, restart it no matter whether it exited successfully or failed (the default).
  • OnFailure: restart only when a container in the Pod terminates with a failure.
  • Never: never restart, regardless of whether the container exited successfully or failed.
[root@k8s-master-01 ~]# kubectl run busybox --image=busybox --dry-run=client -o yaml > healtz.yaml
# then tweak the generated file
[root@k8s-master-01 ~]# cat healtz.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - image: busybox
    name: busybox
    resources: {}
    args:
    - /bin/sh
    - -c
    - sleep 10; exit 1  # simulate a failure: the container exits with status code 1 ten seconds after starting
  dnsPolicy: ClusterFirst
  restartPolicy: OnFailure # changed from the default Always to OnFailure
status: {}

# apply the file
[root@k8s-master-01 ~]# kubectl apply -f healtz.yaml
pod/busybox created
# Watch the status: the container fails 10 seconds after each start and keeps being restarted, so the Pod eventually sits in CrashLoopBackOff
# Kubernetes restarts it with an exponential back-off delay (10s, 20s, 40s, ...), capped at 5 minutes
[root@k8s-master-01 ~]# kubectl get pods
NAME      READY   STATUS    RESTARTS      AGE
busybox   1/1     Running   2 (19s ago)   60s
[root@k8s-master-01 ~]# kubectl get pods
NAME      READY   STATUS    RESTARTS      AGE
busybox   1/1     Running   2 (23s ago)   64s
[root@k8s-master-01 ~]# kubectl get pods
NAME      READY   STATUS    RESTARTS      AGE
busybox   1/1     Running   2 (25s ago)   66s
[root@k8s-master-01 ~]# kubectl get pods
NAME      READY   STATUS   RESTARTS      AGE
busybox   0/1     Error    2 (32s ago)   73s


Liveness probes
exec handler
[root@k8s-master-01 ~]# cat  liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness
spec:
  restartPolicy: OnFailure
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10 # start probing 10 seconds after the container starts
      periodSeconds: 5   # probe again every 5 seconds
      
# initialDelaySeconds: how many seconds after the container starts before probing begins. A JAVA application, for example, may start slowly because of JVM startup and jar loading, so the probe needs to be delayed.
# periodSeconds: interval between probes, default 10 seconds.
# timeoutSeconds: probe timeout, default 1 second. If there is no success within the timeout, the probe counts as a failure.
# successThreshold: number of consecutive successes required after a failure before the probe is considered successful again. For example 2 means that after a failure, the next 2 probes must both succeed before the container counts as healthy.
# failureThreshold: number of retries on probe failure, default 3.

# how to verify: after creating the Pod, wait 30s; once /tmp/healthy is deleted, the liveness probe detects that the container is unhealthy and the container is restarted.
   
# apply the file
[root@k8s-master-01 ~]# kubectl apply -f liveness.yaml
pod/liveness created

# check the runtime details
[root@k8s-master-01 ~]# kubectl describe pods liveness
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m40s                default-scheduler  Successfully assigned default/liveness to k8s-node-01
  Normal   Pulled     3m22s                kubelet            Successfully pulled image "busybox" in 15.677244834s
  Normal   Pulled     114s                 kubelet            Successfully pulled image "busybox" in 15.487542975s
  Warning  Unhealthy  70s (x6 over 2m50s)  kubelet            Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
  Normal   Killing    70s (x2 over 2m40s)  kubelet            Container liveness failed liveness probe, will be restarted
  Normal   Pulling    40s (x3 over 3m38s)  kubelet            Pulling image "busybox"
  Normal   Created    24s (x3 over 3m22s)  kubelet            Created container liveness
  Normal   Started    24s (x3 over 3m22s)  kubelet            Started container liveness
  Normal   Pulled     24s                  kubelet            Successfully pulled image "busybox" in 15.466279986s

httpGet method
[root@k8s-master-01 ~]# cat livenessHttpGet.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness-http-get
  name: liveness-http-get
spec:
  containers:
  - name: liveness-http-get
    image: nginx
    ports:
    - name: http
      containerPort: 80
    command: ["/bin/sh","-c","sleep 30 ; rm -f /usr/share/nginx/html/index.html"]
    livenessProbe:
      httpGet:
        port: http
        path: /usr/share/nginx/html/index.html
      initialDelaySeconds: 10
      periodSeconds: 3
  restartPolicy: OnFailure
  
  # check the pod's details
  [root@k8s-master-01 ~]# kubectl describe pods liveness-http-get
  Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  84s                default-scheduler  Successfully assigned default/liveness-http-get to k8s-node-02
  Normal   Pulled     68s                kubelet            Successfully pulled image "nginx" in 15.529508444s
  Normal   Pulling    37s (x2 over 83s)  kubelet            Pulling image "nginx"
  Normal   Created    21s (x2 over 67s)  kubelet            Created container liveness-http-get
  Normal   Started    21s (x2 over 67s)  kubelet            Started container liveness-http-get
  Normal   Pulled     21s                kubelet            Successfully pulled image "nginx" in 15.823715625s
  Warning  Unhealthy  3s (x6 over 57s)   kubelet            Liveness probe failed: Get "http://10.244.2.29:80/usr/share/nginx/html/index.html": dial tcp 10.244.2.29:80: connect: connection refused
  Normal   Killing    3s (x2 over 51s)   kubelet            Container liveness-http-get failed liveness probe, will be restarted
readiness
[root@k8s-master-01 ~]# kubectl apply -f readiness.yaml
pod/readiness created
[root@k8s-master-01 ~]# cat  readiness.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: readiness
  name: readiness

spec:
  restartPolicy: OnFailure
  containers:
  - name: readiness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10
      periodSeconds: 5

# check the status
[root@k8s-master-01 ~]# kubectl describe pods readiness
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  118s               default-scheduler  Successfully assigned default/readiness to k8s-node-02
  Normal   Pulling    116s               kubelet            Pulling image "busybox"
  Normal   Pulled     115s               kubelet            Successfully pulled image "busybox" in 1.230254561s
  Normal   Created    115s               kubelet            Created container readiness
  Normal   Started    115s               kubelet            Started container readiness
  Warning  Unhealthy  3s (x19 over 83s)  kubelet            Readiness probe failed: cat: can't open '/tmp/healthy': No such file or directory
[root@k8s-master-01 ~]# kubectl get pods
NAME        READY   STATUS             RESTARTS        AGE
liveness    0/1     CrashLoopBackOff   6 (2m15s ago)   12m
readiness   0/1     Running            0               2m1s

A worked example

First write a v1 version of the deployment, myapp-v1.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mytest
spec:
  replicas: 10
  selector:
    matchLabels:
      app: mytest
  template:
    metadata:
      labels:
        app: mytest
    spec:
      containers:
      - name: mytest
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 10; touch /tmp/healthy; sleep 30000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5

Apply the file, then check the pods

[root@k8s-master-01 k8s-test]# kubectl apply -f myapp-v1.yaml
[root@k8s-master-01 k8s-test]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
mytest-57f6d8fc75-5kz8q   1/1     Running   0          2m29s
mytest-57f6d8fc75-7jbc7   1/1     Running   0          2m29s
mytest-57f6d8fc75-96zgw   1/1     Running   0          2m29s
mytest-57f6d8fc75-bx99b   1/1     Running   0          2m29s
mytest-57f6d8fc75-cnvmc   1/1     Running   0          2m29s
mytest-57f6d8fc75-fsdvn   1/1     Running   0          2m29s
mytest-57f6d8fc75-gx4tc   1/1     Running   0          2m29s
mytest-57f6d8fc75-hcxcc   1/1     Running   0          2m29s
mytest-57f6d8fc75-mrzqv   1/1     Running   0          2m29s
mytest-57f6d8fc75-sjwpk   1/1     Running   0          2m29s

Next we prepare to update this service, deliberately simulating a broken release so we can observe what happens. Prepare a new config, myapp-v2.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mytest
spec:
  strategy:
    rollingUpdate:
      maxSurge: 35% # upper bound on total replicas during the rolling update (with a base of 10): 10+10*35%=13.5≈14
      maxUnavailable: 35%  # maximum unavailable replicas (both default to 25%); minimum available: 10-10*35%=6.5≈7
  replicas: 10
  selector:
    matchLabels:
      app: mytest
  template:
    metadata:
      labels:
        app: mytest
    spec:
      containers:
      - name: mytest
        image: busybox
        args:
        - /bin/bash
        - -c
        - sleep 30000  ## note that /tmp/healthy is never created here, so the readiness probe below is bound to fail (busybox also has no /bin/bash, so the container itself fails to start, which is why CrashLoopBackOff shows up below)
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy

Apply the v2 file, then check the status again

[root@k8s-master-01 k8s-test]# kubectl apply -f myapp-v2.yaml --record=true
[root@k8s-master-01 k8s-test]# kubectl get pods
NAME                      READY   STATUS              RESTARTS      AGE
mytest-57f6d8fc75-5kz8q   1/1     Running             0             20m
mytest-57f6d8fc75-7jbc7   1/1     Running             0             20m
mytest-57f6d8fc75-96zgw   1/1     Running             0             20m
mytest-57f6d8fc75-cnvmc   1/1     Running             0             20m
mytest-57f6d8fc75-hcxcc   1/1     Running             0             20m
mytest-57f6d8fc75-mrzqv   1/1     Running             0             20m
mytest-57f6d8fc75-sjwpk   1/1     Running             0             20m
mytest-76dbfff446-4n48d   0/1     CrashLoopBackOff    4 (47s ago)   5m3s
mytest-76dbfff446-db2wg   0/1     CrashLoopBackOff    4 (62s ago)   5m4s
mytest-76dbfff446-fvt96   0/1     CrashLoopBackOff    3 (65s ago)   5m3s
mytest-76dbfff446-hmnw2   0/1     CrashLoopBackOff    3 (49s ago)   5m3s
mytest-76dbfff446-kl987   0/1     CrashLoopBackOff    4 (78s ago)   5m4s
mytest-76dbfff446-ngfgh   0/1     RunContainerError   4 (13s ago)   5m4s
mytest-76dbfff446-x6848   0/1     CrashLoopBackOff    4 (34s ago)   5m4s

Putting the above together, we have realistically simulated rolling out broken code on K8s. Fortunately the Readiness part of the health check shielded the faulty replicas for us, so they never received outside traffic, and most of the old-version pods were kept, so the service as a whole was not affected by this failed update.

Next, let's analyze in detail how the rolling update works: why did the new version above create 7 pods while only 3 old-version pods were destroyed?

If we do not configure this block explicitly, both values default to 25%

  strategy:
    rollingUpdate:
      maxSurge: 35% # upper bound on total replicas during the rolling update (with a base of 10): 10+10*35%=13.5≈14
      maxUnavailable: 35%  # maximum unavailable replicas (both default to 25%); minimum available: 10-10*35%=6.5≈7

A rolling update uses the parameters maxSurge and maxUnavailable to control how pod replicas are replaced.

maxSurge

This parameter controls the upper bound by which the total number of pod replicas may exceed the desired replica count during a rolling update. maxSurge can be an absolute integer (e.g. 3) or a percentage, rounded up. The default is 25%.

In the test above the desired replica count is 10, so during the update the upper bound on the total replica count is computed as:

10 + 10 * 35% = 13.5 -> 14

Look at the description of the updated deployment:

[root@k8s-master-01 k8s-test]# kubectl describe deploy mytest
Name:                   mytest
Namespace:              default
CreationTimestamp:      Wed, 03 Aug 2022 14:02:48 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=mytest
Replicas:               10 desired | 7 updated | 14 total | 7 available | 7 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  35% max unavailable, 35% max surge
# 7 available old-version pods + 7 unavailable new-version pods = 14 total
maxUnavailable

This parameter controls the total number of pod replicas that may be unavailable during the rolling update. Likewise, maxUnavailable can be an absolute integer (e.g. 3) or a percentage, rounded down. The default is 25%.

In the test above the desired replica count is 10, so the number of pod replicas that must remain available is:

10 - 10 * 35% = 6.5 --> 7

That is why the description above shows 7 available: the number of pods that must stay available is 7.

The larger maxSurge is, the more new replicas are created up front; the larger maxUnavailable is, the more old replicas are destroyed up front.

In the ideal case, the rolling update for this release would proceed as follows:

  1. First create 4 new-version pods, bringing the total replica count to 14
  2. Then destroy 3 old-version pods, dropping the available replica count to 7
  3. Once those 3 old-version pods are gone, 3 more new-version pods can be created, keeping the total at 14
  4. Once the new-version pods pass the Readiness check, the number of available pods rises above 7
  5. More old-version pods can then be destroyed, bringing the available count back down to 7
  6. As old-version pods are destroyed, the total drops below 14, which allows more new-version pods to be created
  7. This create-and-destroy cycle keeps going until all old-version pods have been replaced by new-version pods and the rolling update is complete

In our case, however, the process gets stuck at step 4: the new-version pods cannot pass the Readiness check. The Conditions and Events at the end of the description above tell the same story in detail:

Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  mytest-57f6d8fc75 (7/7 replicas created)
NewReplicaSet:   mytest-76dbfff446 (7/7 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  22m    deployment-controller  Scaled up replica set mytest-57f6d8fc75 to 10
  Normal  ScalingReplicaSet  7m28s  deployment-controller  Scaled up replica set mytest-76dbfff446 to 4
  Normal  ScalingReplicaSet  7m28s  deployment-controller  Scaled down replica set mytest-57f6d8fc75 to 7
  Normal  ScalingReplicaSet  7m28s  deployment-controller  Scaled up replica set mytest-76dbfff446 to 7

Following the normal production procedure, after collecting enough error information about the new version for the developers to analyze, we can use kubectl rollout undo to roll back to the last good version of the service:

[root@k8s-master-01 k8s-test]# kubectl rollout undo deploy mytest --to-revision=1
deployment.apps/mytest rolled back
# check the pods
[root@k8s-master-01 k8s-test]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
mytest-57f6d8fc75-5kz8q   1/1     Running   0          27m
mytest-57f6d8fc75-7jbc7   1/1     Running   0          27m
mytest-57f6d8fc75-96zgw   1/1     Running   0          27m
mytest-57f6d8fc75-cnvmc   1/1     Running   0          27m
mytest-57f6d8fc75-hcxcc   1/1     Running   0          27m
mytest-57f6d8fc75-mrzqv   1/1     Running   0          27m
mytest-57f6d8fc75-q68sm   1/1     Running   0          50s
mytest-57f6d8fc75-sjwpk   1/1     Running   0          27m
mytest-57f6d8fc75-x7n5c   1/1     Running   0          50s
mytest-57f6d8fc75-xnvrk   1/1     Running   0          50s

Pod creation flow

On the master node

  • When a pod is created, the request first goes to the API Server, which stores the pod information in Etcd
  • After Etcd stores it, it returns the result to the API Server, confirming that the write succeeded
  • The Scheduler then watches the API Server for new Pods; when one appears, it reads the pod information stored in Etcd via the API Server and, using its scheduling algorithm, assigns the pod to a node

On the node

  • On the node, kubelet reads (via the API Server, from etcd) the pods assigned to the current node, and creates the containers with docker
  • Once creation succeeds, the result is reported back to kubelet, which updates the Pod status in the API Server; the API Server then persists the updated state to etcd
  • After the update, Etcd confirms to the API Server, which in turn confirms to kubelet

Node affinity

Taints and tolerations

Pod application status

Pod phase

  • Pending: the Pod is being created but not all of its containers have been created yet; this phase includes the time spent waiting for the Pod to be scheduled and the time spent downloading images over the network.
  • Running: the Pod has been bound to a node, all of its containers have been created, and at least one container is running, starting, or restarting.
  • Succeeded: all containers in the Pod have terminated successfully and will not be restarted.
  • Failed: all containers in the Pod have terminated, and at least one of them exited with a failure, i.e. exited with a non-zero status or was terminated by the system.
  • Unknown: the Pod's state cannot be obtained for some reason, usually because communication with the Pod's host failed.

Container state

  • Waiting: the container is still performing the operations needed to start, such as pulling an image. The Reason field explains why it is waiting.
  • Running: the container is running without problems.
  • Terminated: the container finished normally, or exited with a failure for some reason.

The conditions section

  • Type: the name of this Pod condition.
  • Initialized: if True, all init containers have started successfully.
  • Ready: if True, the Pod can accept requests and serve traffic, and it is added to the endpoints of the matching Service.
  • ContainersReady: if True, all containers in the Pod are ready.
  • PodScheduled: if True, the Pod has been scheduled to a node.

The idea behind conditions is that K8s tracks many of these small sub-states, and the aggregation of these sub-states becomes the Pod's top-level Status.

The events section

In K8s, every transition between these states produces a corresponding event, and events come in two kinds: Normal and Warning.
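For example, the events of a single Pod can be listed with kubectl (the pod name here is illustrative):

kubectl describe pod busybox     # the Events section is printed at the bottom
kubectl get events --field-selector involvedObject.name=busybox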

Application troubleshooting

Pod stuck in Pending

Pending means the scheduler has not placed the Pod yet. Troubleshoot by looking at the events with kubectl describe pod; the cause is usually related to resource usage.

Pod stuck in Waiting

A container state of Waiting usually means the pod cannot pull its image. The cause may be that the image is in a private registry and no pull secret is configured, that the image address does not exist, or that it is a public-internet image the nodes cannot reach. A minimal pull-secret sketch follows.
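If the cause is a private registry, a common fix is to create a docker-registry secret and reference it from the Pod; the registry address and credentials below are placeholders:

kubectl create secret docker-registry my-registry-key \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password>

# then reference it in the Pod spec
spec:
  imagePullSecrets:
  - name: my-registry-key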

Pod keeps being restarted and can be seen crashing

The pod keeps being restarted and you see something like back-off. This usually means the pod has been scheduled but fails to start, typically because of configuration or permission problems, and you need to look at the application's own logs to analyze it.
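To get at the application's own logs, something like the following is usually enough (the pod name is illustrative):

kubectl logs <pod-name>              # logs of the current container
kubectl logs <pod-name> --previous   # logs of the previous, crashed container
kubectl describe pod <pod-name>      # restart count, last state and exit code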

Pod is Running but not working correctly (not serving traffic properly)

This is usually caused by misspelled fields in the yaml file: the yaml was applied, but some fields never took effect. You can troubleshoot by validating the deployment, e.g. kubectl apply --validate -f demo.yaml

Service not working

A service is associated with the pods underneath it through a selector: the pods carry labels, and the service matches those labels to link itself to the pods.

If the labels are misconfigured, the service may fail to find any endpoints behind it, and as a result cannot serve traffic.

So when a service misbehaves, the first thing to check is whether there is actually an endpoint behind it, and the second is whether that endpoint itself serves traffic correctly.
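A quick checklist with kubectl, reusing the mynginx-svc example that appears later in these notes (the names are illustrative):

kubectl get endpoints mynginx-svc      # is there at least one endpoint behind the Service?
kubectl describe svc mynginx-svc       # which selector is the Service using?
kubectl get pods --show-labels         # do the Pod labels actually match that selector?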

Pod manifest

apiVersion: v1      # required; API version, e.g. v1
kind: Pod           # required; resource type, e.g. Pod
metadata:           # required; metadata
  name: string      # required; Pod name
  namespace: string # namespace the Pod belongs to, defaults to "default"
  labels:           # custom labels (key/value map)
    key: string
spec:               # required; detailed definition of the containers in the Pod
  containers:       # required; list of containers in the Pod
  - name: string    # required; container name
    image: string   # required; container image name
    imagePullPolicy: [ Always|Never|IfNotPresent ]  # image pull policy
    command: [string]   # container start command list; if not specified, the image's default start command is used
    args: [string]      # arguments for the start command
    workingDir: string  # container working directory
    volumeMounts:       # volumes mounted into the container
    - name: string      # name of a shared volume defined in the pod; must match a name defined in the volumes[] section
      mountPath: string # absolute mount path inside the container, should be shorter than 512 characters
      readOnly: boolean # whether the mount is read-only
    ports:              # list of ports to expose
    - name: string        # port name
      containerPort: int  # port the container listens on
      hostPort: int       # port to listen on on the host; defaults to the same value as containerPort
      protocol: string    # port protocol, TCP or UDP, defaults to TCP
    env:                # environment variables to set before the container runs
    - name: string      # variable name
      value: string     # variable value
    resources:          # resource limits and requests
      limits:           # resource limits
        cpu: string     # CPU limit in cores, passed to docker run --cpu-shares
        memory: string  # memory limit, in units such as Mib/Gib, passed to docker run --memory
      requests:         # resource requests
        cpu: string     # CPU request, the initial amount available when the container starts
        memory: string  # memory request, the initial amount available when the container starts
    lifecycle:          # lifecycle hooks
      postStart:        # runs right after the container starts; if it fails, the container is restarted according to the restart policy
      preStop:          # runs before the container terminates; the container terminates regardless of the result
    livenessProbe:      # health check for the containers in the Pod; after several unanswered probes the container is restarted automatically
      exec:             # exec-style check
        command: [string]  # command or script to run
      httpGet:          # HttpGet-style check; Path and port must be specified
        path: string
        port: number
        host: string
        scheme: string
        httpHeaders:
        - name: string
          value: string
      tcpSocket:        # tcpSocket-style check
        port: number
      initialDelaySeconds: 0  # seconds after the container starts before the first probe
      timeoutSeconds: 0       # probe timeout in seconds, defaults to 1
      periodSeconds: 0        # probe interval in seconds, defaults to 10
      successThreshold: 0
      failureThreshold: 0
    securityContext:
      privileged: false
  restartPolicy: [Always | Never | OnFailure]  # restart policy of the Pod
  nodeName: <string>      # schedule the Pod to the node with this name
  nodeSelector: object    # schedule the Pod to a node carrying these labels
  imagePullSecrets:       # names of the secrets used when pulling images
  - name: string
  hostNetwork: false      # whether to use host networking; defaults to false, true means use the host's network
  volumes:                # shared volumes defined on this pod
  - name: string          # shared volume name (there are many volume types)
    emptyDir: {}          # emptyDir volume: a temporary directory sharing the Pod's lifecycle; value is empty
    hostPath:             # hostPath volume: mounts a directory from the host the Pod runs on
      path: string        # directory on the host that will be used as the mount source
    secret:               # secret volume: mounts a secret object defined in the cluster into the container
      secretName: string
      items:
      - key: string
        path: string
    configMap:            # configMap volume: mounts a predefined configMap object into the container
      name: string
      items:
      - key: string
        path: string

service

An abstract way to expose an application running on a set of Pods as a network service.

With Kubernetes you do not need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses, gives a set of Pods a single DNS name, and can load-balance across them.

Kubernetes creates and destroys Pods to match the cluster's desired state. Pods are not permanent resources. If you run your application with a Deployment, it can create and destroy Pods dynamically.

Each Pod gets its own IP address, but within a Deployment the set of Pods running at one moment may differ from the set running the application a little later.

This leads to a problem: if a set of Pods (call them "backends") provides functionality to other Pods in the cluster (call them "frontends"), how do the frontends discover and keep track of which IP addresses to connect to, so that they can use the backend part of the workload?

Service -ClusterIP

# expose the port via the command line
# [root@k8s-master-01 k8s-test]# kubectl expose deployment mynginx --port=80 --target-port=80
[root@k8s-master-01 k8s-test]# cat nginx-service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mynginx
  name: mynginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mynginx
  strategy: {}
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - name: mynginx
        image: nginx
        ports:
        - containerPort: 80
          name: mynginx
---
apiVersion: v1
kind: Service
metadata:
  name: mynginx-svc
spec:
  type: ClusterIP   # optional; defaults to ClusterIP
  selector:
    app: mynginx
  ports:
  - name: mynginx-svc-port
    protocol: TCP
    port: 80
    targetPort: mynginx
status:
  loadBalancer: {}

Check the service and pods, then access the service

[root@k8s-master-01 k8s-test]# kubectl get svc,pods -owide
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE    SELECTOR
service/kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP   162d   <none>
service/mynginx-svc   ClusterIP   10.108.160.124   <none>        80/TCP    70s    app=mynginx

NAME                           READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE   READINESS GATES
pod/mynginx-58b9b5b545-f7f6l   1/1     Running   0          70s   10.244.2.39   k8s-node-02   <none>           <none>

# check the endpoints
[root@k8s-master-01 k8s-test]# kubectl get endpoints
NAME          ENDPOINTS             AGE
kubernetes    192.168.152.23:6443   162d
mynginx-svc   10.244.2.39:80        23m

# access the service
[root@k8s-master-01 k8s-test]# curl 10.108.160.124
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

# scale up to see whether requests are distributed round-robin across the pods
[root@k8s-master-01 k8s-test]# kubectl scale deploy mynginx --replicas=2
deployment.apps/mynginx scaled
[root@k8s-master-01 k8s-test]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
mynginx-58b9b5b545-f7f6l   1/1     Running   0          25m
mynginx-58b9b5b545-rthdd   1/1     Running   0          40s

# modify the index file in each pod
[root@k8s-master-01 k8s-test]# kubectl exec -it mynginx-58b9b5b545-f7f6l bash
root@mynginx-58b9b5b545-f7f6l:/# echo 111 > /usr/share/nginx/html/index.html
root@mynginx-58b9b5b545-f7f6l:/# exit
exit
[root@k8s-master-01 k8s-test]# kubectl exec -it mynginx-58b9b5b545-rthdd bash
root@mynginx-58b9b5b545-rthdd:/#  echo 222 > /usr/share/nginx/html/index.html
root@mynginx-58b9b5b545-rthdd:/# exit
exit

# access the service again
[root@k8s-master-01 k8s-test]# curl 10.108.160.124
111
[root@k8s-master-01 k8s-test]# curl 10.108.160.124
222

Service -NodePort

# this can also be done with a command: kubectl patch svc nginx -p '{"spec":{"type":"NodePort"}}'
[root@k8s-master-01 k8s-test]# vi nginx-service.yaml
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mynginx
  strategy: {}
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - name: mynginx
        image: nginx
        ports:
        - containerPort: 80
          name: mynginx
---
apiVersion: v1
kind: Service
metadata:
  name: mynginx-svc
spec:
  type: NodePort # change this field
  selector:
    app: mynginx
  ports:
  - name: mynginx-svc-port
    protocol: TCP
    port: 80 # by default, for convenience, `targetPort` is set to the same value as `port`; map the service's port 80 to the pod's port 80
    targetPort: 80
    nodePort: 30007 # by default, for convenience, the Kubernetes control plane allocates a port from a range (default: 30000-32767)
status:
  loadBalancer: {}

# apply
[root@k8s-master-01 k8s-test]# kubectl apply -f nginx-service.yaml
deployment.apps/mynginx unchanged
service/mynginx-svc configured

# check
[root@k8s-master-01 k8s-test]# kubectl get svc,pod -owide
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE    SELECTOR
service/kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP        162d   <none>
service/mynginx-svc   NodePort    10.108.160.124   <none>        80:30007/TCP   38m    app=mynginx

NAME                           READY   STATUS    RESTARTS   AGE     IP            NODE          NOMINATED NODE   READINESS GATES
pod/mynginx-58b9b5b545-4ptng   1/1     Running   0          7m25s   10.244.1.45   k8s-node-01   <none>           <none>
pod/mynginx-58b9b5b545-f7f6l   1/1     Running   0          38m     10.244.2.39   k8s-node-02   <none>           <none>

# a browser can now reach port 30007 on the master or any node, e.g. http://192.168.152.26:30007/
# note: use the server's IP address plus port 30007
# describe the service
[root@k8s-master-01 k8s-test]# kubectl describe svc mynginx-svc
Name:                     mynginx-svc
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=mynginx
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.160.124
IPs:                      10.108.160.124
Port:                     mynginx-svc-port  80/TCP
TargetPort:               80/TCP
NodePort:                 mynginx-svc-port  30007/TCP
Endpoints:                10.244.1.45:80,10.244.2.39:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

DNS access in k8s

# start a temporary pod that is deleted as soon as you exit
[root@k8s-master-01 k8s-test]# kubectl run -it --rm busybox --image=busybox -- sh
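# (illustrative continuation: assumes CoreDNS is running and the mynginx-svc Service created above still exists)
# inside the busybox shell, the Service name resolves through the cluster DNS:
/ # nslookup mynginx-svc
# the full form <service>.<namespace>.svc.cluster.local also works:
/ # wget -q -O - http://mynginx-svc.default.svc.cluster.local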

Note: a Service can map an incoming port to any targetPort. By default, targetPort is set to the same value as the port field.

Multi-port Service

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9376
    - name: https
      protocol: TCP
      port: 443
      targetPort: 9377
