
Kubernetes (1.8.1) Deployment Notes

1. Environment

Server plan:

IP               Hostname        Role
192.168.119.180  k8s-0, etcd-1   Master, etcd, NFS Server
192.168.119.181  k8s-1, etcd-2   Minion, etcd
192.168.119.182  k8s-2, etcd-3   Minion, etcd
192.168.119.183  k8s-3           Minion

OS and software versions:

OS:      CentOS Linux release 7.3.1611 (Core)
ETCD:    etcd-v3.2.9-linux-amd64
Flannel: flannel.x86_64 0.7.1-2.el7
Docker:  docker.x86_64 2:1.12.6-61.git85d7426.el7.centos
K8S:     v1.8.1

The hosts file is identical on all servers:

[root@k8s-0 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.119.180 k8s-0 etcd-1
192.168.119.181 k8s-1 etcd-2
192.168.119.182 k8s-2 etcd-3
192.168.119.183 k8s-3

Disable the firewall on every node:

[root@k8s-0 ~]# systemctl stop firewalld
[root@k8s-0 ~]# systemctl disable firewalld
[root@k8s-0 ~]# systemctl status firewalld

Unless you are very comfortable with firewalld, it is best to disable it up front and avoid digging yourself into a hole!
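If you would rather keep firewalld running, a sketch of opening just the ports this guide uses (the port numbers are taken from the configuration later in this document):

firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client and peer traffic
firewall-cmd --permanent --add-port=6443/tcp        # kube-apiserver secure port
firewall-cmd --permanent --add-port=8080/tcp        # kube-apiserver insecure port
firewall-cmd --permanent --add-port=10250/tcp       # kubelet
firewall-cmd --reload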

Disable swap on every node:

[root@k8s-0 ~]# swapoff -a

2. ETCD Cluster Deployment

etcd is a distributed key value store that provides a reliable way to store data across a cluster of machines. It’s open-source and available on GitHub. etcd gracefully handles leader elections during network partitions and will tolerate machine failure, including the leader.

In this cluster, ETCD acts as the distributed data store, holding configuration data for kubernetes and flanneld, e.g. kubernetes node information and flanneld's subnet allocations.

2.1. Creating Certificates

For background on certificate concepts, see: Introduction to Network Security Basics.

ETCD can be configured with TLS certificates for both client-to-server and server-to-server (peer) authentication. Every node in the cluster therefore needs its own certificate, and any client accessing the ETCD cluster must present one as well. On top of that, all certificates in the cluster must trust one another. In other words, we need a CA that produces a self-signed root certificate and then issues a certificate for each cluster node and for each client that accesses the cluster. Every node in the ETCD cluster must trust the root certificate and uses it to verify the authenticity of the other certificates. The certificates for each cluster role and their relationships are shown in the figure below:

(Figure: the certificates used by each cluster role and the trust relationships between them)
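Once the certificates exist (section 2.1.2), this trust relationship can be sanity-checked with openssl, e.g.:

# Verify that a node certificate chains back to our self-signed root CA
openssl verify -CAfile ca.pem certificates-node-1.pem
# expected output: certificates-node-1.pem: OK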

We do not need to run a full CA server here; it is enough to produce the root certificate and the certificates the other cluster roles need, and to distribute a copy of the root certificate to every role in the cluster. To generate the certificates we use CFSSL, CloudFlare's open-source PKI toolkit.

2.1.1. Installing CFSSL

### Download ###
[root@k8s-0 ~]# curl -s -L -o ./cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-0 ~]# curl -s -L -o ./cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-0 ~]# ls -l cf*
-rw-r--r--. 1 root root 10376657 11月 7 04:25 cfssl
-rw-r--r--. 1 root root  2277873 11月 7 04:27 cfssljson
### Make executable ###
[root@k8s-0 ~]# chmod +x cf*
[root@k8s-0 ~]# ls -l cf*
-rwxr-xr-x. 1 root root 10376657 11月 7 04:25 cfssl
-rwxr-xr-x. 1 root root  2277873 11月 7 04:27 cfssljson
### Move to /usr/local/bin ###
[root@k8s-0 ~]# mv cf* /usr/local/bin/
### Test ###
[root@k8s-0 ~]# cfssl version
Version: 1.2.0
Revision: dev
Runtime: go1.6

2.1.2. Generating the Certificate Files

Use cfssl to generate a CA certificate signing request template in JSON format:

### Generate the default template ###
[root@k8s-0 etcd]# pwd
/root/cfssl/etcd
[root@k8s-0 etcd]# cfssl print-defaults csr > ca-csr.json
[root@k8s-0 etcd]# ls -l
-rw-r--r--. 1 root root 287 11月 7 05:11 ca-csr.json

csr stands for certificate signing request.

Edit the ca-csr.json template; the modified file looks like this:

{
  "CN": "ETCD-Cluster",
  "hosts": ["localhost", "127.0.0.1", "etcd-1", "etcd-2", "etcd-3"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "L": "Wuhan", "ST": "Hubei", "O": "Dameng", "OU": "CloudPlatform" }
  ]
}

hosts lists the hosts on which this certificate may be used.

Generate the root certificate and private key from the CSR:

[root@k8s-0 etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2017/11/07 05:19:23 [INFO] generating a new CA key and certificate from CSR
2017/11/07 05:19:23 [INFO] generate received request
2017/11/07 05:19:23 [INFO] received CSR
2017/11/07 05:19:23 [INFO] generating key: rsa-2048
2017/11/07 05:19:24 [INFO] encoded CSR
2017/11/07 05:19:24 [INFO] signed certificate with serial number 72023613742258533689603590346479034316827863176
[root@k8s-0 etcd]# ls -l
-rw-r--r--. 1 root root 1106 11月 7 05:19 ca.csr
-rw-r--r--. 1 root root  390 11月 7 05:19 ca-csr.json
-rw-------. 1 root root 1675 11月 7 05:19 ca-key.pem
-rw-r--r--. 1 root root 1403 11月 7 05:19 ca.pem

ca.pem is the certificate file, containing the CA's public key.

ca-key.pem is the private key; keep it safe.

ca.csr is the certificate signing request; it can be reused to request a new certificate.

Use cfssl to generate the signing-policy template, which tells the CA what kinds of certificates it may issue:

[root@k8s-0 etcd]# pwd
/root/cfssl/etcd
[root@k8s-0 etcd]# cfssl print-defaults config > ca-config.json
[root@k8s-0 etcd]# ls -l
-rw-r--r--. 1 root root  567 11月 7 05:39 ca-config.json
-rw-r--r--. 1 root root 1106 11月 7 05:19 ca.csr
-rw-r--r--. 1 root root  390 11月 7 05:19 ca-csr.json
-rw-------. 1 root root 1675 11月 7 05:19 ca-key.pem
-rw-r--r--. 1 root root 1403 11月 7 05:19 ca.pem

Edit the ca-config.json template; the edited file looks like this:

{
  "signing": {
    "default": { "expiry": "43800h" },
    "profiles": {
      "server": {
        "expiry": "43800h",
        "usages": ["signing", "key encipherment", "server auth"]
      },
      "client": {
        "expiry": "43800h",
        "usages": ["signing", "key encipherment", "client auth"]
      },
      "peer": {
        "expiry": "43800h",
        "usages": ["signing", "key encipherment", "client auth", "server auth"]
      }
    }
  }
}

1. The default certificate lifetime is 43800 hours (5 years).

2. Three profiles:

   - server: for server authentication; kept on the server to prove its identity
   - client: for client authentication; kept on the client to prove its identity
   - peer: usable for both server and client authentication

Create the server certificate signing request JSON file (certificates-node-1.json):

[root@k8s-0 etcd]# pwd
/root/cfssl/etcd
### Generate the template first, then fill in your own values ###
[root@k8s-0 etcd]# cfssl print-defaults csr > certificates-node-1.json
[root@k8s-0 etcd]# ls -l
-rw-r--r--. 1 root root  833 11月 7 06:00 ca-config.json
-rw-r--r--. 1 root root 1106 11月 7 05:19 ca.csr
-rw-r--r--. 1 root root  390 11月 7 05:19 ca-csr.json
-rw-------. 1 root root 1675 11月 7 05:19 ca-key.pem
-rw-r--r--. 1 root root 1403 11月 7 05:19 ca.pem
-rw-r--r--. 1 root root  287 11月 7 06:01 certificates-node-1.json
### The modified file looks like this ###
[root@k8s-0 etcd]# cat certificates-node-1.json
{
  "CN": "etcd-node-1",
  "hosts": ["etcd-1", "localhost", "127.0.0.1"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "L": "Wuhan", "ST": "Hubei", "O": "Dameng", "OU": "CloudPlatform" }
  ]
}
### Issue the certificate using the CA private key, CA certificate and signing policy ###
[root@k8s-0 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server certificates-node-1.json | cfssljson -bare certificates-node-1
2017/11/07 06:23:02 [INFO] generate received request
2017/11/07 06:23:02 [INFO] received CSR
2017/11/07 06:23:02 [INFO] generating key: rsa-2048
2017/11/07 06:23:03 [INFO] encoded CSR
2017/11/07 06:23:03 [INFO] signed certificate with serial number 50773407225485518252456721207664284207973931225
2017/11/07 06:23:03 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-0 etcd]# ls -l
-rw-r--r--. 1 root root  833 11月 7 06:00 ca-config.json
-rw-r--r--. 1 root root 1106 11月 7 05:19 ca.csr
-rw-r--r--. 1 root root  390 11月 7 05:19 ca-csr.json
-rw-------. 1 root root 1675 11月 7 05:19 ca-key.pem
-rw-r--r--. 1 root root 1403 11月 7 05:19 ca.pem
-rw-r--r--. 1 root root 1082 11月 7 06:23 certificates-node-1.csr
-rw-r--r--. 1 root root  353 11月 7 06:08 certificates-node-1.json
-rw-------. 1 root root 1675 11月 7 06:23 certificates-node-1-key.pem
-rw-r--r--. 1 root root 1452 11月 7 06:23 certificates-node-1.pem

The warning says the certificate lacks a hosts field, which may make it unsuitable for web sites. Dumping the certificate with openssl x509 -in certificates-node-1.pem -text -noout shows that the "X509v3 Subject Alternative Name" field does contain the "hosts" entries from certificates-node-1.json. Generating the certificate instead with cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server -hostname="etcd-1,localhost,127.0.0.1" certificates-node-1.json | cfssljson -bare certificates-node-1 produces no warning, but the resulting certificate has exactly the same contents.
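For reference, the two commands mentioned above as a copy-pasteable block:

# Inspect the issued certificate; the SANs appear under "X509v3 Subject Alternative Name"
openssl x509 -in certificates-node-1.pem -text -noout | grep -A 1 "Subject Alternative Name"
# Passing -hostname explicitly suppresses the warning; the resulting certificate is identical
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server \
  -hostname="etcd-1,localhost,127.0.0.1" certificates-node-1.json | cfssljson -bare certificates-node-1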

Tip: all nodes in the ETCD cluster could share a single certificate and private key, i.e. distribute the certificates-node-1.pem and certificates-node-1-key.pem files to the etcd-1, etcd-2 and etcd-3 servers.
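Note that for one shared certificate to pass hostname verification on every node, its SAN list must cover all the node names. A minimal sketch (the certificates-shared output name is hypothetical, not something this guide creates):

# One certificate whose SANs cover all three etcd nodes
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server \
  -hostname="etcd-1,etcd-2,etcd-3,localhost,127.0.0.1" \
  certificates-node-1.json | cfssljson -bare certificates-shared
# Then distribute certificates-shared.pem and certificates-shared-key.pem to every node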

Repeat the previous step to issue the certificates for etcd-2 and etcd-3:

[root@k8s-0 etcd]# pwd
/root/cfssl/etcd
[root@k8s-0 etcd]# cat certificates-node-2.json
{
  "CN": "etcd-node-2",
  "hosts": ["etcd-2", "localhost", "127.0.0.1"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "L": "Wuhan", "ST": "Hubei", "O": "Dameng", "OU": "CloudPlatform" }
  ]
}
[root@k8s-0 etcd]# cat certificates-node-3.json
{
  "CN": "etcd-node-3",
  "hosts": ["etcd-3", "localhost", "127.0.0.1"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "L": "Wuhan", "ST": "Hubei", "O": "Dameng", "OU": "CloudPlatform" }
  ]
}
[root@k8s-0 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server -hostname="etcd-2,localhost,127.0.0.1" certificates-node-2.json | cfssljson -bare certificates-node-2
2017/11/07 06:37:54 [INFO] generate received request
2017/11/07 06:37:54 [INFO] received CSR
2017/11/07 06:37:54 [INFO] generating key: rsa-2048
2017/11/07 06:37:55 [INFO] encoded CSR
2017/11/07 06:37:55 [INFO] signed certificate with serial number 53358189697471981482368171601115864435884153942
[root@k8s-0 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server -hostname="etcd-3,localhost,127.0.0.1" certificates-node-3.json | cfssljson -bare certificates-node-3
2017/11/07 06:38:16 [INFO] generate received request
2017/11/07 06:38:16 [INFO] received CSR
2017/11/07 06:38:16 [INFO] generating key: rsa-2048
2017/11/07 06:38:17 [INFO] encoded CSR
2017/11/07 06:38:17 [INFO] signed certificate with serial number 202032929825719668992436771371275796219870214492
[root@k8s-0 etcd]# ls -l
-rw-r--r--. 1 root root  833 11月 7 06:00 ca-config.json
-rw-r--r--. 1 root root 1106 11月 7 05:19 ca.csr
-rw-r--r--. 1 root root  390 11月 7 05:19 ca-csr.json
-rw-------. 1 root root 1675 11月 7 05:19 ca-key.pem
-rw-r--r--. 1 root root 1403 11月 7 05:19 ca.pem
-rw-r--r--. 1 root root 1082 11月 7 06:23 certificates-node-1.csr
-rw-r--r--. 1 root root  353 11月 7 06:08 certificates-node-1.json
-rw-------. 1 root root 1675 11月 7 06:23 certificates-node-1-key.pem
-rw-r--r--. 1 root root 1452 11月 7 06:23 certificates-node-1.pem
-rw-r--r--. 1 root root 1082 11月 7 06:37 certificates-node-2.csr
-rw-r--r--. 1 root root  353 11月 7 06:36 certificates-node-2.json
-rw-------. 1 root root 1675 11月 7 06:37 certificates-node-2-key.pem
-rw-r--r--. 1 root root 1452 11月 7 06:37 certificates-node-2.pem
-rw-r--r--. 1 root root 1082 11月 7 06:38 certificates-node-3.csr
-rw-r--r--. 1 root root  353 11月 7 06:37 certificates-node-3.json
-rw-------. 1 root root 1679 11月 7 06:38 certificates-node-3-key.pem
-rw-r--r--. 1 root root 1452 11月 7 06:38 certificates-node-3.pem

Distribute the certificates to the corresponding nodes:

[root@k8s-0 etcd]# pwd
/root/cfssl/etcd
### Create the certificate directories ###
[root@k8s-0 etcd]# mkdir -p /etc/etcd/ssl
[root@k8s-0 etcd]# ssh root@k8s-1 mkdir -p /etc/etcd/ssl
[root@k8s-0 etcd]# ssh root@k8s-2 mkdir -p /etc/etcd/ssl
### Copy each server's certificate files into its directory ###
[root@k8s-0 etcd]# cp ca.pem /etc/etcd/ssl/
[root@k8s-0 etcd]# cp certificates-node-1.pem /etc/etcd/ssl/
[root@k8s-0 etcd]# cp certificates-node-1-key.pem /etc/etcd/ssl/
[root@k8s-0 etcd]# ls -l /etc/etcd/ssl/
-rw-r--r--. 1 root root 1403 11月 7 19:56 ca.pem
-rw-------. 1 root root 1675 11月 7 19:57 certificates-node-1-key.pem
-rw-r--r--. 1 root root 1452 11月 7 19:55 certificates-node-1.pem
### Copy the files to the k8s-1 node ###
[root@k8s-0 etcd]# scp ca.pem root@k8s-1:/etc/etcd/ssl/
[root@k8s-0 etcd]# scp certificates-node-2.pem root@k8s-1:/etc/etcd/ssl/
[root@k8s-0 etcd]# scp certificates-node-2-key.pem root@k8s-1:/etc/etcd/ssl/
[root@k8s-0 etcd]# ssh root@k8s-1 ls -l /etc/etcd/ssl/
-rw-r--r--. 1 root root 1403 11月 7 19:58 ca.pem
-rw-------. 1 root root 1675 11月 7 20:00 certificates-node-2-key.pem
-rw-r--r--. 1 root root 1452 11月 7 19:59 certificates-node-2.pem
### Copy the files to the k8s-2 node ###
[root@k8s-0 etcd]# scp ca.pem root@k8s-2:/etc/etcd/ssl/
[root@k8s-0 etcd]# scp certificates-node-3.pem root@k8s-2:/etc/etcd/ssl/
[root@k8s-0 etcd]# scp certificates-node-3-key.pem root@k8s-2:/etc/etcd/ssl/
[root@k8s-0 etcd]# ssh root@k8s-2 ls -l /etc/etcd/ssl/
-rw-r--r--. 1 root root 1403 11月 7 20:03 ca.pem
-rw-------. 1 root root 1675 11月 7 20:04 certificates-node-3-key.pem
-rw-r--r--. 1 root root 1452 11月 7 20:03 certificates-node-3.pem

To view a certificate's contents: openssl x509 -in ca.pem -text -noout

2.2. Deploying the ETCD Cluster

Download and unpack the release tarball:

[root@k8s-0 ~]# pwd
/root
[root@k8s-0 ~]# wget http
[root@k8s-0 ~]# ls -l
-rw-r--r--. 1 root root 10176896 11月 6 19:18 etcd-v3.2.9-linux-amd64.tar.gz
[root@k8s-0 ~]# tar -zxvf etcd-v3.2.9-linux-amd64.tar.gz
[root@k8s-0 ~]# ls -l
drwxrwxr-x. 3 chenlei chenlei      123 10月 7 01:10 etcd-v3.2.9-linux-amd64
-rw-r--r--. 1 root    root    10176896 11月 6 19:18 etcd-v3.2.9-linux-amd64.tar.gz
[root@k8s-0 ~]# ls -l etcd-v3.2.9-linux-amd64
drwxrwxr-x. 11 chenlei chenlei     4096 10月 7 01:10 Documentation
-rwxrwxr-x.  1 chenlei chenlei 17123360 10月 7 01:10 etcd
-rwxrwxr-x.  1 chenlei chenlei 14640128 10月 7 01:10 etcdctl
-rw-rw-r--.  1 chenlei chenlei    33849 10月 7 01:10 README-etcdctl.md
-rw-rw-r--.  1 chenlei chenlei     5801 10月 7 01:10 README.md
-rw-rw-r--.  1 chenlei chenlei     7855 10月 7 01:10 READMEv2-etcdctl.md
[root@k8s-0 ~]# cp etcd-v3.2.9-linux-amd64/etcd /usr/local/bin/
[root@k8s-0 ~]# cp etcd-v3.2.9-linux-amd64/etcdctl /usr/local/bin/
[root@k8s-0 ~]# etcd --version
etcd Version: 3.2.9
Git SHA: f1d7dd8
Go Version: go1.8.4
Go OS/Arch: linux/amd64

Create the etcd configuration file:

[root@k8s-0 ~]# cat /etc/etcd/etcd.conf
# [member]
ETCD_NAME=etcd-1
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="https://0.0.0.0:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://etcd-1:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-1=https://etcd-1:2380,etcd-2=https://etcd-2:2380,etcd-3=https://etcd-3:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://etcd-1:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
ETCD_CERT_FILE="/etc/etcd/ssl/certificates-node-1.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/certificates-node-1-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/certificates-node-1.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/certificates-node-1-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_PEER_AUTO_TLS="true"
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
#
#[profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"

Create the systemd unit file and the user that runs the service:

[root@k8s-0 ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/local/bin/etcd --name=\"${ETCD_NAME}\" --cert-file=\"${ETCD_CERT_FILE}\" --key-file=\"${ETCD_KEY_FILE}\" --peer-cert-file=\"${ETCD_PEER_CERT_FILE}\" --peer-key-file=\"${ETCD_PEER_KEY_FILE}\" --trusted-ca-file=\"${ETCD_TRUSTED_CA_FILE}\" --peer-trusted-ca-file=\"${ETCD_PEER_TRUSTED_CA_FILE}\" --initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\" --data-dir=\"${ETCD_DATA_DIR}\""
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

### Create the user for the etcd service ###
[root@k8s-0 ~]# useradd etcd -d /var/lib/etcd -s /sbin/nologin -c "etcd user"
### Change the owner of the certificate files to etcd ###
[root@k8s-0 ~]# chown -R etcd:etcd /etc/etcd/
[root@k8s-0 ~]# ls -lR /etc/etcd/
/etc/etcd/:
-rw-r--r--. 1 etcd etcd 1752 11月 7 20:19 etcd.conf
drwxr-xr-x. 2 etcd etcd   86 11月 7 19:57 ssl

/etc/etcd/ssl:
-rw-r--r--. 1 etcd etcd 1403 11月 7 19:56 ca.pem
-rw-------. 1 etcd etcd 1675 11月 7 19:57 certificates-node-1-key.pem
-rw-r--r--. 1 etcd etcd 1452 11月 7 19:55 certificates-node-1.pem

Repeat steps 1-3 above on k8s-1 and k8s-2:

[root@k8s-1 ~]# cat /etc/etcd/etcd.conf
# [member]
ETCD_NAME=etcd-2
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="https://0.0.0.0:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://etcd-2:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-1=https://etcd-1:2380,etcd-2=https://etcd-2:2380,etcd-3=https://etcd-3:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://etcd-2:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
ETCD_CERT_FILE="/etc/etcd/ssl/certificates-node-2.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/certificates-node-2-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/certificates-node-2.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/certificates-node-2-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_PEER_AUTO_TLS="true"
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
#
#[profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
[root@k8s-1 ~]# ls -lR /etc/etcd/
/etc/etcd/:
总用量 4
-rw-r--r--. 1 etcd etcd 1752 11月 7 20:46 etcd.conf
drwxr-xr-x. 2 etcd etcd   86 11月 7 20:00 ssl

/etc/etcd/ssl:
总用量 12
-rw-r--r--. 1 etcd etcd 1403 11月 7 19:58 ca.pem
-rw-------. 1 etcd etcd 1675 11月 7 20:00 certificates-node-2-key.pem
-rw-r--r--. 1 etcd etcd 1452 11月 7 19:59 certificates-node-2.pem

[root@k8s-2 ~]# cat /etc/etcd/etcd.conf
# [member]
ETCD_NAME=etcd-3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="https://0.0.0.0:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://etcd-3:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-1=https://etcd-1:2380,etcd-2=https://etcd-2:2380,etcd-3=https://etcd-3:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://etcd-3:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
ETCD_CERT_FILE="/etc/etcd/ssl/certificates-node-3.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/certificates-node-3-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/certificates-node-3.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/certificates-node-3-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_PEER_AUTO_TLS="true"
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
#
#[profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
[root@k8s-2 ~]# ls -lR /etc/etcd/
/etc/etcd/:
-rw-r--r--. 1 etcd etcd 1752 11月 7 20:50 etcd.conf
drwxr-xr-x. 2 etcd etcd   86 11月 7 20:04 ssl

/etc/etcd/ssl:
-rw-r--r--. 1 etcd etcd 1403 11月 7 20:03 ca.pem
-rw-------. 1 etcd etcd 1675 11月 7 20:04 certificates-node-3-key.pem
-rw-r--r--. 1 etcd etcd 1452 11月 7 20:03 certificates-node-3.pem

Start the etcd service:

### Run on each of the three nodes ###
[root@k8s-0 ~]# systemctl start etcd
[root@k8s-1 ~]# systemctl start etcd
[root@k8s-2 ~]# systemctl start etcd
[root@k8s-0 ~]# systemctl status etcd
[root@k8s-1 ~]# systemctl status etcd
[root@k8s-2 ~]# systemctl status etcd

Check the cluster's health:

### Generate a client certificate ###
[root@k8s-0 etcd]# pwd
/root/cfssl/etcd
[root@k8s-0 etcd]# cat certificates-client.json
{
  "CN": "etcd-client",
  "hosts": ["k8s-0", "k8s-1", "k8s-2", "k8s-3", "localhost", "127.0.0.1"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "L": "Wuhan", "ST": "Hubei", "O": "Dameng", "OU": "CloudPlatform" }
  ]
}
[root@k8s-0 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client -hostname="k8s-0,k8s-1,k8s-2,k8s-3,localhost,127.0.0.1" certificates-client.json | cfssljson -bare certificates-client
2017/11/07 21:22:52 [INFO] generate received request
2017/11/07 21:22:52 [INFO] received CSR
2017/11/07 21:22:52 [INFO] generating key: rsa-2048
2017/11/07 21:22:52 [INFO] encoded CSR
2017/11/07 21:22:52 [INFO] signed certificate with serial number 625476446160272733374126460300662233104566650826
[root@k8s-0 etcd]# ls -l
-rw-r--r--. 1 root root  833 11月 7 06:00 ca-config.json
-rw-r--r--. 1 root root 1106 11月 7 05:19 ca.csr
-rw-r--r--. 1 root root  390 11月 7 05:19 ca-csr.json
-rw-------. 1 root root 1675 11月 7 05:19 ca-key.pem
-rw-r--r--. 1 root root 1403 11月 7 05:19 ca.pem
-rw-r--r--. 1 root root 1110 11月 7 21:22 certificates-client.csr
-rw-r--r--. 1 root root  403 11月 7 21:20 certificates-client.json
-rw-------. 1 root root 1679 11月 7 21:22 certificates-client-key.pem
-rw-r--r--. 1 root root 1476 11月 7 21:22 certificates-client.pem
-rw-r--r--. 1 root root 1082 11月 7 06:23 certificates-node-1.csr
-rw-r--r--. 1 root root  353 11月 7 06:08 certificates-node-1.json
-rw-------. 1 root root 1675 11月 7 06:23 certificates-node-1-key.pem
-rw-r--r--. 1 root root 1452 11月 7 06:23 certificates-node-1.pem
-rw-r--r--. 1 root root 1082 11月 7 06:37 certificates-node-2.csr
-rw-r--r--. 1 root root  353 11月 7 06:36 certificates-node-2.json
-rw-------. 1 root root 1675 11月 7 06:37 certificates-node-2-key.pem
-rw-r--r--. 1 root root 1452 11月 7 06:37 certificates-node-2.pem
-rw-r--r--. 1 root root 1082 11月 7 06:38 certificates-node-3.csr
-rw-r--r--. 1 root root  353 11月 7 06:37 certificates-node-3.json
-rw-------. 1 root root 1679 11月 7 06:38 certificates-node-3-key.pem
-rw-r--r--. 1 root root 1452 11月 7 06:38 certificates-node-3.pem
[root@k8s-0 etcd]# etcdctl --ca-file=ca.pem --cert-file=certificates-client.pem --key-file=certificates-client-key.pem --endpoints=https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379 cluster-health
member 1a147ce6336081c1 is healthy: got healthy result from https://etcd-1:2379
member ce10c39ce110475b is healthy: got healthy result from https://etcd-3:2379
member ed2c681b974a3802 is healthy: got healthy result from https://etcd-2:2379
cluster is healthy

This client certificate can later be reused by kube-apiserver, flanneld, and anything else that needs to connect to the ETCD cluster!
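Beyond cluster-health, a quick read/write smoke test with the same client certificate (the /sanity-check key is just an example; etcd v3.2's etcdctl speaks the v2 API by default):

# Write a key, then read it back, authenticating with the client certificate
etcdctl --ca-file=ca.pem --cert-file=certificates-client.pem \
        --key-file=certificates-client-key.pem \
        --endpoints=https://etcd-1:2379 set /sanity-check ok
etcdctl --ca-file=ca.pem --cert-file=certificates-client.pem \
        --key-file=certificates-client-key.pem \
        --endpoints=https://etcd-1:2379 get /sanity-check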

3. Deploying the Kubernetes Master

The Kubernetes services running on the Master node are kube-apiserver, kube-controller-manager and kube-scheduler. For now, these three services must be deployed on the same server.

The Master node enables TLS and TLS Bootstrapping, and the certificate exchanges involved are quite intricate. To keep the relationships straight, we set up a separate CA for each purpose. The CAs we may need are:

- CA-ApiServer: issues the kube-apiserver certificate
- CA-Client: issues the kubectl certificate, the kube-proxy certificate, and the kubelet auto-signing certificate
- CA-ServiceAccount: issues and verifies Service Account JWT bearer tokens

The kubelet auto-signing certificate is issued by CA-Client, but it is not used directly as the kubelet service's identity certificate. Because TLS Bootstrapping is enabled, each kubelet's identity certificate is issued by Kubernetes itself (most likely by kube-controller-manager), and the auto-signing certificate acts as an intermediate CA responsible for issuing the concrete kubelet identity certificates.

kubectl and kube-proxy obtain kube-apiserver's CA root certificate from their kubeconfig files and use it to verify the kube-apiserver service's identity certificate, while presenting their own identity certificates to kube-apiserver.

Besides these CAs, we also need ETCD's root CA certificate and ETCD's certificates-client pair in order to access the ETCD cluster.

3.1. Creating the CA Certificates

3.1.1. Creating the kubernetes root certificate

[root@k8s-0 kubernetes]# pwd
/root/cfssl/kubernetes
### Generate the kube-apiserver root CA certificate ###
[root@k8s-0 kubernetes]# cat kubernetes-root-ca-csr.json
{
  "CN": "Kubernetes-Cluster",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "L": "Wuhan", "ST": "Hubei", "O": "Dameng", "OU": "CloudPlatform" }
  ]
}
[root@k8s-0 kubernetes]# cfssl gencert -initca kubernetes-root-ca-csr.json | cfssljson -bare kubernetes-root-ca
2017/11/10 19:20:36 [INFO] generating a new CA key and certificate from CSR
2017/11/10 19:20:36 [INFO] generate received request
2017/11/10 19:20:36 [INFO] received CSR
2017/11/10 19:20:36 [INFO] generating key: rsa-2048
2017/11/10 19:20:37 [INFO] encoded CSR
2017/11/10 19:20:37 [INFO] signed certificate with serial number 409390209095238242979736842166999327083180050042
[root@k8s-0 kubernetes]# ls -l
-rw-r--r--. 1 root root 1021 11月 10 19:20 kubernetes-root-ca.csr
-rw-r--r--. 1 root root  279 11月 10 18:04 kubernetes-root-ca-csr.json
-rw-------. 1 root root 1675 11月 10 19:20 kubernetes-root-ca-key.pem
-rw-r--r--. 1 root root 1395 11月 10 19:20 kubernetes-root-ca.pem
### Reuse the ETCD signing-policy file ###
[root@k8s-0 kubernetes]# cp ../etcd/ca-config.json .
[root@k8s-0 kubernetes]# ll
-rw-r--r--. 1 root root  833 11月 10 16:29 ca-config.json
-rw-r--r--. 1 root root 1021 11月 10 19:20 kubernetes-root-ca.csr
-rw-r--r--. 1 root root  279 11月 10 18:04 kubernetes-root-ca-csr.json
-rw-------. 1 root root 1675 11月 10 19:20 kubernetes-root-ca-key.pem
-rw-r--r--. 1 root root 1395 11月 10 19:20 kubernetes-root-ca.pem

3.1.2. Issuing the kubectl certificate from the root certificate

[root@k8s-0 kubernetes]# pwd
/root/cfssl/kubernetes
### kubernetes extracts the certificate's "O" as the "Group" in its RBAC model ###
[root@k8s-0 kubernetes]# cat kubernetes-client-kubectl-csr.json
{
  "CN": "kubectl-admin",
  "hosts": ["localhost", "127.0.0.1", "etcd-1"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "L": "Wuhan", "ST": "Hubei", "O": "system:masters", "OU": "system" }
  ]
}
[root@k8s-0 kubernetes]# cfssl gencert -ca=kubernetes-root-ca.pem -ca-key=kubernetes-root-ca-key.pem -config=ca-config.json -profile=client -hostname="k8s-0,localhost,127.0.0.1" kubernetes-client-kubectl-csr.json | cfssljson -bare kubernetes-client-kubectl
2017/11/10 19:28:53 [INFO] generate received request
2017/11/10 19:28:53 [INFO] received CSR
2017/11/10 19:28:53 [INFO] generating key: rsa-2048
2017/11/10 19:28:53 [INFO] encoded CSR
2017/11/10 19:28:53 [INFO] signed certificate with serial number 48283780181062525775523310004102739160256608492
[root@k8s-0 kubernetes]# ls -l
总用量 40
-rw-r--r--. 1 root root  833 11月 10 16:29 ca-config.json
-rw-r--r--. 1 root root 1086 11月 10 19:28 kubernetes-client-kubectl.csr
-rw-r--r--. 1 root root  356 11月 10 18:17 kubernetes-client-kubectl-csr.json
-rw-------. 1 root root 1675 11月 10 19:28 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 root root 1460 11月 10 19:28 kubernetes-client-kubectl.pem
-rw-r--r--. 1 root root 1021 11月 10 19:20 kubernetes-root-ca.csr
-rw-r--r--. 1 root root  279 11月 10 18:04 kubernetes-root-ca-csr.json
-rw-------. 1 root root 1675 11月 10 19:20 kubernetes-root-ca-key.pem
-rw-r--r--. 1 root root 1395 11月 10 19:20 kubernetes-root-ca.pem

Under Kubernetes' RBAC model, kubernetes extracts the "CN" and "O" fields from the identity certificate a client presents (here kubernetes-client-kubectl.pem) and uses them as the RBAC username and group respectively. The "system:masters" group used here is built into kubernetes and is bound to the built-in role "cluster-admin", which grants access to every API in the cluster. In other words, a client presenting this certificate can call every kubernetes API. The "CN" value is up to you: kubernetes treats it as the username and binds the permissions accordingly.
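For a certificate whose "O" is not a built-in group, you would bind the user explicitly; a hedged sketch (the ops-viewer name is hypothetical, not something this guide creates):

# Bind the RBAC user taken from a certificate's CN to the built-in "view" role
kubectl create clusterrolebinding ops-viewer-binding \
  --clusterrole=view --user=ops-viewer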

3.1.3. Issuing the apiserver certificate from the root certificate

[root@k8s-0 kubernetes]# pwd
/root/cfssl/kubernetes
[root@k8s-0 kubernetes]# cat kubernetes-server-csr.json
{
  "CN": "Kubernetes-Server",
  "hosts": [
    "localhost",
    "127.0.0.1",
    "k8s-0",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "L": "Wuhan", "ST": "Hubei", "O": "Dameng", "OU": "CloudPlatform" }
  ]
}
[root@k8s-0 kubernetes]# cfssl gencert -ca=kubernetes-root-ca.pem -ca-key=kubernetes-root-ca-key.pem -config=ca-config.json -profile=server kubernetes-server-csr.json | cfssljson -bare kubernetes-server
2017/11/10 19:42:40 [INFO] generate received request
2017/11/10 19:42:40 [INFO] received CSR
2017/11/10 19:42:40 [INFO] generating key: rsa-2048
2017/11/10 19:42:40 [INFO] encoded CSR
2017/11/10 19:42:40 [INFO] signed certificate with serial number 136243250541044739203078514726425397097204358889
2017/11/10 19:42:40 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-0 kubernetes]# ls -l
-rw-r--r--. 1 root root  833 11月 10 16:29 ca-config.json
-rw-r--r--. 1 root root 1086 11月 10 19:28 kubernetes-client-kubectl.csr
-rw-r--r--. 1 root root  356 11月 10 18:17 kubernetes-client-kubectl-csr.json
-rw-------. 1 root root 1675 11月 10 19:28 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 root root 1460 11月 10 19:28 kubernetes-client-kubectl.pem
-rw-r--r--. 1 root root 1021 11月 10 19:20 kubernetes-root-ca.csr
-rw-r--r--. 1 root root  279 11月 10 18:04 kubernetes-root-ca-csr.json
-rw-------. 1 root root 1675 11月 10 19:20 kubernetes-root-ca-key.pem
-rw-r--r--. 1 root root 1395 11月 10 19:20 kubernetes-root-ca.pem
-rw-r--r--. 1 root root 1277 11月 10 19:42 kubernetes-server.csr
-rw-r--r--. 1 root root  556 11月 10 19:40 kubernetes-server-csr.json
-rw-------. 1 root root 1675 11月 10 19:42 kubernetes-server-key.pem
-rw-r--r--. 1 root root 1651 11月 10 19:42 kubernetes-server.pem

3.2. Installing the kubectl Command-Line Client

3.2.1. Installation

[root@k8s-0 ~]# pwd
/root
### Download ###
wget http://......
[root@k8s-0 ~]# ls -l
-rw-------. 1 root    root         1510 10月 10 18:47 anaconda-ks.cfg
drwxr-xr-x. 3 root    root           18 11月 7 05:05 cfssl
drwxrwxr-x. 3 chenlei chenlei       123 10月 7 01:10 etcd-v3.2.9-linux-amd64
-rw-r--r--. 1 root    root     10176896 11月 6 19:18 etcd-v3.2.9-linux-amd64.tar.gz
-rw-r--r--. 1 root    root    403881630 11月 8 07:20 kubernetes-server-linux-amd64.tar.gz
### Unpack ###
[root@k8s-0 ~]# tar -zxvf kubernetes-server-linux-amd64.tar.gz
[root@k8s-0 ~]# ls -l
-rw-------. 1 root    root         1510 10月 10 18:47 anaconda-ks.cfg
drwxr-xr-x. 3 root    root           18 11月 7 05:05 cfssl
drwxrwxr-x. 3 chenlei chenlei       123 10月 7 01:10 etcd-v3.2.9-linux-amd64
-rw-r--r--. 1 root    root     10176896 11月 6 19:18 etcd-v3.2.9-linux-amd64.tar.gz
drwxr-x---. 4 root    root           79 10月 12 07:38 kubernetes
-rw-r--r--. 1 root    root    403881630 11月 8 07:24 kubernetes-server-linux-amd64.tar.gz
[root@k8s-0 ~]# cp kubernetes/server/bin/kubectl /usr/local/bin/
[root@k8s-0 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

3.2.2. Configuration

For kubectl to reach the apiserver smoothly, we must give it the following:

- the kube-apiserver service address
- kube-apiserver's root CA certificate (kubernetes-root-ca.pem); since TLS is enabled, we need it to verify the server's identity
- kubectl's own certificate (kubernetes-client-kubectl.pem) and private key (kubernetes-client-kubectl-key.pem)

All of this is defined in kubectl's kubeconfig file:

[root@k8s-0 kubernetes]# pwd
/root/cfssl/kubernetes
### Create the certificate directory ###
[root@k8s-0 kubernetes]# mkdir -p /etc/kubernetes/ssl/
### Copy the certificates kubectl needs into the certificate directory ###
[root@k8s-0 kubernetes]# cp kubernetes-root-ca.pem kubernetes-client-kubectl.pem kubernetes-client-kubectl-key.pem /etc/kubernetes/ssl/
[root@k8s-0 kubernetes]# ls -l /etc/kubernetes/ssl/
-rw-------. 1 root root 1675 11月 10 19:46 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 root root 1460 11月 10 19:46 kubernetes-client-kubectl.pem
-rw-r--r--. 1 root root 1395 11月 10 19:46 kubernetes-root-ca.pem
### Configure kubectl's kubeconfig ###
[root@k8s-0 kubernetes]# kubectl config set-cluster kubernetes-cluster --certificate-authority=/etc/kubernetes/ssl/kubernetes-root-ca.pem --embed-certs=true --server="https://k8s-0:6443"
Cluster "kubernetes-cluster" set.
[root@k8s-0 kubernetes]# kubectl config set-credentials kubernetes-kubectl --client-certificate=/etc/kubernetes/ssl/kubernetes-client-kubectl.pem --embed-certs=true --client-key=/etc/kubernetes/ssl/kubernetes-client-kubectl-key.pem
User "kubernetes-kubectl" set.
[root@k8s-0 kubernetes]# kubectl config set-context kubernetes-cluster-context --cluster=kubernetes-cluster --user=kubernetes-kubectl
Context "kubernetes-cluster-context" created.
[root@k8s-0 kubernetes]# kubectl config use-context kubernetes-cluster-context
Switched to context "kubernetes-cluster-context".
[root@k8s-0 kubernetes]# ls -l ~/.kube/
总用量 8
-rw-------. 1 root root 6445 11月 8 21:37 config

If your kubectl needs to talk to several different cluster environments, you can define multiple contexts and switch between them as needed.

set-cluster configures the cluster address and the CA root certificate; kubernetes-cluster is the cluster's name, somewhat reminiscent of an Oracle TNS alias.

set-credentials configures the client certificate and key, i.e. the user that accesses the cluster; the user's identity is carried in the certificate.

set-context combines a cluster and credentials into an access context; use-context switches between contexts, as sketched below.
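A sketch of juggling a second environment (the staging names are hypothetical):

# Define a second cluster/user/context, then hop between environments
kubectl config set-context staging-context --cluster=staging-cluster --user=staging-admin
kubectl config use-context staging-context             # switch to staging
kubectl config use-context kubernetes-cluster-context  # switch back
kubectl config get-contexts                            # list all contexts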

3.2.3. Test

[root@k8s-0 kubernetes]# kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server k8s-0:6443 was refused - did you specify the right host or port?

The "connection to the server k8s-0:6443" in this message is the address configured in the kubeconfig in the previous step; the service simply is not running yet.

3.3. Installing the kube-apiserver Service

3.3.1. Installation

[root@k8s-0 ~]# pwd
/root
[root@k8s-0 ~]# cp kubernetes/server/bin/kube-apiserver /usr/local/bin/
### Create the unit file ###
[root@k8s-0 ~]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=kube
ExecStart=/usr/local/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBELET_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

In the following steps we create the two EnvironmentFiles: /etc/kubernetes/config and /etc/kubernetes/apiserver. ExecStart points at the location of the kube-apiserver binary.

3.3.2. Configuration

3.3.2.1. Preparing the certificates

[root@k8s-0 kubernetes]# pwd
/root/cfssl/kubernetes
### Copy kubernetes-server.pem and kubernetes-server-key.pem into the certificate directory ###
[root@k8s-0 kubernetes]# cp kubernetes-server.pem kubernetes-server-key.pem /etc/kubernetes/ssl/
### Copy kubernetes-root-ca-key.pem into the certificate directory ###
[root@k8s-0 kubernetes]# cp kubernetes-root-ca-key.pem /etc/kubernetes/ssl/
### Prepare the ETCD client certificate; we simply reuse the client certificate from the ETCD test above ###
[root@k8s-0 etcd]# pwd
/root/cfssl/etcd
[root@k8s-0 etcd]# cp ca.pem /etc/kubernetes/ssl/etcd-root-ca.pem
[root@k8s-0 etcd]# cp certificates-client.pem /etc/kubernetes/ssl/etcd-client-kubernetes.pem
[root@k8s-0 etcd]# cp certificates-client-key.pem /etc/kubernetes/ssl/etcd-client-kubernetes-key.pem
[root@k8s-0 etcd]# ls -l /etc/kubernetes/ssl/
-rw-------. 1 root root 1679 11月 10 19:58 etcd-client-kubernetes-key.pem
-rw-r--r--. 1 root root 1476 11月 10 19:58 etcd-client-kubernetes.pem
-rw-r--r--. 1 root root 1403 11月 10 19:57 etcd-root-ca.pem
-rw-------. 1 root root 1675 11月 10 19:46 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 root root 1460 11月 10 19:46 kubernetes-client-kubectl.pem
-rw-------. 1 root root 1675 11月 10 19:57 kubernetes-root-ca-key.pem
-rw-r--r--. 1 root root 1395 11月 10 19:46 kubernetes-root-ca.pem
-rw-------. 1 root root 1675 11月 10 19:56 kubernetes-server-key.pem
-rw-r--r--. 1 root root 1651 11月 10 19:56 kubernetes-server.pem

Here kubectl and kube-apiserver live on the same server and share the same certificate directory.

3.3.2.2. Preparing the TLS bootstrapping configuration

TLS bootstrapping means client certificates are issued automatically by kube-apiserver, so identity certificates do not have to be prepared by hand. Currently this only works for kubelet: when a kubelet joins the cluster it submits a CSR, and once an administrator approves it, a certificate is issued automatically.

TLS bootstrapping uses token authentication: the ApiServer must first be configured with a token whose authenticated user belongs to the "system:bootstrappers" group. kubelet authenticates with this token, obtains the "system:bootstrappers" group permissions, and then submits its CSR. The token file format is "token,username,userid,groups", for example:

02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,"system:bootstrappers"

The token can be any string containing 128 bits of entropy and can be produced by a secure random number generator.

[root@k8s-0 kubernetes]# pwd
/etc/kubernetes
### Generate the token file; the file name ends in .csv ###
[root@k8s-0 kubernetes]# BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
[root@k8s-0 kubernetes]# cat > token.csv <<EOF
> $BOOTSTRAP_TOKEN,kubelet-bootstrap,10001,"system:bootstrappers"
> EOF
[root@k8s-0 kubernetes]# cat token.csv
4f2c8c078e69cfc8b1ab7d640bbcb6f2,kubelet-bootstrap,10001,"system:bootstrappers"
[root@k8s-0 kubernetes]# ls -l
drwxr-xr-x. 2 kube kube 4096 11月 10 19:58 ssl
-rw-r--r--. 1 root root   80 11月 10 20:00 token.csv
### Configure the kubelet bootstrapping kubeconfig ###
[root@k8s-0 kubernetes]# kubectl config set-cluster kubernetes-cluster --certificate-authority=/etc/kubernetes/ssl/kubernetes-root-ca.pem --embed-certs=true --server="https://k8s-0:6443" --kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes-cluster" set.
### Make sure your ${BOOTSTRAP_TOKEN} variable is still set to the same value as above ###
[root@k8s-0 kubernetes]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=bootstrap.kubeconfig
User "kubelet-bootstrap" set.
[root@k8s-0 kubernetes]# kubectl config set-context kubelet-bootstrap --cluster=kubernetes-cluster --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
Context "kubelet-bootstrap" created.
[root@k8s-0 kubernetes]# kubectl config use-context kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
Switched to context "kubelet-bootstrap".
[root@k8s-0 kubernetes]# ls -l
总用量 8
-rw-------. 1 root root 2265 11月 10 20:03 bootstrap.kubeconfig
drwxr-xr-x. 2 kube kube 4096 11月 10 19:58 ssl
-rw-r--r--. 1 root root   80 11月 10 20:00 token.csv
[root@k8s-0 kubernetes]# cat bootstrap.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQyakNDQXNLZ0F3SUJBZ0lVUjdXeEh5NzdHc3h4S3R5QlJTd1VRMXgzeG5vd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2N6RUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVQTUEwR0ExVUVDaE1HUkdGdFpXNW5NUll3RkFZRFZRUUxFdzFEYkc5MVpGQnNZWFJtYjNKdE1Sc3dHUVlEClZRUURFeEpMZFdKbGNtNWxkR1Z6TFVOc2RYTjBaWEl3SGhjTk1UY3hNVEV3TVRFeE5qQXdXaGNOTWpJeE1UQTUKTVRFeE5qQXdXakJ6TVFzd0NRWURWUVFHRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVApCVmQxYUdGdU1ROHdEUVlEVlFRS0V3WkVZVzFsYm1jeEZqQVVCZ05WQkFzVERVTnNiM1ZrVUd4aGRHWnZjbTB4Ckd6QVpCZ05WQkFNVEVrdDFZbVZ5Ym1WMFpYTXRRMngxYzNSbGNqQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQUQKZ2dFUEFEQ0NBUW9DZ2dFQkFMQ3hXNWhQNjU4RFl3VGFCZ24xRWJIaTBNUnYyUGVCM0Y1b3M5bHZaeXZVVlZZKwpPNU9MR1plU3hZamdYcnVWRm9jTHhUTE1uUldtcmZNaUx6UG9FQlpZZ0czMXpqRzlJMG5kTm55RWVBM0ltYWdBCndsRThsZ2N5VVd6MVA3ZWx0V1FTOThnWm5QK05ieHhCT3Nick1YMytsM0ZKSDZTUXM4NFR3dVo1MVMvbi9kUWoKQ1ZFMkJvME14ZFhZZ3FESkc3MUl2WVRUcjdqWkd4d2VLZCtvWUsvTVc5ZFFjbDNraklkU1BOQUhGTW5lMVRmTwpvdlpwazF6SDRRdEJ3b3FNSHh6ZDhsUG4yd3ZzR3NRZVRkNzdqRTlsTGZjRDdOK3NyL0xiL2VLWHlQbTFPV1c3CmxLOUFtQjNxTmdBc0xZVUxGNTV1NWVQN2ZwS3pTdTU3V1Qzc3hac0NBd0VBQWFObU1HUXdEZ1lEVlIwUEFRSC8KQkFRREFnRUdNQklHQTFVZEV3RUIvd1FJTUFZQkFmOENBUUl3SFFZRFZSME9CQllFRkc4dWNWTk5tKzJtVS9CcApnbURuS2RBK3FMcGZNQjhHQTFVZEl3UVlNQmFBRkc4dWNWTk5tKzJtVS9CcGdtRG5LZEErcUxwZk1BMEdDU3FHClNJYjNEUUVCQ3dVQUE0SUJBUUJiS0pSUG1kSWpRS3E1MWNuS2lYNkV1TzJVakpVYmNYOFFFaWYzTDh2N09IVGcKcnVMY1FDUGRkbHdSNHdXUW9GYU9yZWJTbllwcmduV2EvTE4yN3lyWC9NOHNFeG83WHBEUDJoNUYybllNSFVIcAp2V1hKSUFoR3FjNjBqNmg5RHlDcGhrWVV5WUZoRkovNkVrVEJvZ241S2Z6OE1ITkV3dFdnVXdSS29aZHlGZStwCk1sL3RWOHJkYVo4eXpMY2sxejJrMXdXRDlmSWk2R2VCTG1JTnJ1ZDVVaS9QTGI2Z2YwOERZK0ZTODBIZDhZdnIKM2dTc2VCQURlOXVHMHhZZitHK1V1YUtvMHdNSHc2VGxkWGlqcVQxU0Eyc1M0ZWpGRjl0TldPaVdPcVpLakxjMgpPM2tIYllUOTVYZGQ5MHplUU1KTmR2RTU5WmdIdmpwY09sZlNEdDhOCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://k8s-0:6443
  name: kubernetes-cluster
contexts:
- context:
    cluster: kubernetes-cluster
    user: kubelet-bootstrap
  name: kubelet-bootstrap
current-context: kubelet-bootstrap
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    as-user-extra: {}
    token: 4f2c8c078e69cfc8b1ab7d640bbcb6f2

3.3.2.3. Configuring config

[root@k8s-0 kubernetes]# cat /etc/kubernetes/config
####
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://127.0.0.1:8080"
KUBE_MASTER="--master=http://k8s-0:8080"

3.3.2.4. Configuring apiserver

[root@k8s-0 kubernetes]# pwd
/etc/kubernetes
### Configure the audit log policy ###
[root@k8s-0 kubernetes]# cat audit-policy.yaml
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
[root@k8s-0 ~]# mkdir -p /var/log/kube-audit
[root@k8s-0 ~]# chown kube:kube /var/log/kube-audit/
[root@k8s-0 ~]# ls -l /var/log
drwxr-xr-x. 2 kube kube 23 11月 8 23:57 kube-audit
[root@k8s-0 kubernetes]# cat apiserver
####
# kubernetes system config
#
# The following values are used to configure the kube-apiserver

# The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=192.168.119.180 --bind-address=192.168.119.180 --insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--insecure-port=8080 --secure-port=6443"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction"

# Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC,Node \
    --anonymous-auth=false \
    --kubelet-https=true \
    --enable-bootstrap-token-auth \
    --token-auth-file=/etc/kubernetes/token.csv \
    --service-node-port-range=30000-32767 \
    --tls-cert-file=/etc/kubernetes/ssl/kubernetes-server.pem \
    --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-server-key.pem \
    --client-ca-file=/etc/kubernetes/ssl/kubernetes-root-ca.pem \
    --service-account-key-file=/etc/kubernetes/ssl/kubernetes-root-ca.pem \
    --etcd-quorum-read=true \
    --storage-backend=etcd3 \
    --etcd-cafile=/etc/kubernetes/ssl/etcd-root-ca.pem \
    --etcd-certfile=/etc/kubernetes/ssl/etcd-client-kubernetes.pem \
    --etcd-keyfile=/etc/kubernetes/ssl/etcd-client-kubernetes-key.pem \
    --enable-swagger-ui=true \
    --apiserver-count=3 \
    --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
    --audit-log-maxage=30 \
    --audit-log-maxbackup=3 \
    --audit-log-maxsize=100 \
    --audit-log-path=/var/log/kube-audit/audit.log \
    --event-ttl=1h"

--service-account-key-file is used to validate the tokens that service accounts present when calling the kubernetes API; service-account validation uses JWT.
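Once the cluster is up you can inspect that JWT structure yourself; a sketch that decodes the payload of the first secret in the namespace (assuming it is a default service-account token, and ignoring base64 padding errors):

# Dump the payload (second dot-separated segment) of a service-account JWT
kubectl get secret -o jsonpath='{.items[0].data.token}' \
  | base64 -d | cut -d. -f2 | base64 -d 2>/dev/null; echo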

3.3.3. Start

### Create the kube user, which runs the kubernetes services ###
[root@k8s-0 ~]# useradd kube -d /var/lib/kube -s /sbin/nologin -c "Kubernetes user"
### Change directory ownership ###
[root@k8s-0 ~]# chown -Rf kube:kube /etc/kubernetes/
[root@k8s-0 kubernetes]# ls -lR /etc/kubernetes/
/etc/kubernetes/:
-rw-r--r--. 1 kube kube 2172 11月 10 20:06 apiserver
-rw-r--r--. 1 kube kube  113 11月 8 23:42 audit-policy.yaml
-rw-------. 1 kube kube 2265 11月 10 20:03 bootstrap.kubeconfig
-rw-r--r--. 1 kube kube  696 11月 8 23:23 config
drwxr-xr-x. 2 kube kube 4096 11月 10 19:58 ssl
-rw-r--r--. 1 kube kube   80 11月 10 20:00 token.csv

/etc/kubernetes/ssl:
-rw-------. 1 kube kube 1679 11月 10 19:58 etcd-client-kubernetes-key.pem
-rw-r--r--. 1 kube kube 1476 11月 10 19:58 etcd-client-kubernetes.pem
-rw-r--r--. 1 kube kube 1403 11月 10 19:57 etcd-root-ca.pem
-rw-------. 1 kube kube 1675 11月 10 19:46 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 kube kube 1460 11月 10 19:46 kubernetes-client-kubectl.pem
-rw-------. 1 kube kube 1675 11月 10 19:57 kubernetes-root-ca-key.pem
-rw-r--r--. 1 kube kube 1395 11月 10 19:46 kubernetes-root-ca.pem
-rw-------. 1 kube kube 1675 11月 10 19:56 kubernetes-server-key.pem
-rw-r--r--. 1 kube kube 1651 11月 10 19:56 kubernetes-server.pem
[root@k8s-0 ~]# chown kube:kube /usr/local/bin/kube-apiserver
[root@k8s-0 ~]# ls -l /usr/local/bin/kube-apiserver
-rwxr-x---. 1 kube kube 192911402 11月 8 21:59 /usr/local/bin/kube-apiserver
### Start the kube-apiserver service ###
[root@k8s-0 ~]# systemctl start kube-apiserver
[root@k8s-0 ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; disabled; vendor preset: disabled)
   Active: active (running) since 五 2017-11-10 20:13:42 CST; 9s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 3837 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─3837 /usr/local/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=https://etcd-1:2379...
11月 10 20:13:42 k8s-0 kube-apiserver[3837]: I1110 20:13:42.932647    3837 controller_utils.go:1041] W...ller
11月 10 20:13:42 k8s-0 systemd[1]: Started Kubernetes API Server.
11月 10 20:13:42 k8s-0 kube-apiserver[3837]: I1110 20:13:42.944774    3837 customresource_discovery_co...ller
11月 10 20:13:42 k8s-0 kube-apiserver[3837]: I1110 20:13:42.944835    3837 naming_controller.go:277] S...ller
11月 10 20:13:43 k8s-0 kube-apiserver[3837]: I1110 20:13:43.031094    3837 cache.go:39] Caches are syn...ller
11月 10 20:13:43 k8s-0 kube-apiserver[3837]: I1110 20:13:43.034168    3837 controller_utils.go:1048] C...ller
11月 10 20:13:43 k8s-0 kube-apiserver[3837]: I1110 20:13:43.034204    3837 cache.go:39] Caches are syn...ller
11月 10 20:13:43 k8s-0 kube-apiserver[3837]: I1110 20:13:43.039514    3837 autoregister_controller.go:...ller
11月 10 20:13:43 k8s-0 kube-apiserver[3837]: I1110 20:13:43.039527    3837 cache.go:32] Waiting for ca...ller
11月 10 20:13:43 k8s-0 kube-apiserver[3837]: I1110 20:13:43.139810    3837 cache.go:39] Caches are syn...ller
Hint: Some lines were ellipsized, use -l to show in full.

3.4. Installing the kube-controller-manager Service

3.4.1. Installation

[root@k8s-0 ~]# pwd
/root
[root@k8s-0 ~]# cp kubernetes/server/bin/kube-controller-manager /usr/local/bin/
[root@k8s-0 ~]# kube-controller-manager version
I1109 00:08:25.254275    5281 controllermanager.go:109] Version: v1.8.1
W1109 00:08:25.254380    5281 client_config.go:529] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
W1109 00:08:25.254390    5281 client_config.go:534] error creating inClusterConfig, falling back to default config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
invalid configuration: no configuration has been provided
### Create the unit file ###
[root@k8s-0 ~]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=kube
ExecStart=/usr/local/bin/kube-controller-manager \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

3.4.2. Configuring controller-manager

### Configure the /etc/kubernetes/controller-manager file ###
[root@k8s-0 kubernetes]# cat controller-manager
####
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=0.0.0.0 \
    --service-cluster-ip-range=10.254.0.0/16 \
    --cluster-name=kubernetes-cluster \
    --cluster-signing-cert-file=/etc/kubernetes/ssl/kubernetes-root-ca.pem \
    --cluster-signing-key-file=/etc/kubernetes/ssl/kubernetes-root-ca-key.pem \
    --service-account-private-key-file=/etc/kubernetes/ssl/kubernetes-root-ca-key.pem \
    --root-ca-file=/etc/kubernetes/ssl/kubernetes-root-ca.pem \
    --leader-elect=true \
    --node-monitor-grace-period=40s \
    --node-monitor-period=5s \
    --pod-eviction-timeout=5m0s"

--service-account-private-key-file is the private-key counterpart of the earlier --service-account-key-file and is used to sign the JWT tokens.

The --cluster-signing-* pair is used to sign the TLS Bootstrapping certificates and must be trusted by --client-ca-file (in theory, a --cluster-signing CA that is subordinate to --client-ca-file should also be trusted, but in practice this did not behave as expected: the kubelet submitted its CSR successfully, yet the node could not join the cluster. An open question!).
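The approval flow these flags enable looks like this once a kubelet submits its CSR (section 4.2; <csr-name> is a placeholder):

# On the master: list CSRs submitted by bootstrapping kubelets
kubectl get csr
# Approve one; kube-controller-manager signs it with the --cluster-signing-* pair
kubectl certificate approve <csr-name>
# The node should register shortly afterwards
kubectl get nodes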

3.4.3. Start

### Fix the certificate files' ownership ###
[root@k8s-0 kubernetes]# chown -R kube:kube /etc/kubernetes/
[root@k8s-0 kubernetes]# ls -lR /etc/kubernetes/
/etc/kubernetes/:
-rw-r--r--. 1 kube kube 2172 11月 10 20:06 apiserver
-rw-r--r--. 1 kube kube  113 11月 8 23:42 audit-policy.yaml
-rw-------. 1 kube kube 2265 11月 10 20:03 bootstrap.kubeconfig
-rw-r--r--. 1 kube kube  696 11月 8 23:23 config
-rw-r--r--. 1 kube kube  995 11月 10 18:32 controller-manager
drwxr-xr-x. 2 kube kube 4096 11月 10 19:58 ssl
-rw-r--r--. 1 kube kube   80 11月 10 20:00 token.csv

/etc/kubernetes/ssl:
-rw-------. 1 kube kube 1679 11月 10 19:58 etcd-client-kubernetes-key.pem
-rw-r--r--. 1 kube kube 1476 11月 10 19:58 etcd-client-kubernetes.pem
-rw-r--r--. 1 kube kube 1403 11月 10 19:57 etcd-root-ca.pem
-rw-------. 1 kube kube 1675 11月 10 19:46 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 kube kube 1460 11月 10 19:46 kubernetes-client-kubectl.pem
-rw-------. 1 kube kube 1675 11月 10 19:57 kubernetes-root-ca-key.pem
-rw-r--r--. 1 kube kube 1395 11月 10 19:46 kubernetes-root-ca.pem
-rw-------. 1 kube kube 1675 11月 10 19:56 kubernetes-server-key.pem
-rw-r--r--. 1 kube kube 1651 11月 10 19:56 kubernetes-server.pem
[root@k8s-0 kubernetes]# chown kube:kube /usr/local/bin/kube-controller-manager
[root@k8s-0 kubernetes]# ls -l /usr/local/bin/kube-controller-manager
-rwxr-x---. 1 kube kube 128087389 11月 9 00:08 /usr/local/bin/kube-controller-manager
[root@k8s-0 kubernetes]# systemctl start kube-controller-manager
[root@k8s-0 kubernetes]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; disabled; vendor preset: disabled)
   Active: active (running) since 五 2017-11-10 20:14:12 CST; 1s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 3851 (kube-controller)
   CGroup: /system.slice/kube-controller-manager.service
           └─3851 /usr/local/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://k8s-0:808...
11月 10 20:14:13 k8s-0 kube-controller-manager[3851]: I1110 20:14:13.971671    3851 controller_utils.go...ler
11月 10 20:14:13 k8s-0 kube-controller-manager[3851]: I1110 20:14:13.992354    3851 controller_utils.go...ler
11月 10 20:14:14 k8s-0 kube-controller-manager[3851]: I1110 20:14:14.035607    3851 controller_utils.go...ler
11月 10 20:14:14 k8s-0 kube-controller-manager[3851]: I1110 20:14:14.041518    3851 controller_utils.go...ler
11月 10 20:14:14 k8s-0 kube-controller-manager[3851]: I1110 20:14:14.049764    3851 controller_utils.go...ler
11月 10 20:14:14 k8s-0 kube-controller-manager[3851]: I1110 20:14:14.049799    3851 garbagecollector.go...age
11月 10 20:14:14 k8s-0 kube-controller-manager[3851]: I1110 20:14:14.071155    3851 controller_utils.go...ler
11月 10 20:14:14 k8s-0 kube-controller-manager[3851]: I1110 20:14:14.071394    3851 controller_utils.go...ler
11月 10 20:14:14 k8s-0 kube-controller-manager[3851]: I1110 20:14:14.071563    3851 controller_utils.go...ler
11月 10 20:14:14 k8s-0 kube-controller-manager[3851]: I1110 20:14:14.092450    3851 controller_utils.go...ler
Hint: Some lines were ellipsized, use -l to show in full.

3.5. Installing kube-scheduler

3.5.1. Installation

[root@k8s-0 ~]# pwd
/root
[root@k8s-0 ~]# cp kubernetes/server/bin/kube-scheduler /usr/local/bin/
### Create the unit file ###
[root@k8s-0 ~]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=kube
ExecStart=/usr/local/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

3.5.2. Configuration

### Configure the /etc/kubernetes/scheduler file ###
[root@k8s-0 ~]# cat /etc/kubernetes/scheduler
####
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=0.0.0.0"

3.5.3. Start

### Fix the binary's ownership ###
[root@k8s-0 ~]# chown kube:kube /usr/local/bin/kube-scheduler
[root@k8s-0 ~]# ls -l /usr/local/bin/kube-scheduler
-rwxr-x---. 1 kube kube 53754721 11月 9 01:04 /usr/local/bin/kube-scheduler
### Start the service ###
[root@k8s-0 ~]# systemctl start kube-scheduler
[root@k8s-0 ~]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler Plugin
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; disabled; vendor preset: disabled)
   Active: active (running) since 五 2017-11-10 20:14:24 CST; 4s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 3862 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─3862 /usr/local/bin/kube-scheduler --logtostderr=true --v=0 --master=http://k8s-0:8080 --leade...
11月 10 20:14:24 k8s-0 systemd[1]: Started Kubernetes Scheduler Plugin.
11月 10 20:14:24 k8s-0 systemd[1]: Starting Kubernetes Scheduler Plugin...
11月 10 20:14:24 k8s-0 kube-scheduler[3862]: I1110 20:14:24.904984    3862 controller_utils.go:1041] W...ller
11月 10 20:14:25 k8s-0 kube-scheduler[3862]: I1110 20:14:25.005451    3862 controller_utils.go:1048] C...ller
11月 10 20:14:25 k8s-0 kube-scheduler[3862]: I1110 20:14:25.005533    3862 leaderelection.go:174] atte...e...
11月 10 20:14:25 k8s-0 kube-scheduler[3862]: I1110 20:14:25.015298    3862 leaderelection.go:184] succ...uler
11月 10 20:14:25 k8s-0 kube-scheduler[3862]: I1110 20:14:25.015761    3862 event.go:218] Event(v1.Obje...ader
Hint: Some lines were ellipsized, use -l to show in full.

3.6. Checking the Master Node Service Status

[root@k8s-0 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}

If you see the output above, the services on the Master node are running correctly and kubectl is communicating with the cluster.

Further reading:

1. What is a JWT

2. Certificate authentication in Kubernetes

4. Deploying the Kubernetes Minion Nodes

The Kubernetes services running on the Minion (worker) nodes are kubelet and kube-proxy. Because TLS Bootstrapping was configured earlier, kubelet's certificate is issued automatically by the Master, provided kubelet passes token authentication and obtains the "system:bootstrappers" group permission. kube-proxy is an ordinary client and needs a client certificate issued from the root certificate (client-ca.pem).

4.1. Installing docker

### Install with yum; first check the available versions ###
[root@k8s-1 ~]# yum list | grep docker-common
[root@k8s-1 ~]# yum list | grep docker
docker-client.x86_64                  2:1.12.6-61.git85d7426.el7.centos
docker-common.x86_64                  2:1.12.6-61.git85d7426.el7.centos
cockpit-docker.x86_64                 151-1.el7.centos                    extras
docker.x86_64                         2:1.12.6-61.git85d7426.el7.centos
docker-client-latest.x86_64           1.13.1-26.git1faa135.el7.centos
docker-devel.x86_64                   1.3.2-4.el7.centos                  extras
docker-distribution.x86_64            2.6.2-1.git48294d9.el7              extras
docker-forward-journald.x86_64        1.10.3-44.el7.centos                extras
docker-latest.x86_64                  1.13.1-26.git1faa135.el7.centos
docker-latest-logrotate.x86_64        1.13.1-26.git1faa135.el7.centos
docker-latest-v1.10-migrator.x86_64   1.13.1-26.git1faa135.el7.centos
docker-logrotate.x86_64               2:1.12.6-61.git85d7426.el7.centos
docker-lvm-plugin.x86_64              2:1.12.6-61.git85d7426.el7.centos
docker-novolume-plugin.x86_64         2:1.12.6-61.git85d7426.el7.centos
docker-python.x86_64                  1.4.0-115.el7                       extras
docker-registry.x86_64                0.9.1-7.el7                         extras
docker-unit-test.x86_64               2:1.12.6-61.git85d7426.el7.centos
docker-v1.10-migrator.x86_64          2:1.12.6-61.git85d7426.el7.centos
pcp-pmda-docker.x86_64                3.11.8-7.el7                        base
python-docker-py.noarch               1.10.6-3.el7                        extras
python-docker-pycreds.noarch          1.10.6-3.el7                        extras
### Install docker ###
[root@k8s-1 ~]# yum install -y docker
### Start the docker service ###
[root@k8s-1 ~]# systemctl start docker
[root@k8s-1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: active (running) since 四 2017-11-09 02:29:12 CST; 14s ago
     Docs: http://docs.docker.com
 Main PID: 4833 (dockerd-current)
   CGroup: /system.slice/docker.service
           ├─4833 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-curren...
           └─4837 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-contain...
11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.015481471+08:00" level=info ms...ase"
11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.049872681+08:00" level=info ms...nds"
11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.050724567+08:00" level=info ms...rt."
11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.068030608+08:00" level=info ms...lse"
11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.128054846+08:00" level=info ms...ess"
11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.189306705+08:00" level=info ms...ne."
11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.189594801+08:00" level=info ms...ion"
11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.189610061+08:00" level=info ms...12.6
11月 09 02:29:12 k8s-1 systemd[1]: Started Docker Application Container Engine.
11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.210430475+08:00" level=info ms...ock"
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-1 ~]# ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:39:3d:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.181/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:cd:de:a1:b0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever

The docker info command shows which Cgroup Driver docker is using.
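For a quick check (the value must match the --cgroup-driver flag passed to kubelet later; docker from the CentOS repos normally reports systemd):

[root@k8s-1 ~]# docker info 2>/dev/null | grep -i "cgroup driver"
Cgroup Driver: systemd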

4.2. Installing the kubelet service

The kubernetes-server-linux-amd64.tar.gz archive downloaded earlier on the master node contains all the binaries needed to deploy the kubernetes cluster.
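For reference, the binaries can be listed straight from the unpacked archive (output abbreviated; the archive is assumed to have been unpacked under /root on the master):

[root@k8s-0 ~]# ls kubernetes/server/bin/
kube-apiserver  kube-controller-manager  kube-proxy  kube-scheduler  kubeadm  kubectl  kubelet  ...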

4.2.1. Installation

### Copy the kubelet binary to the mission node ###
[root@k8s-0 ~]# scp kubernetes/server/bin/kubelet root@k8s-1:/usr/local/bin/

### Create the unit file ###
[root@k8s-1 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

4.2.2. Configuration

4.2.2.1. Prepare the bootstrap.kubeconfig file

### Create the directory ###
[root@k8s-1 ~]# mkdir -p /etc/kubernetes/ssl

### Copy bootstrap.kubeconfig from the master node to the mission node ###
[root@k8s-0 ~]# scp /etc/kubernetes/bootstrap.kubeconfig root@k8s-1:/etc/kubernetes/
[root@k8s-1 ~]# ls -l /etc/kubernetes/
-rw-------. 1 root root 2265 11月 10 20:26 bootstrap.kubeconfig
drwxr-xr-x. 2 root root    6 11月  9 02:34 ssl

4.2.2.2. Bind a role to kubelet-bootstrap, the TLS Bootstrapping user

[root@k8s-0 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding "kubelet-bootstrap" created

kubelet-bootstrap is the user specified in the TLS Bootstrapping token file. If no role is bound to this user, it cannot submit a CSR, and starting kubelet may fail with an error like:

error: failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
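If in doubt, the binding can be verified from the master before starting kubelet (a routine check, not part of the original transcript):

[root@k8s-0 ~]# kubectl describe clusterrolebinding kubelet-bootstrap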

4.2.2.3. Configure config

[root@k8s-1 kubernetes]# pwd
/etc/kubernetes
[root@k8s-1 kubernetes]# cat config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://127.0.0.1:8080"

4.2.2.4. Configure kubelet

[root@k8s-1 kubernetes]# pwd
/etc/kubernetes
[root@k8s-1 kubernetes]# cat kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.119.181"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=kubernetes-mision-1"

# location of the api-server
# KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"

# Add your own!
KUBELET_ARGS="--cgroup-driver=systemd \
              --cluster-dns=10.254.0.2 \
              --resolv-conf=/etc/resolv.conf \
              --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
              --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
              --fail-swap-on=false \
              --cert-dir=/etc/kubernetes/ssl \
              --cluster-domain=cluster.local. \
              --hairpin-mode=promiscuous-bridge \
              --serialize-image-pulls=false \
              --runtime-cgroups=/systemd/system.slice \
              --kubelet-cgroups=/systemd/system.slice"
[root@k8s-1 kubernetes]# ls -l
-rw-------. 1 root root 2265 11月 10 20:26 bootstrap.kubeconfig
-rw-r--r--. 1 root root  655 11月  9 02:48 config
-rw-r--r--. 1 root root 1205 11月 10 15:40 kubelet
drwxr-xr-x. 2 root root    6 11月 10 17:46 ssl

4.2.3. Startup

### Start the kubelet service ###
[root@k8s-1 kubernetes]# systemctl start kubelet
[root@k8s-1 kubernetes]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
   Active: active (running) since 五 2017-11-10 20:27:39 CST; 7s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 3837 (kubelet)
   CGroup: /system.slice/kubelet.service
           └─3837 /usr/local/bin/kubelet --logtostderr=true --v=0 --address=192.168.119.181 --hostname-over...
11月 10 20:27:39 k8s-1 systemd[1]: Started Kubernetes Kubelet Server.
11月 10 20:27:39 k8s-1 systemd[1]: Starting Kubernetes Kubelet Server...
11月 10 20:27:39 k8s-1 kubelet[3837]: I1110 20:27:39.543227  3837 feature_gate.go:156] feature gates: map[]
11月 10 20:27:39 k8s-1 kubelet[3837]: I1110 20:27:39.543479  3837 controller.go:114] kubelet config...oller
11月 10 20:27:39 k8s-1 kubelet[3837]: I1110 20:27:39.543483  3837 controller.go:118] kubelet config...flags
11月 10 20:27:40 k8s-1 kubelet[3837]: I1110 20:27:40.064289  3837 client.go:75] Connecting to docke....sock
11月 10 20:27:40 k8s-1 kubelet[3837]: I1110 20:27:40.064322  3837 client.go:95] Start docker client...=2m0s
11月 10 20:27:40 k8s-1 kubelet[3837]: W1110 20:27:40.067246  3837 cni.go:196] Unable to update cni ...net.d
11月 10 20:27:40 k8s-1 kubelet[3837]: I1110 20:27:40.076866  3837 feature_gate.go:156] feature gates: map[]
11月 10 20:27:40 k8s-1 kubelet[3837]: W1110 20:27:40.076980  3837 server.go:289] --cloud-provider=a...citly
Hint: Some lines were ellipsized, use -l to show in full.

4.2.4. Query and approve the CSR

[root@k8s-0 kubernetes]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-ZWf_4Q3ljwNqPJ_KKxLnS2s5ddOa5nCw9b0o0FLbPro   10s       kubelet-bootstrap   Pending
[root@k8s-0 kubernetes]# kubectl certificate approve node-csr-ZWf_4Q3ljwNqPJ_KKxLnS2s5ddOa5nCw9b0o0FLbPro
certificatesigningrequest "node-csr-ZWf_4Q3ljwNqPJ_KKxLnS2s5ddOa5nCw9b0o0FLbPro" approved
[root@k8s-0 kubernetes]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-ZWf_4Q3ljwNqPJ_KKxLnS2s5ddOa5nCw9b0o0FLbPro   2m        kubelet-bootstrap   Approved,Issued
[root@k8s-0 kubernetes]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
k8s-1     Ready     <none>    6s        v1.8.1

4.3. Installing the kube-proxy service

4.3.1. Installation

[root@k8s-0 ~]# pwd
/root
[root@k8s-0 ~]# scp kubernetes/server/bin/kube-proxy root@k8s-1:/usr/local/bin/
[root@k8s-1 kubernetes]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

4.3.2. Configuration

4.3.2.1. Sign the kube-proxy certificate

[root@k8s-0 kubernetes]# pwd
/root/cfssl/kubernetes
[root@k8s-0 kubernetes]# cat kubernetes-client-proxy-csr.json
{
    "CN": "system:kube-proxy",
    "hosts": [
        "localhost",
        "127.0.0.1",
        "k8s-1",
        "k8s-2",
        "k8s-3"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Wuhan",
            "ST": "Hubei",
            "O": "Dameng",
            "OU": "system"
        }
    ]
}
[root@k8s-0 kubernetes]# cfssl gencert -ca=kubernetes-root-ca.pem -ca-key=kubernetes-root-ca-key.pem -config=ca-config.json -profile=client kubernetes-client-proxy-csr.json | cfssljson -bare kubernetes-client-proxy
2017/11/10 20:50:27 [INFO] generate received request
2017/11/10 20:50:27 [INFO] received CSR
2017/11/10 20:50:27 [INFO] generating key: rsa-2048
2017/11/10 20:50:28 [INFO] encoded CSR
2017/11/10 20:50:28 [INFO] signed certificate with serial number 319926141282708642124995329378952678953790336868
2017/11/10 20:50:28 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-0 kubernetes]# ls -l
-rw-r--r--. 1 root root  833 11月 10 16:29 ca-config.json
-rw-r--r--. 1 root root 1086 11月 10 19:28 kubernetes-client-kubectl.csr
-rw-r--r--. 1 root root  356 11月 10 18:17 kubernetes-client-kubectl-csr.json
-rw-------. 1 root root 1675 11月 10 19:28 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 root root 1460 11月 10 19:28 kubernetes-client-kubectl.pem
-rw-r--r--. 1 root root 1074 11月 10 20:50 kubernetes-client-proxy.csr
-rw-r--r--. 1 root root  347 11月 10 20:48 kubernetes-client-proxy-csr.json
-rw-------. 1 root root 1679 11月 10 20:50 kubernetes-client-proxy-key.pem
-rw-r--r--. 1 root root 1452 11月 10 20:50 kubernetes-client-proxy.pem
-rw-r--r--. 1 root root 1021 11月 10 19:20 kubernetes-root-ca.csr
-rw-r--r--. 1 root root  279 11月 10 18:04 kubernetes-root-ca-csr.json
-rw-------. 1 root root 1675 11月 10 19:20 kubernetes-root-ca-key.pem
-rw-r--r--. 1 root root 1395 11月 10 19:20 kubernetes-root-ca.pem
-rw-r--r--. 1 root root 1277 11月 10 19:42 kubernetes-server.csr
-rw-r--r--. 1 root root  556 11月 10 19:40 kubernetes-server-csr.json
-rw-------. 1 root root 1675 11月 10 19:42 kubernetes-server-key.pem
-rw-r--r--. 1 root root 1651 11月 10 19:42 kubernetes-server.pem
[root@k8s-0 kubernetes]# cp kubernetes-client-proxy.pem kubernetes-client-proxy-key.pem /etc/kubernetes/ssl/
[root@k8s-0 kubernetes]# ls -l /etc/kubernetes/ssl/
-rw-------. 1 kube kube 1679 11月 10 19:58 etcd-client-kubernetes-key.pem
-rw-r--r--. 1 kube kube 1476 11月 10 19:58 etcd-client-kubernetes.pem
-rw-r--r--. 1 kube kube 1403 11月 10 19:57 etcd-root-ca.pem
-rw-------. 1 kube kube 1675 11月 10 19:46 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 kube kube 1460 11月 10 19:46 kubernetes-client-kubectl.pem
-rw-------. 1 root root 1679 11月 10 20:51 kubernetes-client-proxy-key.pem
-rw-r--r--. 1 root root 1452 11月 10 20:51 kubernetes-client-proxy.pem
-rw-------. 1 kube kube 1675 11月 10 19:57 kubernetes-root-ca-key.pem
-rw-r--r--. 1 kube kube 1395 11月 10 19:46 kubernetes-root-ca.pem
-rw-------. 1 kube kube 1675 11月 10 19:56 kubernetes-server-key.pem
-rw-r--r--. 1 kube kube 1651 11月 10 19:56 kubernetes-server.pem

4.3.2.2. Configure the kube-proxy.kubeconfig file

[root@k8s-0 kubernetes]# pwd
/etc/kubernetes
[root@k8s-0 kubernetes]# kubectl config set-cluster kubernetes-cluster --certificate-authority=/etc/kubernetes/ssl/kubernetes-root-ca.pem --embed-certs=true --server="https://k8s-0:6443" --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes-cluster" set.
[root@k8s-0 kubernetes]# kubectl config set-credentials kube-proxy --client-certificate=/etc/kubernetes/ssl/kubernetes-client-proxy.pem --client-key=/etc/kubernetes/ssl/kubernetes-client-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
[root@k8s-0 kubernetes]# kubectl config set-context kube-proxy --cluster=kubernetes-cluster --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
Context "kube-proxy" created.
[root@k8s-0 kubernetes]# kubectl config use-context kube-proxy --kubeconfig=kube-proxy.kubeconfig
Switched to context "kube-proxy".
[root@k8s-0 kubernetes]# ls -l
-rw-r--r--. 1 kube kube 2172 11月 10 20:06 apiserver
-rw-r--r--. 1 kube kube  113 11月  8 23:42 audit-policy.yaml
-rw-------. 1 kube kube 2265 11月 10 20:03 bootstrap.kubeconfig
-rw-r--r--. 1 kube kube  696 11月  8 23:23 config
-rw-r--r--. 1 kube kube  991 11月 10 20:35 controller-manager
-rw-------. 1 root root 6421 11月 10 20:57 kube-proxy.kubeconfig
-rw-r--r--. 1 kube kube  148 11月  9 01:07 scheduler
drwxr-xr-x. 2 kube kube 4096 11月 10 20:51 ssl
-rw-r--r--. 1 kube kube   80 11月 10 20:00 token.csv
[root@k8s-0 kubernetes]# scp kube-proxy.kubeconfig root@k8s-1:/etc/kubernetes/

4.3.2.3. Configure proxy

[root@k8s-1 kubernetes]# pwd
/etc/kubernetes
[root@k8s-1 kubernetes]# cat proxy
###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.119.181 \
                 --hostname-override=kubernetes-mision-1 \
                 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
                 --cluster-cidr=10.254.0.0/16"

4.3.2.4. Start kube-proxy

[root@k8s-1 kubernetes]# systemctl start kube-proxy
[root@k8s-1 kubernetes]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; disabled; vendor preset: disabled)
   Active: active (running) since 五 2017-11-10 21:07:08 CST; 6s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 4334 (kube-proxy)
   CGroup: /system.slice/kube-proxy.service
           ‣ 4334 /usr/local/bin/kube-proxy --logtostderr=true --v=0 --bind-address=192.168.119.181 --hostn...
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.159055  4334 conntrack.go:98] Set sysctl 'ne...1072
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.159083  4334 conntrack.go:52] Setting nf_con...1072
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.159107  4334 conntrack.go:98] Set sysctl 'ne...6400
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.159120  4334 conntrack.go:98] Set sysctl 'ne...3600
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.159958  4334 config.go:202] Starting service...ller
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.159968  4334 controller_utils.go:1041] Waiti...ller
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.160091  4334 config.go:102] Starting endpoin...ller
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.160101  4334 controller_utils.go:1041] Waiti...ller
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.260670  4334 controller_utils.go:1048] Cache...ller
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.260862  4334 controller_utils.go:1048] Cache...ller
Hint: Some lines were ellipsized, use -l to show in full.

4.4. Other mission nodes (adding nodes to an existing cluster)

Deployment method: repeat sections 4.1 through 4.3; the only difference is that the certificates and kubeconfig files do not need to be generated again.

4.4.1. Installation

[root@k8s-2 ~]# yum install -y docker
[root@k8s-0 ~]# scp kubernetes/server/bin/kubelet root@k8s-2:/usr/local/bin/
[root@k8s-0 ~]# scp kubernetes/server/bin/kube-proxy root@k8s-2:/usr/local/bin/
[root@k8s-2 ~]# mkdir -p /etc/kubernetes/ssl/
[root@k8s-0 ~]# scp /etc/kubernetes/bootstrap.kubeconfig root@k8s-2:/etc/kubernetes/
[root@k8s-0 ~]# scp /etc/kubernetes/kube-proxy.kubeconfig root@k8s-2:/etc/kubernetes/
[root@k8s-2 ~]# ls -lR /etc/kubernetes/
/etc/kubernetes/:
-rw-------. 1 root root 2265 11月 12 11:49 bootstrap.kubeconfig
-rw-------. 1 root root 6453 11月 12 11:50 kube-proxy.kubeconfig
drwxr-xr-x. 2 root root    6 11月 12 11:49 ssl

/etc/kubernetes/ssl:
[root@k8s-2 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
[root@k8s-2 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

4.4.2. Configuration

[root@k8s-2 kubernetes]# cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://127.0.0.1:8080"
[root@k8s-2 kubernetes]# cat /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.119.182"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-2"

# location of the api-server
# KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"

# Add your own!
KUBELET_ARGS="--cgroup-driver=systemd \
              --cluster-dns=10.254.0.2 \
              --resolv-conf=/etc/resolv.conf \
              --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
              --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
              --fail-swap-on=false \
              --cert-dir=/etc/kubernetes/ssl \
              --cluster-domain=cluster.local. \
              --hairpin-mode=promiscuous-bridge \
              --serialize-image-pulls=false \
              --runtime-cgroups=/systemd/system.slice \
              --kubelet-cgroups=/systemd/system.slice"
[root@k8s-2 kubernetes]# cat /etc/kubernetes/proxy
###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.119.182 \
                 --hostname-override=kubernetes-mision-1 \
                 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
                 --cluster-cidr=10.254.0.0/16"
[root@k8s-2 kubernetes]# ls -lR /etc/kubernetes/
/etc/kubernetes/:
-rw-------. 1 root root 2265 11月 12 11:49 bootstrap.kubeconfig
-rw-r--r--. 1 root root  655 11月 12 11:57 config
-rw-r--r--. 1 root root 1205 11月 12 11:59 kubelet
-rw-------. 1 root root 6453 11月 12 11:50 kube-proxy.kubeconfig
-rw-r--r--. 1 root root  310 11月 12 12:16 proxy
drwxr-xr-x. 2 root root    6 11月 12 11:49 ssl

/etc/kubernetes/ssl:

4.4.3. Startup

[root@k8s-2 ~]# systemctl start kubelet
[root@k8s-2 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
   Active: active (running) since 日 2017-11-12 12:11:03 CST; 3s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 2301 (kubelet)
   CGroup: /system.slice/kubelet.service
           └─2301 /usr/local/bin/kubelet --logtostderr=true --v=0 --address=192.168.119.182 --hostname-over...
11月 12 12:11:03 k8s-2 systemd[1]: Started Kubernetes Kubelet Server.
11月 12 12:11:03 k8s-2 systemd[1]: Starting Kubernetes Kubelet Server...
11月 12 12:11:03 k8s-2 kubelet[2301]: I1112 12:11:03.868488  2301 feature_gate.go:156] feature gates: map[]
11月 12 12:11:03 k8s-2 kubelet[2301]: I1112 12:11:03.868848  2301 controller.go:114] kubelet config...oller
11月 12 12:11:03 k8s-2 kubelet[2301]: I1112 12:11:03.868855  2301 controller.go:118] kubelet config...flags
11月 12 12:11:03 k8s-2 kubelet[2301]: I1112 12:11:03.881541  2301 client.go:75] Connecting to docke....sock
11月 12 12:11:03 k8s-2 kubelet[2301]: I1112 12:11:03.881640  2301 client.go:95] Start docker client...=2m0s
11月 12 12:11:03 k8s-2 kubelet[2301]: W1112 12:11:03.891364  2301 cni.go:196] Unable to update cni ...net.d
11月 12 12:11:03 k8s-2 kubelet[2301]: I1112 12:11:03.911496  2301 feature_gate.go:156] feature gates: map[]
11月 12 12:11:03 k8s-2 kubelet[2301]: W1112 12:11:03.911626  2301 server.go:289] --cloud-provider=a...citly
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-0 ~]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-ZWf_4Q3ljwNqPJ_KKxLnS2s5ddOa5nCw9b0o0FLbPro   1d        kubelet-bootstrap   Approved,Issued
node-csr-vB90SM4Qb4tW36zoSAf5lZZ8q8fB3mOF2g8VL06gjBo   32s       kubelet-bootstrap   Pending
[root@k8s-0 ~]# kubectl certificate approve node-csr-vB90SM4Qb4tW36zoSAf5lZZ8q8fB3mOF2g8VL06gjBo
certificatesigningrequest "node-csr-vB90SM4Qb4tW36zoSAf5lZZ8q8fB3mOF2g8VL06gjBo" approved
[root@k8s-0 ~]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-ZWf_4Q3ljwNqPJ_KKxLnS2s5ddOa5nCw9b0o0FLbPro   1d        kubelet-bootstrap   Approved,Issued
node-csr-vB90SM4Qb4tW36zoSAf5lZZ8q8fB3mOF2g8VL06gjBo   1m        kubelet-bootstrap   Approved,Issued
[root@k8s-0 ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
k8s-1     Ready     <none>    1d        v1.8.1
k8s-2     Ready     <none>    10s       v1.8.1
[root@k8s-2 kubernetes]# systemctl start kube-proxy
[root@k8s-2 kubernetes]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; disabled; vendor preset: disabled)
   Active: active (running) since 日 2017-11-12 12:18:05 CST; 3s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 2531 (kube-proxy)
   CGroup: /system.slice/kube-proxy.service
           ‣ 2531 /usr/local/bin/kube-proxy --logtostderr=true --v=0 --bind-address=192.168.119.182 --hostn...
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.516895  2531 conntrack.go:52] Setting nf_con...1072
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.526725  2531 conntrack.go:83] Setting conntr...2768
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.526946  2531 conntrack.go:98] Set sysctl 'ne...6400
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.526977  2531 conntrack.go:98] Set sysctl 'ne...3600
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.527725  2531 config.go:202] Starting service...ller
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.527735  2531 controller_utils.go:1041] Waiti...ller
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.527777  2531 config.go:102] Starting endpoin...ller
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.527781  2531 controller_utils.go:1041] Waiti...ller
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.628035  2531 controller_utils.go:1048] Cache...ller
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.628125  2531 controller_utils.go:1048] Cache...ller
Hint: Some lines were ellipsized, use -l to show in full.

5. Installing and configuring Flanneld

Flannel is a network fabric designed for Kubernetes by the CoreOS team. In short, its job is to give the Docker containers created on different nodes of a cluster virtual IP addresses that are unique across the whole cluster.

With the default Docker configuration, the Docker daemon on each node assigns IP addresses to that node's containers on its own, so containers on different nodes can end up with the same address. Flannel re-plans IP address allocation for all nodes in the cluster, so containers on different nodes get non-overlapping addresses "within the same internal network" and can talk to each other directly over those internal IPs.
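Once flanneld is running (section 5.4 below), the subnet leased to a node can be inspected locally. A minimal check, assuming the stock CentOS flannel packaging, which writes its environment to /run/flannel/subnet.env; the values shown mirror this cluster's 10.2.0.0/16 vxlan configuration:

[root@k8s-1 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.2.0.0/16
FLANNEL_SUBNET=10.2.81.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false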

5.1. Installation

[root@k8s-1 images]# yum list | grep flannel
flannel.x86_64    0.7.1-2.el7    extras
[root@k8s-1 images]# yum install -y flannel
[root@k8s-1 ~]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start $FLANNEL_ETCD_ENDPOINTS $FLANNEL_ETCD_PREFIX $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

5.2. Certificates

flannel needs to connect to etcd, so it needs etcd client certificates. Here we directly reuse the client certificates used earlier for testing etcd, stored under /etc/kubernetes/ssl.

[root@k8s-0 ssl]# scp -r etcd-* root@k8s-1:/etc/kubernetes/ssl/
etcd-client-kubernetes-key.pem                100% 1679   1.6KB/s   00:00
etcd-client-kubernetes.pem                    100% 1476   1.4KB/s   00:00
etcd-root-ca.pem                              100% 1403   1.4KB/s   00:00
[root@k8s-1 ~]# ls -l /etc/kubernetes/ssl/
-rw-------. 1 root root 1679 11月 12 18:50 etcd-client-kubernetes-key.pem
-rw-r--r--. 1 root root 1476 11月 12 18:50 etcd-client-kubernetes.pem
-rw-r--r--. 1 root root 1403 11月 12 18:50 etcd-root-ca.pem
-rw-r--r--. 1 root root 1054 11月 10 20:39 kubelet-client.crt
-rw-------. 1 root root  227 11月 10 20:38 kubelet-client.key
-rw-r--r--. 1 root root 1094 11月 10 20:38 kubelet.crt
-rw-------. 1 root root 1679 11月 10 20:38 kubelet.key

5.3. Configuration

[root@k8s-1 ~]# cat /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="-etcd-endpoints=https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379"

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="-etcd-prefix=/atomic.io/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/etcd-root-ca.pem -etcd-certfile=/etc/kubernetes/ssl/etcd-client-kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/etcd-client-kubernetes-key.pem"
[root@k8s-0 etcd]# pwd
/root/cfssl/etcd
[root@k8s-0 etcd]# etcdctl --ca-file=ca.pem --cert-file=certificates-client.pem --key-file=certificates-client-key.pem --endpoints=https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379 mk /atomic.io/network/config '{ "Network": "10.2.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'
{ "Network": "10.2.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}

5.4. Startup

[root@k8s-1 ~]# systemctl start flanneld
[root@k8s-1 ~]# systemctl status flanneld
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; disabled; vendor preset: disabled)
   Active: active (running) since 一 2017-11-13 10:37:49 CST; 1min 12s ago
  Process: 2255 ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker (code=exited, status=0/SUCCESS)
 Main PID: 2243 (flanneld)
   CGroup: /system.slice/flanneld.service
           └─2243 /usr/bin/flanneld -etcd-endpoints=-etcd-endpoints=https://etcd-1:2379,https://etcd-2:2379...
11月 13 10:37:49 k8s-1 flanneld[2243]: warning: ignoring ServerName for user-provided CA for backwards...ated
11月 13 10:37:49 k8s-1 flanneld-start[2243]: I1113 10:37:49.405403  2243 main.go:132] Installing sig...lers
11月 13 10:37:49 k8s-1 flanneld-start[2243]: I1113 10:37:49.407174  2243 manager.go:136] Determining...face
11月 13 10:37:49 k8s-1 flanneld-start[2243]: I1113 10:37:49.407880  2243 manager.go:149] Using inter....181
11月 13 10:37:49 k8s-1 flanneld-start[2243]: I1113 10:37:49.407942  2243 manager.go:166] Defaulting ...181)
11月 13 10:37:49 k8s-1 flanneld-start[2243]: I1113 10:37:49.459398  2243 local_manager.go:134] Found...sing
11月 13 10:37:49 k8s-1 flanneld-start[2243]: I1113 10:37:49.464776  2243 manager.go:250] Lease acqui...0/24
11月 13 10:37:49 k8s-1 flanneld-start[2243]: I1113 10:37:49.465318  2243 network.go:58] Watching for...sses
11月 13 10:37:49 k8s-1 flanneld-start[2243]: I1113 10:37:49.465327  2243 network.go:66] Watching for...ases
11月 13 10:37:49 k8s-1 systemd[1]: Started Flanneld overlay address etcd agent.
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:39:3d:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.181/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe39:3d4c/64 scope link
       valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
    link/ether 4a:77:38:8c:94:ce brd ff:ff:ff:ff:ff:ff
    inet 10.2.81.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever

### Restart docker; normally flannel.1 and docker0 end up in the same network segment ###
[root@k8s-1 ~]# systemctl restart docker
[root@k8s-1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:39:3d:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.181/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe39:3d4c/64 scope link
       valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
    link/ether 4a:77:38:8c:94:ce brd ff:ff:ff:ff:ff:ff
    inet 10.2.81.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:ed:34:86:6c brd ff:ff:ff:ff:ff:ff
    inet 10.2.81.1/24 scope global docker0
       valid_lft forever preferred_lft forever

The remaining two nodes are set up exactly the same way; the only difference is that the network configuration does not need to be registered in etcd again!
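To double-check what the new nodes will read, the key can be queried with the same etcdctl client options used above (a verification step, not from the original transcript):

[root@k8s-0 etcd]# etcdctl --ca-file=ca.pem --cert-file=certificates-client.pem --key-file=certificates-client-key.pem --endpoints=https://etcd-1:2379 get /atomic.io/network/config
{ "Network": "10.2.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}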

6. Deploying the traefik-ingress service

An ingress exposes services inside the cluster to the outside world. Here the ingress controller attaches directly to the physical network via hostNetwork, receives external requests, and forwards them into the cluster according to the configured rules.

6.1. YAML file contents

[root@k8s-0 addons]# pwd
/root/yml/addons
[root@k8s-0 addons]# cat traefik-ingress.yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      nodeSelector:
        edgenode: "true"
      containers:
      - image: docker.io/traefik:v1.4.1
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
        securityContext:
          privileged: true
        args:
        - -d
        - --web
        - --kubernetes
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik-ui.minikube
    http:
      paths:
      - backend:
          serviceName: traefik-web-ui
          servicePort: 80

6.2. Labeling the edge nodes

Edge nodes are the nodes that receive external requests. A Kubernetes cluster may consist of dozens or even hundreds of nodes, but the ingress only needs to be deployed on a subset of them; those nodes are what we call edge nodes.

[root@k8s-0 addons]# kubectl label nodes k8s-1 edgenode=true
node "k8s-1" labeled
[root@k8s-0 addons]# kubectl label nodes k8s-2 edgenode=true
node "k8s-2" labeled
[root@k8s-0 addons]# kubectl label nodes k8s-3 edgenode=true
node "k8s-3" labeled

In traefik-ingress.yaml, the nodeSelector determines which nodes the service is deployed on.

6.3. Starting the service

[root@k8s-0 addons]# kubectl create -f traefik-ingress.yaml
clusterrole "traefik-ingress-controller" created
clusterrolebinding "traefik-ingress-controller" created
serviceaccount "traefik-ingress-controller" created
daemonset "traefik-ingress-controller" created
service "traefik-web-ui" created
ingress "traefik-web-ui" created
[root@k8s-0 addons]# kubectl get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
traefik-ingress-controller-gnnn8   1/1       Running   0          1m
traefik-ingress-controller-v6c86   1/1       Running   0          1m
traefik-ingress-controller-wtmf8   1/1       Running   0          1m
[root@k8s-0 addons]# kubectl get all -n kube-system
NAME                            DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ds/traefik-ingress-controller   3         3         3         3            3           edgenode=true   8m

NAME                                  READY     STATUS    RESTARTS   AGE
po/traefik-ingress-controller-gnnn8   1/1       Running   0          2m
po/traefik-ingress-controller-v6c86   1/1       Running   0          2m
po/traefik-ingress-controller-wtmf8   1/1       Running   0          2m

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc/traefik-web-ui   ClusterIP   10.254.59.121   <none>        80/TCP    8m

6.4. Testing


Here we access the service on a mission node directly; you can also add traefik-ui.minikube to your hosts file and access it by name.
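The routing rule can also be exercised from the command line without touching the hosts file, by setting the Host header explicitly (any edge node IP works; this check is illustrative, not from the original transcript):

### The Host header must match the ingress rule; a successful request returns the Traefik dashboard HTML ###
[root@k8s-0 ~]# curl -s -H "Host: traefik-ui.minikube" http://192.168.119.181/ | head -n 3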

6.5. Deploying keepalived

keepalived creates a VIP shared across multiple physical nodes. The VIP exists on only one node at a time; if that node goes down, the VIP automatically floats to another node and service continues, which is how HA is achieved.
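As a debugging aid, the VRRP advertisements sent by the current MASTER can be observed from any edge node (tcpdump assumed to be installed; sample output, abbreviated):

[root@k8s-2 ~]# tcpdump -i ens33 -n vrrp
IP 192.168.119.181 > 224.0.0.18: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20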

6.5.1. Installation

[root@k8s-1 ~]# wget http://www.keepalived.org/software/keepalived-1.3.9.tar.gz
[root@k8s-1 ~]# tar -zxvf keepalived-1.3.9.tar.gz
[root@k8s-1 ~]# yum -y install gcc
[root@k8s-1 ~]# yum -y install openssl-devel
[root@k8s-1 ~]# cd keepalived-1.3.9
[root@k8s-1 keepalived-1.3.9]# ./configure
[root@k8s-1 keepalived-1.3.9]# make
[root@k8s-1 keepalived-1.3.9]# make install

On CentOS 7 it is recommended to build keepalived from source. An earlier attempt to install it via rpm produced a service that would not start; /var/log/messages showed: keepalived[2198]: segfault at 0 ip (null) sp 00007ffed57ac318 error 14 in libnss_files-2.17.so

Before building from source, install gcc and openssl-devel.

k8s-2 and k8s-3 are installed the same way; the service only needs to be installed on the edge nodes.

6.5.2. Configuration

[root@k8s-1 keepalived-1.3.9]# mkdir -p /etc/keepalived/
[root@k8s-1 keepalived-1.3.9]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from kaadmin@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.119.100
    }
}

virtual_server 192.168.119.100 80 {
    delay_loop 6
    lb_algo loadbalance
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP

    real_server 192.168.119.181 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.119.182 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.119.183 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}

state indicates the role; the node whose state is MASTER acquires the VIP when the service starts. When the MASTER node goes down, the remaining nodes elect the new VIP holder based on priority.

When configuring, set state to MASTER on exactly one node and to BACKUP on all the others, as sketched below.
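A minimal sketch of the lines that differ on a BACKUP node (k8s-2 here; the priority value 90 is an assumption, it only needs to be lower than the MASTER's 100):

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    ...
}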

6.5.3. Starting the service

[root@k8s-1 keepalived-1.3.9]# systemctl start keepalived
[root@k8s-1 keepalived-1.3.9]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since 一 2017-11-13 13:22:29 CST; 20s ago
  Process: 14593 ExecStart=/usr/local/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 14594 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─14594 /usr/local/sbin/keepalived -D
           ├─14595 /usr/local/sbin/keepalived -D
           └─14596 /usr/local/sbin/keepalived -D
11月 13 13:22:31 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
11月 13 13:22:31 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
11月 13 13:22:31 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
11月 13 13:22:31 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
11月 13 13:22:36 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
11月 13 13:22:36 k8s-1 Keepalived_vrrp[14596]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on....100
11月 13 13:22:36 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
11月 13 13:22:36 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
11月 13 13:22:36 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
11月 13 13:22:36 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-1 keepalived-1.3.9]# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:39:3d:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.181/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.119.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe39:3d4c/64 scope link
       valid_lft forever preferred_lft forever
[root@k8s-0 ~]# ping 192.168.119.100
PING 192.168.119.100 (192.168.119.100) 56(84) bytes of data.
64 bytes from 192.168.119.100: icmp_seq=1 ttl=64 time=2.54 ms
64 bytes from 192.168.119.100: icmp_seq=2 ttl=64 time=0.590 ms
64 bytes from 192.168.119.100: icmp_seq=3 ttl=64 time=0.427 ms
^C
--- 192.168.119.100 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.427/1.188/2.548/0.964 ms

6.5.4. Testing

### After the service has been started on k8s-1, k8s-2 and k8s-3 ###
[root@k8s-1 keepalived-1.3.9]# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:39:3d:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.181/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.119.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe39:3d4c/64 scope link
       valid_lft forever preferred_lft forever
[root@k8s-2 keepalived-1.3.9]# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:3e:9b:fa brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.182/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever
[root@k8s-3 keepalived-1.3.9]# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:35:af:f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.183/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever

### After stopping the service on k8s-1 ###
[root@k8s-1 keepalived-1.3.9]# systemctl stop keepalived
[root@k8s-1 keepalived-1.3.9]# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:39:3d:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.181/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe39:3d4c/64 scope link
       valid_lft forever preferred_lft forever
[root@k8s-2 keepalived-1.3.9]# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:3e:9b:fa brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.182/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.119.100/32 scope global ens33
       valid_lft forever preferred_lft forever
[root@k8s-3 keepalived-1.3.9]# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:35:af:f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.183/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever
[root@k8s-0 ~]# ping 192.168.119.100
PING 192.168.119.100 (192.168.119.100) 56(84) bytes of data.
64 bytes from 192.168.119.100: icmp_seq=1 ttl=64 time=0.346 ms
64 bytes from 192.168.119.100: icmp_seq=2 ttl=64 time=0.618 ms
^C
--- 192.168.119.100 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.346/0.482/0.618/0.136 ms

7. Deploying a private docker registry

The private docker registry here uses NFS as its external storage. Many images cannot be pulled directly from inside China, so they are often downloaded elsewhere and pushed to the private registry for later use. In a pure intranet environment a private registry is indispensable.

7.1. Setting up the NFS service

7.1.1. Installation

[root@k8s-0 ~]# yum install -y nfs-utils rpcbind
[root@k8s-1 images]# yum install -y nfs-utils
[root@k8s-2 images]# yum install -y nfs-utils
[root@k8s-3 images]# yum install -y nfs-utils

7.1.2. Configuration

[root@k8s-0 ~]# cat /etc/exports
/opt/data/ 192.168.119.0/24(rw,no_root_squash,no_all_squash,sync)
[root@k8s-0 ~]# mkdir -p /opt/data/
[root@k8s-0 ~]# exportfs -r

7.1.3. Startup

[root@k8s-0 ~]# systemctl start rpcbind
[root@k8s-0 ~]# systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)
   Active: active (running) since 日 2017-11-12 21:17:17 CST; 3min 58s ago
 Main PID: 3558 (rpcbind)
   CGroup: /system.slice/rpcbind.service
           └─3558 /sbin/rpcbind -w
11月 12 21:17:17 k8s-0 systemd[1]: Starting RPC bind service...
11月 12 21:17:17 k8s-0 systemd[1]: Started RPC bind service.
[root@k8s-0 ~]# systemctl start nfs
[root@k8s-0 ~]# systemctl status nfs
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
   Active: active (exited) since 日 2017-11-12 21:21:23 CST; 3s ago
  Process: 3735 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
  Process: 3730 ExecStartPre=/bin/sh -c /bin/kill -HUP `cat /run/gssproxy.pid` (code=exited, status=0/SUCCESS)
  Process: 3729 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
 Main PID: 3735 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service
11月 12 21:21:23 k8s-0 systemd[1]: Starting NFS server and services...
11月 12 21:21:23 k8s-0 systemd[1]: Started NFS server and services.

7.1.4. Testing

[root@k8s-0 ~]# showmount -e k8s-0
Export list for k8s-0:
/opt/data 192.168.119.0/24

Testing from the mission nodes works the same way.
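An end-to-end mount test from a mission node confirms the export is actually usable (the /mnt mount point is chosen here purely for illustration):

[root@k8s-1 ~]# mount -t nfs k8s-0:/opt/data /mnt
[root@k8s-1 ~]# touch /mnt/testfile && ls -l /mnt
[root@k8s-1 ~]# umount /mnt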

7.2. YAML file contents

[root@k8s-0 yml]# pwd
/root/yml
[root@k8s-0 yml]# cat /root/yml/docker-registry.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: docker-registry-pv
  labels:
    release: stable
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /opt/data
    server: 192.168.119.180
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: docker-registry-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      release: stable
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: docker-registry-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: docker-registry
    spec:
      containers:
      - name: docker-registry
        image: docker.io/registry:latest
        volumeMounts:
        - mountPath: /var/lib/registry
          name: registry-volume
        ports:
        - containerPort: 5000
      volumes:
      - name: registry-volume
        persistentVolumeClaim:
          claimName: docker-registry-claim
---
apiVersion: v1
kind: Service
metadata:
  name: docker-registry-service
spec:
  selector:
    name: docker-registry
  ports:
  - port: 80
    targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: docker-registry-ingress
spec:
  rules:
  - host: docker.reg
    http:
      paths:
      - backend:
          serviceName: docker-registry-service
          servicePort: 80

7.3. Deploying the service

[root@k8s-0 yml]# pwd
/root/yml
[root@k8s-0 yml]# kubectl create -f docker-registry.yaml
persistentvolume "docker-registry-pv" created
persistentvolumeclaim "docker-registry-claim" created
deployment "docker-registry-deployment" created
service "docker-registry-service" created
ingress "docker-registry-ingress" created
[root@k8s-0 yml]# kubectl get pods
NAME                                          READY     STATUS    RESTARTS   AGE
docker-registry-deployment-68d94fcf85-t897g   1/1       Running   0          9s

7.4. Testing

### Look up the service address ###
[root@k8s-0 ~]# kubectl get svc
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
docker-registry-service   ClusterIP   10.254.125.34   <none>        80/TCP    12m
kubernetes                ClusterIP   10.254.0.1      <none>        443/TCP   2d

### Run busybox and test from inside it ###
[root@k8s-1 ~]# docker run -ti --rm docker.io/busybox:1.27.2 sh
/ # wget -O - -q http://10.254.125.34/v2/_catalog
{"repositories":[]}
/ #

7.5. Configure the hosts file

[root@k8s-0 ~]# cat /etc/hosts
127.0.0.1          localhost localhost.localdomain localhost4 localhost4.localdomain4
::1                localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.119.180    k8s-0 etcd-1
192.168.119.181    k8s-1 etcd-2
192.168.119.182    k8s-2 etcd-3
192.168.119.183    k8s-3
192.168.119.100    docker.reg

### Test the docker registry again ###
[root@k8s-1 ~]# curl http://docker.reg/v2/_catalog
{"repositories":[]}

Modify the hosts file on all nodes.

7.6. Configure a domestic mirror and HTTP access to the local registry

[root@k8s-1 ~]# cat /etc/sysconfig/docker
# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://xxxxxxxx.mirror.aliyuncs.com'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi

# Do not add registries in this file anymore. Use /etc/containers/registries.conf
# from the atomic-registries package.
#
# docker-latest daemon can be used by starting the docker-latest unitfile.
# To use docker-latest client, uncomment below lines
#DOCKERBINARY=/usr/bin/docker-latest
#DOCKERDBINARY=/usr/bin/dockerd-latest
#DOCKER_CONTAINERD_BINARY=/usr/bin/docker-containerd-latest
#DOCKER_CONTAINERD_SHIM_BINARY=/usr/bin/docker-containerd-shim-latest

INSECURE_REGISTRY='--insecure-registry docker.reg'
[root@k8s-1 ~]# systemctl restart docker

### Test image upload and download ###
[root@k8s-1 ~]# docker images
REPOSITORY                             TAG            IMAGE ID       CREATED         SIZE
docker.io/busybox                      1.27.2         6ad733544a63   9 days ago      1.129 MB
docker.io/registry                     2.5.2          876793cc984a   9 days ago      37.73 MB
docker.io/traefik                      v1.4.1         83df6581f3d9   2 weeks ago     45.58 MB
quay.io/coreos/flannel                 v0.9.0-amd64   4c600a64a18a   7 weeks ago     51.31 MB
gcr.io/google_containers/pause-amd64   3.0            99e59f495ffa   18 months ago   746.9 kB
[root@k8s-1 ~]# docker tag docker.io/busybox:1.27.2 docker.reg/busybox:1.27.2
[root@k8s-1 ~]# docker images
REPOSITORY                             TAG            IMAGE ID       CREATED         SIZE
docker.reg/busybox                     1.27.2         6ad733544a63   9 days ago      1.129 MB
docker.io/busybox                      1.27.2         6ad733544a63   9 days ago      1.129 MB
docker.io/registry                     2.5.2          876793cc984a   9 days ago      37.73 MB
docker.io/traefik                      v1.4.1         83df6581f3d9   2 weeks ago     45.58 MB
quay.io/coreos/flannel                 v0.9.0-amd64   4c600a64a18a   7 weeks ago     51.31 MB
gcr.io/google_containers/pause-amd64   3.0            99e59f495ffa   18 months ago   746.9 kB
[root@k8s-1 ~]# docker push docker.reg/busybox:1.27.2
The push refers to a repository [docker.reg/busybox]
0271b8eebde3: Pushed
1.27.2: digest: sha256:91ef6c1c52b166be02645b8efee30d1ee65362024f7da41c404681561734c465 size: 527
[root@k8s-1 ~]# curl http://docker.reg/v2/_catalog
{"repositories":["busybox"]}

### Pull the image on another node ###
[root@k8s-2 ~]# docker images
REPOSITORY                             TAG            IMAGE ID       CREATED         SIZE
docker.io/registry                     2.5.2          876793cc984a   8 days ago      37.73 MB
docker.io/traefik                      v1.4.1         83df6581f3d9   2 weeks ago     45.58 MB
quay.io/coreos/flannel                 v0.9.0-amd64   4c600a64a18a   7 weeks ago     51.31 MB
gcr.io/google_containers/pause-amd64   3.0            99e59f495ffa   18 months ago   746.9 kB
[root@k8s-2 ~]# docker pull docker.reg/busybox:1.27.2
Trying to pull repository docker.reg/busybox ...
1.27.2: Pulling from docker.reg/busybox
0ffadd58f2a6: Pull complete
Digest: sha256:91ef6c1c52b166be02645b8efee30d1ee65362024f7da41c404681561734c465
[root@k8s-2 ~]# docker images
REPOSITORY                             TAG            IMAGE ID       CREATED         SIZE
docker.reg/busybox                     1.27.2         6ad733544a63   8 days ago      1.129 MB
docker.io/registry                     2.5.2          876793cc984a   8 days ago      37.73 MB
docker.io/traefik                      v1.4.1         83df6581f3d9   2 weeks ago     45.58 MB
quay.io/coreos/flannel                 v0.9.0-amd64   4c600a64a18a   7 weeks ago     51.31 MB
gcr.io/google_containers/pause-amd64   3.0            99e59f495ffa   18 months ago   746.9 kB

Set --registry-mirror=https://xxxxxxxx.mirror.aliyuncs.com according to your actual mirror accelerator address.

This change is required on every docker node.
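Since the registry's /var/lib/registry is backed by the NFS volume, a pushed image should also be visible on the NFS server. A sanity check under the standard registry storage layout (the directory path follows from the PV defined in 7.2):

[root@k8s-0 ~]# ls /opt/data/docker/registry/v2/repositories/
busybox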

8. Deploying the kube-dns service

kube-dns is an optional kubernetes add-on that provides DNS inside the cluster, so services can reach each other directly by name instead of relying on IPs that may change.

8.1. YAML file contents

[root@k8s-0 addons]# pwd
/root/yml/addons
[root@k8s-0 addons]# cat kube-dns.yaml
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
# in sync with this file.

# Warning: This is a file generated from the base underscore template file: kube-dns.yaml.base

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns

8.2. Creating the service

[root@k8s-0 addons]# pwd
/root/yml/addons
[root@k8s-0 addons]# kubectl create -f kube-dns.yaml
service "kube-dns" created
serviceaccount "kube-dns" created
configmap "kube-dns" created
deployment "kube-dns" created
[root@k8s-0 addons]# kubectl get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
kube-dns-7dff49b8fc-2fl64          3/3       Running   0          20s
traefik-ingress-controller-gnnn8   1/1       Running   1          2h
traefik-ingress-controller-v6c86   1/1       Running   1          2h
traefik-ingress-controller-wtmf8   1/1       Running   1          2h

8.3. Testing

[root@k8s-0 addons]# kubectl run -ti --rm --image=docker.reg/busybox:1.27.2 sh
If you don't see a command prompt, try pressing enter.
/ # wget -O - -q http://docker-registry-service/v2/_catalog
{"repositories":["busybox"]}

Note that this uses kubectl run rather than docker run, so the docker registry service can be reached by its service name.
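Name resolution itself can also be checked from inside the same busybox pod (illustrative output; the kubernetes service name should resolve to its cluster IP 10.254.0.1 via the DNS server 10.254.0.2):

/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.254.0.2
Address 1: 10.254.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.254.0.1 kubernetes.default.svc.cluster.local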

9. Deploying the dashboard service

9.1. YAML file contents

[root@k8s-0 addons]# pwd
/root/yml/addons
[root@k8s-0 addons]# cat kubernetes-dashboard.yaml
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.7.
#
# Example usage: kubectl create -f <this_file>

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create and watch for changes of 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "watch"]
- apiGroups: [""]
  resources: ["secrets"]
  # Allow Dashboard to get, update and delete 'kubernetes-dashboard-key-holder' secret.
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      initContainers:
      - name: kubernetes-dashboard-init
        image: gcr.io/google_containers/kubernetes-dashboard-init-amd64:v1.0.1
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
        - --tls-key-file=/certs/dashboard.key
        - --tls-cert-file=/certs/dashboard.crt
        # Uncomment the following line to manually specify Kubernetes API server Host
        # If not specified, Dashboard will attempt to auto discover the API server and connect
        # to it. Uncomment only if the default does not work.
        # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          readOnly: true
        # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

For now the cluster-admin role is bound directly to the kubernetes-dashboard service account.
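Binding cluster-admin gives the dashboard full control over the cluster. If that is too broad for your environment, a more restrictive binding could be used instead, for example with the built-in view ClusterRole (an alternative sketch, not what this deployment uses):

[root@k8s-0 ~]# kubectl create clusterrolebinding kubernetes-dashboard-view \
    --clusterrole=view --serviceaccount=kube-system:kubernetes-dashboard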

9.2. Deployment

[root@k8s-0 addons]# pwd
/root/yml/addons
[root@k8s-0 addons]# kubectl create -f kubernetes-dashboard.yaml
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
clusterrolebinding "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
[root@k8s-0 addons]# kubectl get pods -n kube-system
NAME                                   READY     STATUS    RESTARTS   AGE
kube-dns-7dff49b8fc-2fl64              3/3       Running   0          18m
kubernetes-dashboard-747c4f7cf-qtgcx   1/1       Running   0          25s
traefik-ingress-controller-gnnn8       1/1       Running   1          3h
traefik-ingress-controller-v6c86       1/1       Running   1          3h
traefik-ingress-controller-wtmf8       1/1       Running   1          3h

9.3. Testing

Because HTTPS is used, the browser performs certificate validation; if you have not imported the certificate beforehand, authentication will fail.

9.3.1. Import the certificate

[root@k8s-0 kubernetes]# pwd
/root/cfssl/kubernetes
[root@k8s-0 kubernetes]# openssl pkcs12 -export -in kubernetes-client-kubectl.pem -out kubernetes-client.p12 -inkey kubernetes-client-kubectl-key.pem
Enter Export Password:
Verifying - Enter Export Password:
[root@k8s-0 kubernetes]# ls -l
总用量 72
-rw-r--r--. 1 root root  833 11月 10 16:29 ca-config.json
-rw-r--r--. 1 root root 1086 11月 10 19:28 kubernetes-client-kubectl.csr
-rw-r--r--. 1 root root  356 11月 10 18:17 kubernetes-client-kubectl-csr.json
-rw-------. 1 root root 1675 11月 10 19:28 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 root root 1460 11月 10 19:28 kubernetes-client-kubectl.pem
-rw-r--r--. 1 root root 2637 11月 13 00:06 kubernetes-client.p12
-rw-r--r--. 1 root root 1098 11月 10 21:04 kubernetes-client-proxy.csr
-rw-r--r--. 1 root root  385 11月 10 21:03 kubernetes-client-proxy-csr.json
-rw-------. 1 root root 1679 11月 10 21:04 kubernetes-client-proxy-key.pem
-rw-r--r--. 1 root root 1476 11月 10 21:04 kubernetes-client-proxy.pem
-rw-r--r--. 1 root root 1021 11月 10 19:20 kubernetes-root-ca.csr
-rw-r--r--. 1 root root  279 11月 10 18:04 kubernetes-root-ca-csr.json
-rw-------. 1 root root 1675 11月 10 19:20 kubernetes-root-ca-key.pem
-rw-r--r--. 1 root root 1395 11月 10 19:20 kubernetes-root-ca.pem
-rw-r--r--. 1 root root 1277 11月 10 19:42 kubernetes-server.csr
-rw-r--r--. 1 root root  556 11月 10 19:40 kubernetes-server-csr.json
-rw-------. 1 root root 1675 11月 10 19:42 kubernetes-server-key.pem
-rw-r--r--. 1 root root 1651 11月 10 19:42 kubernetes-server.pem

Import kubernetes-client.p12 into your operating system's certificate store.

9.3.2. Access from a browser

Address: http://192.168.119.180:8080/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy



This access method requires flannel to be deployed on the master node as well.

