Basic k8s Cluster Setup

The setup splits into two parts: the master node and the worker nodes (node).

Environment configuration (all nodes)

1. Disable the firewall

systemctl stop firewalld

systemctl disable firewalld

2. Disable SELinux

setenforce 0

Then edit /etc/selinux/config so the change persists across reboots:

SELINUX=disabled
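If you prefer a one-liner, here is a sketch using sed (it assumes the file currently reads SELINUX=enforcing; adjust the pattern otherwise):

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config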

3. SSH mutual trust

Use ssh-keygen to generate a key pair, then distribute the public key to every node.
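A minimal sketch of the trust setup, run on the master and assuming the hostnames master, node1, and node2 resolve on every machine:

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # generate a key pair without a passphrase
for host in master node1 node2; do
    ssh-copy-id root@$host                 # push the public key to each node
done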

4. Time synchronization

yum install ntp ntpdate -y

ntpdate <your time server>
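For example, syncing against Aliyun's public NTP server (any reachable NTP server works), then enabling ntpd to keep the clock in sync afterwards:

ntpdate ntp.aliyun.com
systemctl enable ntpd && systemctl start ntpd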

5. Create /etc/sysctl.d/k8s.conf with the following content

cat > /etc/sysctl.d/k8s.conf <<EOF

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

EOF

6. Run the following commands to make the change take effect

modprobe br_netfilter

sysctl -p /etc/sysctl.d/k8s.conf
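To confirm the module is loaded and the setting took effect:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables   # should print "= 1"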

7. Install system utilities

yum install -y yum-utils device-mapper-persistent-data lvm2

8. Add the Docker repository

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

If you have unrestricted internet access, you can use Google's yum repository instead, since Kubernetes itself is Google software.

9. List the available Docker versions and install a specific one

yum list docker-ce.x86_64 --showduplicates | sort -r

yum -y install docker-ce-[VERSION]

For example:

yum -y install docker-ce-17.12.1.ce-1.el7.centos

Other versions also work, but they trigger a minor issue that we will deal with later.

10. Once installed, start Docker

systemctl start docker; systemctl enable docker
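One extra check worth doing here (a common kubeadm pitfall, not part of the original steps): kubeadm expects the kubelet's cgroup driver to match Docker's, so note which driver Docker reports (typically cgroupfs for this Docker version):

docker info | grep -i 'cgroup driver'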

Master node

1. Pull the k8s images

Create a shell script; the name doesn't matter.

For example:

vim img.sh

Put the following in it:

docker pull cloudnil/etcd-amd64:3.2.18

docker pull cloudnil/pause-amd64:3.1

docker pull cloudnil/kube-proxy-amd64:v1.11.1

docker pull cloudnil/kube-scheduler-amd64:v1.11.1

docker pull cloudnil/kube-controller-manager-amd64:v1.11.1

docker pull cloudnil/kube-apiserver-amd64:v1.11.1

docker pull cloudnil/k8s-dns-sidecar-amd64:1.14.4

docker pull cloudnil/k8s-dns-kube-dns-amd64:1.14.4

docker pull cloudnil/k8s-dns-dnsmasq-nanny-amd64:1.14.4

docker pull cloudnil/kube-discovery-amd64:1.0

docker pull cloudnil/dnsmasq-metrics-amd64:1.0

docker pull cloudnil/exechealthz-amd64:1.2

docker pull cloudnil/coredns:1.1.3

# Re-tag the images for k8s.gcr.io

docker tag cloudnil/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18

docker tag cloudnil/pause-amd64:3.1 k8s.gcr.io/pause:3.1

docker tag cloudnil/kube-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1

docker tag cloudnil/kube-scheduler-amd64:v1.11.1 k8s.gcr.io/kube-scheduler-amd64:v1.11.1

docker tag cloudnil/kube-controller-manager-amd64:v1.11.1 k8s.gcr.io/kube-controller-manager-amd64:v1.11.1

docker tag cloudnil/kube-apiserver-amd64:v1.11.1 k8s.gcr.io/kube-apiserver-amd64:v1.11.1

docker tag cloudnil/kube-discovery-amd64:1.0 k8s.gcr.io/kube-discovery-amd64:1.0

docker tag cloudnil/k8s-dns-sidecar-amd64:1.14.4 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.4

docker tag cloudnil/k8s-dns-kube-dns-amd64:1.14.4 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.4

docker tag cloudnil/k8s-dns-dnsmasq-nanny-amd64:1.14.4 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.4

docker tag cloudnil/dnsmasq-metrics-amd64:1.0 k8s.gcr.io/dnsmasq-metrics-amd64:1.0

docker tag cloudnil/exechealthz-amd64:1.2 k8s.gcr.io/exechealthz-amd64:1.2

docker tag cloudnil/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3

# Remove the original cloudnil tags

docker rmi cloudnil/etcd-amd64:3.2.18

docker rmi cloudnil/pause-amd64:3.1

docker rmi cloudnil/kube-proxy-amd64:v1.11.1

docker rmi cloudnil/kube-scheduler-amd64:v1.11.1

docker rmi cloudnil/kube-controller-manager-amd64:v1.11.1

docker rmi cloudnil/kube-apiserver-amd64:v1.11.1

docker rmi cloudnil/k8s-dns-sidecar-amd64:1.14.4

docker rmi cloudnil/k8s-dns-kube-dns-amd64:1.14.4

docker rmi cloudnil/k8s-dns-dnsmasq-nanny-amd64:1.14.4

docker rmi cloudnil/kube-discovery-amd64:1.0

docker rmi cloudnil/dnsmasq-metrics-amd64:1.0

docker rmi cloudnil/exechealthz-amd64:1.2

docker rmi cloudnil/coredns:1.1.3

Then run it:

bash img.sh
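Optionally, a quick check that the re-tagged images are in place:

docker images | grep k8s.gcr.io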

2. With Docker installed, the environment configured as above, and the required images pulled (you can skip the image step if you have unrestricted internet access), we can now install kubeadm. Here we install it from a configured yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

EOF

 

Of course, the repository above also requires unrestricted internet access. If you don't have it, use the Aliyun mirror instead:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=0

repo_gpgcheck=0

gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF
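Optionally, confirm that yum can see the new repository before installing:

yum repolist | grep -i kubernetes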

3. Install the cluster packages

yum install -y kubelet-1.11.1 kubeadm-1.11.1 kubectl-1.11.1

systemctl enable docker.service && systemctl start docker.service

systemctl enable kubelet.service && systemctl start kubelet.service

4. Disable swap

swapoff -a

vim /etc/fstab

Comment out the line /dev/mapper/centos-swap swap swap defaults 0 0:

#/dev/mapper/centos-swap swap swap defaults 0 0
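Equivalently, a one-liner sketch that turns swap off and comments out every swap entry (double-check /etc/fstab afterwards):

swapoff -a && sed -ri '/\sswap\s/s/^/#/' /etc/fstab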

5. Initialize the cluster

That completes the preparation. We can now initialize the cluster on the master node with kubeadm:

kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16

On success this is displayed:

Your Kubernetes master has initialized successfully!

If you get an error like:

[ERROR SystemVerification]: unsupported docker version: 19.03.12

[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...

This is the unsupported-version issue mentioned earlier. It is harmless, and we can skip the check with the flag shown below.

Workaround:

[root@master ~]# kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=SystemVerification

 
Besides the success message, the initialization output also shows the token that nodes use to join the cluster.

It looks something like this:

kubeadm join 192.168.10.200:6443 --token h4lewk.4vex9tcwtdwfkn05 --discovery-token-ca-cert-hash sha256:7118c498deac29f4072dcd5a380900d9dd5135b02d0dd14f544ef160b4426ea4

Token creation and usage

[root@master ~]# kubeadm token list   # tokens are used when machines join the cluster; the default token expires after 24 hours, so later joins need a fresh one

TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                 EXTRA GROUPS
f2b012.1416e09e3d1fff4d   23h   2018-06-24T21:26:17+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

 

After 24 hours the token expires; recreate it with:

[root@master ~]# kubeadm token create --print-join-command

The init output records the whole bootstrap process: generating the various certificates, kubeconfig files, the bootstrap token, and so on. It ends with the kubeadm join command used to add nodes to the cluster, and it also prints the commands for configuring kubectl access to the cluster, shown next.

For a regular (non-root) user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

For the root user:

export KUBECONFIG=/etc/kubernetes/admin.conf 

Or make it permanent via ~/.bash_profile:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

Then source the profile to load the variable:

source ~/.bash_profile
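To confirm kubectl can now reach the API server:

kubectl cluster-info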

6. Test the kubectl version

[root@master ~]# kubectl version

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

If this works without errors, move on to the next step.

7. Install a pod network. Options include flannel, calico, weave, and macvlan; here we use flannel.

Download the manifest (its contents are shown below):

wget https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.9.1-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conf
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.9.1-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

If you want a different pod subnet, the value passed to kubeadm --pod-network-cidr= must match the Network setting here:

vim kube-flannel.yml

and change the Network entry:

"Network": "10.244.0.0/16",

Apply it:

kubectl create -f kube-flannel.yml
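Flannel runs as a DaemonSet whose pods carry the app=flannel label (see the manifest above), so you can watch them come up with:

kubectl get pods -n kube-system -l app=flannel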

Check with docker images:

[root@master ~]# docker images
REPOSITORY                                  TAG            IMAGE ID       CREATED       SIZE
k8s.gcr.io/kube-apiserver-amd64             v1.11.1        6edbbf3b8d32   5 years ago   187MB
k8s.gcr.io/kube-scheduler-amd64             v1.11.1        77755e21c6c4   5 years ago   56.8MB
k8s.gcr.io/kube-controller-manager-amd64    v1.11.1        d615b70b3c06   5 years ago   155MB
k8s.gcr.io/kube-proxy-amd64                 v1.11.1        6f715b939c12   5 years ago   97.8MB
k8s.gcr.io/coredns                          1.1.3          f80ed2fa775d   5 years ago   45.6MB
k8s.gcr.io/pause                            3.1            a1468838f467   5 years ago   742kB
k8s.gcr.io/etcd-amd64                       3.2.18         78b282dd9c2e   5 years ago   219MB
quay.io/coreos/flannel                      v0.9.1-amd64   2b736d06ca4c   5 years ago   51.3MB
k8s.gcr.io/k8s-dns-kube-dns-amd64           1.14.4         2d6a3bea02c4   6 years ago   49.4MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64      1.14.4         13117b1d461f   6 years ago   41.4MB
k8s.gcr.io/k8s-dns-sidecar-amd64            1.14.4         c413c7235eb4   6 years ago   41.8MB
k8s.gcr.io/dnsmasq-metrics-amd64            1.0            759b144b0bc0   6 years ago   14MB
k8s.gcr.io/kube-discovery-amd64             1.0            513fbb37b2a3   6 years ago   134MB
k8s.gcr.io/exechealthz-amd64                1.2            1d045120b630   6 years ago   8.37MB

If 14 images are listed, your master node is set up successfully.
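Beyond counting images, you can also check that the control-plane pods are all Running:

kubectl get pods -n kube-system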

Node (worker) nodes

1. Pull the node images. Create a shell script; the name doesn't matter.

For example:

vim img.sh

Put the following in it:

docker pull cloudnil/kube-proxy-amd64:v1.11.1

docker pull cloudnil/pause-amd64:3.1

docker pull cnych/kubernetes-dashboard-amd64:v1.8.3

docker pull cnych/heapster-influxdb-amd64:v1.3.3

docker pull cnych/heapster-grafana-amd64:v4.4.3

docker pull cnych/heapster-amd64:v1.4.2

docker pull quay.io/coreos/flannel:v0.9.1-amd64

docker tag cloudnil/pause-amd64:3.1 k8s.gcr.io/pause:3.1

docker tag cloudnil/kube-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1

docker tag cnych/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3

docker tag cnych/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3

docker tag cnych/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3

docker tag cnych/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2

docker rmi cloudnil/kube-proxy-amd64:v1.11.1

docker rmi cloudnil/pause-amd64:3.1

docker rmi cnych/kubernetes-dashboard-amd64:v1.8.3

docker rmi cnych/heapster-influxdb-amd64:v1.3.3

docker rmi cnych/heapster-grafana-amd64:v4.4.3

docker rmi cnych/heapster-amd64:v1.4.2

Then run it:

bash img.sh

As on the master, configure the Kubernetes yum repository. Google's repository needs unrestricted internet access; if you don't have it, use the Aliyun mirror:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=0

repo_gpgcheck=0

gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

2. Install the cluster packages

yum install -y kubelet-1.11.1 kubeadm-1.11.1 kubectl-1.11.1

systemctl enable docker.service && systemctl start docker.service

systemctl enable kubelet.service && systemctl start kubelet.service

3. Disable swap

swapoff -a

vim /etc/fstab

Comment out the line /dev/mapper/centos-swap swap swap defaults 0 0, just as on the master:

#/dev/mapper/centos-swap swap swap defaults 0 0

4. Join the cluster

Join with the token from the master's init output (be sure to use your own values):

kubeadm join 192.168.10.200:6443 --token h4lewk.4vex9tcwtdwfkn05 --discovery-token-ca-cert-hash sha256:7118c498deac29f4072dcd5a380900d9dd5135b02d0dd14f544ef160b4426ea4 --ignore-preflight-errors=SystemVerification

Once the nodes have joined, the basic k8s setup is complete.

Checking from the master

[root@master ~]#kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    2h        v1.11.1
node1     Ready     <none>    1h        v1.11.1
node2     Ready     <none>    1h        v1.11.1

If the nodes show up here, they joined successfully.

Cluster status

[root@master ~]# kubectl get cs

NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}