Setting up a Kubernetes Cluster Locally with kubeadm

1. Environment preparation
(All machines run CentOS 7.6.)
Run on every machine:


yum install chrony -y    # the package is chrony; the service it provides is chronyd
systemctl start chronyd
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.130 master
192.168.8.131 node01
192.168.8.132 node02
192.168.8.133 node03

systemctl disable firewalld
systemctl stop firewalld

setenforce 0    # takes effect immediately, but only until the next reboot

vim /etc/selinux/config
SELINUX=disabled

This makes the change permanent, but it requires a reboot.

Configure the Docker yum repository
Visit mirrors.aliyun.com, find docker-ce, click linux, then centos, right-click docker-ce.repo and copy its link address.


cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
--2019-05-19 17:39:51-- https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)...

Run the same command on the other three machines as well.
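
Alternatively, you can push the download out from one machine in a loop instead of logging in to each node; a minimal sketch, assuming root SSH access to node01, node02, and node03 (the hostnames configured in /etc/hosts above):

for n in node01 node02 node03; do
    ssh root@$n "wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo"
done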

Next, run the following on the master node:
Modify the yum repository configuration

[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# ls
CentOS-Base.repo CentOS-fasttrack.repo CentOS-Vault.repo epel-testing.repo
CentOS-CR.repo CentOS-Media.repo docker-ce.repo kubernetes.repo
CentOS-Debuginfo.repo CentOS-Sources.repo epel.repo
[root@master yum.repos.d]# vim CentOS-Base.repo
[base]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
baseurl=https://mirrors.aliyun.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
baseurl=https://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
baseurl=https://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

Change the baseurl of the base, updates, and extras sections to the Aliyun mirror, save and exit, then copy the file to the three worker nodes.

scp /etc/yum.repos.d/CentOS-Base.repo node01:/etc/yum.repos.d/
scp /etc/yum.repos.d/CentOS-Base.repo node02:/etc/yum.repos.d/
scp /etc/yum.repos.d/CentOS-Base.repo node03:/etc/yum.repos.d/
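
After replacing the repo file, it is worth rebuilding the yum cache on every machine so the new mirrors take effect; this is an optional but common step using standard yum commands:

yum clean all
yum makecache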

yum install docker-ce -y
systemctl enable docker
systemctl start docker

Modify the Docker startup parameters

[root@master ~]# vim /usr/lib/systemd/system/docker.service

Add this line under the [Service] section:

ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT

Reload and restart Docker

systemctl daemon-reload
systemctl restart docker

Check all rules in the filter table

[root@master ~]# iptables -vnL
Chain INPUT (policy ACCEPT 1307 packets, 335K bytes)
 pkts bytes target     prot opt in     out     source               destination
 2794  168K KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes service portals */
 2794  168K KUBE-EXTERNAL-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes externally-visible service portals */
 773K  188M KUBE-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
    0     0 KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes service portals */
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0

Copy it to the three worker nodes

scp /usr/lib/systemd/system/docker.service node01:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/docker.service node02:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/docker.service node03:/usr/lib/systemd/system/docker.service
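
The new unit file only takes effect on a worker after systemd is reloaded and Docker is restarted there. Assuming Docker is already installed on the workers and root SSH access is available, a minimal sketch run from the master:

for n in node01 node02 node03; do
    ssh root@$n "systemctl daemon-reload && systemctl restart docker"
done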

Check the bridge-related kernel parameters

[root@master ~]# sysctl -a |grep bridge
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

net.bridge.bridge-nf-filter-pppoe-tagged = 0
net.bridge.bridge-nf-filter-vlan-tagged = 0
net.bridge.bridge-nf-pass-vlan-input-dev = 0
sysctl: reading key "net.ipv6.conf.all.stable_secret"
sysctl: reading key "net.ipv6.conf.default.stable_secret"
sysctl: reading key "net.ipv6.conf.docker0.stable_secret"
sysctl: reading key "net.ipv6.conf.ens33.stable_secret"
sysctl: reading key "net.ipv6.conf.lo.stable_secret"

The values of net.bridge.bridge-nf-call-iptables and net.bridge.bridge-nf-call-ip6tables can differ between environments; add a configuration file to make sure both are set to 1.

[root@master ~]# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Reload the settings:

[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf

scp /etc/sysctl.d/k8s.conf node01:/etc/sysctl.d/
scp /etc/sysctl.d/k8s.conf node02:/etc/sysctl.d/
scp /etc/sysctl.d/k8s.conf node03:/etc/sysctl.d/
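
Note: the net.bridge.* keys only exist while the br_netfilter kernel module is loaded. If sysctl -p complains about unknown keys, load the module first and make it persistent, for example:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p /etc/sysctl.d/k8s.conf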

Create the kubernetes.repo file locally

[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# vim kubernetes.repo
[kubernetes]
name=Kubernetes Repository
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg

Likewise, on the Aliyun mirror site click kubernetes, then yum, then repos, and find kubernetes-el7-x86_64/.


The baseurl in the file is the link address of kubernetes-el7-x86_64/, and the two gpgkey URLs are the two links under the doc directory one level up.


Run yum repolist to verify the repo, then list the packages whose names start with kube:

[root@master yum.repos.d]# yum list all |grep "^kube"
kubeadm.x86_64 1.14.2-0 @kubernetes
kubectl.x86_64 1.14.2-0 @kubernetes
kubelet.x86_64 1.14.2-0 @kubernetes
kubernetes-cni.x86_64 0.7.5-0 @kubernetes
kubernetes.x86_64 1.5.2-0.7.git269f928.el7 extras
kubernetes-ansible.noarch 0.6.0-0.1.gitd65ebd5.el7 epel
kubernetes-client.x86_64 1.5.2-0.7.git269f928.el7 extras
kubernetes-master.x86_64 1.5.2-0.7.git269f928.el7 extras
kubernetes-node.x86_64 1.5.2-0.7.git269f928.el7 extras

Install the tools

yum install -y kubeadm kubectl kubelet
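
kubeadm expects the kubelet service to be enabled so it can start on boot (kubeadm init prints a warning otherwise):

systemctl enable kubelet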

Modify the kubelet parameters (used by kubeadm)

[root@master yum.repos.d]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
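
--fail-swap-on=false simply tells the kubelet to tolerate enabled swap. The alternative, recommended by Kubernetes, is to disable swap entirely; a rough sketch, assuming the swap entry in /etc/fstab contains the word "swap":

swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab    # comment out the swap line so swap stays off after reboot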

Take a look at the default parameters for cluster initialization:

[root@master yum.repos.d]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
scheduler: {}

The next step is to initialize the cluster. During initialization containers are created, and their images are pulled from k8s.gcr.io by default. Since that registry cannot be reached from here, first list the required images and then pull them via Aliyun; the detailed steps are in another post: https://blog.51cto.com/13670314/2397600

[root@master ~]# kubeadm config images list
I0521 13:32:40.122085 26344 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0521 13:32:40.122220 26344 version.go:97] falling back to the local client version: v1.14.2
k8s.gcr.io/kube-apiserver:v1.14.2
k8s.gcr.io/kube-controller-manager:v1.14.2
k8s.gcr.io/kube-scheduler:v1.14.2
k8s.gcr.io/kube-proxy:v1.14.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

Ignore the error above; it is only caused by not being able to reach the external network.
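
A minimal sketch of the pull-and-retag approach described in the post linked above, assuming the images are mirrored under registry.cn-hangzhou.aliyuncs.com/google_containers (adjust the tags to match the kubeadm config images list output):

for img in kube-apiserver:v1.14.2 kube-controller-manager:v1.14.2 kube-scheduler:v1.14.2 \
           kube-proxy:v1.14.2 pause:3.1 etcd:3.3.10 coredns:1.3.1; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$img
    docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
done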

kubeadm init --pod-network-cidr="10.244.0.0/16" --ignore-preflight-errors=Swap

On success it displays:

(screenshot: kubeadm init success output, ending with the kubeadm join command)

Record the final join command; it will be needed later when the worker nodes join the cluster.
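
The same output also shows how to configure kubectl access for a regular user; the steps kubeadm prints are:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config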

Check the nodes

[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 145m v1.14.2

The status is NotReady; we need to deploy a network plugin.


Deploy flannel. The image in this manifest is pulled from quay.io, which is reachable from inside China, so there is nothing to worry about.

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Check the pods in the kube-system namespace

[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-q55g7 1/1 Running 0 150m
coredns-fb8b8dccf-vk7td 1/1 Running 0 150m
etcd-master 1/1 Running 0 149m
kube-apiserver-master 1/1 Running 0 149m
kube-controller-manager-master 1/1 Running 0 149m
kube-flannel-ds-amd64-gfl77 1/1 Running 0 71s
kube-proxy-4s9f6 1/1 Running 0 150m
kube-scheduler-master 1/1 Running 0 149m

The first two (the coredns pods) may still be in a creating state; just wait a moment.

[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 152m v1.14.2

Copy the kubernetes.repo file to the other three worker nodes

[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node01:/etc/yum.repos.d/
root@node01's password:
kubernetes.repo 100% 269 169.4KB/s 00:00
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node02:/etc/yum.repos.d/
root@node02's password:
kubernetes.repo 100% 269 277.9KB/s 00:00
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node03:/etc/yum.repos.d/
root@node03's password:
kubernetes.repo

Next, join the three worker nodes to the cluster. Run the following on node01, node02, and node03:

[root@node01 ~]# yum install -y kubeadm kubelet

Then copy the kubelet config file over from the master:

[root@master ~]# scp /etc/sysconfig/kubelet node01:/etc/sysconfig/
root@node01's password:
kubelet 100% 42 32.7KB/s 00:00
[root@master ~]# scp /etc/sysconfig/kubelet node02:/etc/sysconfig/
root@node02's password:
kubelet 100% 42 32.9KB/s 00:00
[root@master ~]# scp /etc/sysconfig/kubelet node03:/etc/sysconfig/
root@node03's password:
kubelet 100% 42 29.4KB/s 00:00
[root@master ~]#

First pull the pause image from the Aliyun registry on each worker (see the sketch below), then run the join command recorded earlier:
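
A minimal sketch for the pause image, again assuming the registry.cn-hangzhou.aliyuncs.com/google_containers mirror, run on each worker:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1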

[root@node01 ~]# kubeadm join 192.168.8.130:6443 --token kxmqr4.1vza1kh70vra2d2u --discovery-token-ca-cert-hash sha256:6537d556e18c1799f10ac567dcaa41ee2b3197aa4c464747bc50243a6142bc1c --ignore-preflight-errors=Swap

Check the nodes again

[root@master /]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 172m v1.14.2
node01 Ready <none> 7m39s v1.14.2
node02 Ready <none> 48s v1.14.2
node03 Ready <none> 43s v1.14.2
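
As an optional sanity check (a hypothetical example, not part of the original steps), you can create a small deployment and confirm that pods are scheduled onto the workers:

kubectl create deployment nginx-test --image=nginx
kubectl get pods -o wide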
