카이도스's Tech Blog
Rancher K8s Setup - 4 (MetalLB & Nginx Ingress)
2024.02.15 - [Rancher k8s] - Rancher K8s Setup - 1 (DNS)
2024.02.15 - [Rancher k8s] - Rancher K8s Setup - 2 (HAproxy)
2024.02.20 - [Rancher k8s] - Rancher K8s Setup - 3 (Master, Worker)
2024.02.21 - [All posts] - Rancher K8s Setup - 5 (Rancher UI)
MetalLB Configuration
# Create the metallb working directory and move into it
mkdir -p ~/k8s-opensource/metallb && cd ~/k8s-opensource/metallb
# Get the latest MetalLB release version
MetalLB_RTAG=$(curl -s https://api.github.com/repos/metallb/metallb/releases/latest|grep tag_name|cut -d '"' -f 4|sed 's/v//')
echo $MetalLB_RTAG
0.14.3
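The grep/cut/sed pipeline above depends on the exact formatting of the API response and silently returns an empty string if the format shifts. A JSON-aware equivalent is more robust; a minimal offline sketch (the sample payload mimics the shape of the GitHub releases API):

```python
import json

# Minimal stand-in for the GitHub "latest release" API response body.
sample = '{"tag_name": "v0.14.3", "name": "v0.14.3"}'

def latest_version(body: str) -> str:
    # Parse the JSON and drop the leading "v", like the sed 's/v//' above.
    return json.loads(body)["tag_name"].lstrip("v")

print(latest_version(sample))  # -> 0.14.3
```

A missing `tag_name` key raises a `KeyError` here instead of producing an empty version string, which makes a broken download URL much easier to spot.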
# Download and deploy metallb-native.yaml
wget https://raw.githubusercontent.com/metallb/metallb/v$MetalLB_RTAG/config/manifests/metallb-native.yaml
kubectl apply -f metallb-native.yaml
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
configmap/metallb-excludel2 created
secret/webhook-server-cert created
service/webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
# Verify the MetalLB resources were created
kubectl get all -n metallb-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/controller-5c46fdb5b8-vt44g 1/1 Running 0 76s 10.42.4.3 r1-k8s-workre2 <none> <none>
pod/speaker-4dtq8 1/1 Running 0 76s 10.10.X.204 r1-k8s-workre1 <none> <none>
pod/speaker-85kwl 1/1 Running 0 76s 10.10.X.206 r1-k8s-workre3 <none> <none>
pod/speaker-dsfhv 1/1 Running 0 76s 10.10.X.205 r1-k8s-workre2 <none> <none>
pod/speaker-f74dh 1/1 Running 0 76s 10.10.X.202 r1-k8s-master2 <none> <none>
pod/speaker-r7sf6 1/1 Running 0 76s 10.10.X.201 r1-k8s-master1 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/webhook-service ClusterIP 10.43.139.106 <none> 443/TCP 76s component=controller
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
daemonset.apps/speaker 5 5 5 5 5 kubernetes.io/os=linux 76s speaker quay.io/metallb/speaker:v0.14.3 app=metallb,component=speaker
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/controller 1/1 1 1 76s controller quay.io/metallb/controller:v0.14.3 app=metallb,component=controller
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/controller-5c46fdb5b8 1 1 1 76s controller quay.io/metallb/controller:v0.14.3 app=metallb,component=controller,pod-template-hash=5c46fdb5b8
# Configure the IP address pool (use only the designated IDC range)
vi ipaddress_pools.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: idc-rack1-prd
  namespace: metallb-system
spec:
  addresses:
  - 10.10.X.1-10.10.X.100
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advert
  namespace: metallb-system
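The L2Advertisement above has an empty spec, so it advertises every pool in the namespace. If more pools are added later, the advertisement can be pinned to specific pools, and auto-assignment can be disabled for a pool that should only hand out addresses on explicit request. A sketch (the second pool and its range are hypothetical; field names follow the metallb.io/v1beta1 API):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: idc-rack1-reserved        # hypothetical second pool
  namespace: metallb-system
spec:
  addresses:
  - 10.10.X.101-10.10.X.110
  autoAssign: false               # handed out only when a Service asks for it
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advert-prd
  namespace: metallb-system
spec:
  ipAddressPools:                 # advertise only the named pool
  - idc-rack1-prd
```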
# Delete the validating webhook (it can reject the pool manifest in some environments; note this also disables MetalLB's config validation)
kubectl delete validatingwebhookconfigurations metallb-webhook-configuration
validatingwebhookconfiguration.admissionregistration.k8s.io "metallb-webhook-configuration" deleted
# Deploy the IP address pool
kubectl apply -f ipaddress_pools.yaml
ipaddresspool.metallb.io/idc-rack1-prd created
l2advertisement.metallb.io/l2-advert created
# Verify the IPAddressPool configuration
kubectl get ipaddresspools.metallb.io -n metallb-system
NAME AUTO ASSIGN AVOID BUGGY IPS ADDRESSES
idc-rack1-prd true false ["10.10.X.1-10.10.X.100"]
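Since `AUTO ASSIGN` is true, any `LoadBalancer` Service gets the next free address from the pool. A Service can also request a specific address via MetalLB's annotation; a sketch (the Service name and selector are hypothetical, and `metallb.universe.tf/loadBalancerIPs` is the annotation used by MetalLB 0.13+):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-lb                # hypothetical service
  annotations:
    metallb.universe.tf/loadBalancerIPs: 10.10.X.10   # must fall inside the pool range
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 80
```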
Nginx Ingress Controller Configuration
# Taint the master nodes (NoSchedule: keeps regular pods from being scheduled on the masters)
kubectl taint node r1-k8s-master1 node-role.kubernetes.io/master:NoSchedule
kubectl taint node r1-k8s-master2 node-role.kubernetes.io/master:NoSchedule
kubectl taint node r1-k8s-master3 node-role.kubernetes.io/master:NoSchedule
node/r1-k8s-master1 tainted
node/r1-k8s-master2 tainted
node/r1-k8s-master3 tainted
# Restart the Ingress NGINX Controller DaemonSet rollout, then confirm its pods land only on the worker nodes
kubectl rollout restart daemonsets/rke2-ingress-nginx-controller -n kube-system
daemonset.apps/rke2-ingress-nginx-controller restarted
# Check the pods (the ingress-nginx-controller pods now run only on the worker nodes) - this takes a while
kubectl get pods -A -o wide | grep ingress-nginx
kube-system helm-install-rke2-ingress-nginx-ntjvd 0/1 Completed 0 10m 10.42.0.174 r1-k8s-master1 <none> <none>
kube-system rke2-ingress-nginx-controller-br2fj 1/1 Running 0 64s 10.42.4.4 r1-k8s-worker2 <none> <none>
kube-system rke2-ingress-nginx-controller-k5gfm 1/1 Running 0 20s 10.42.5.3 r1-k8s-worker3 <none> <none>
kube-system rke2-ingress-nginx-controller-xxrmc 1/1 Running 0 41s 10.42.3.3 r1-k8s-worker1 <none> <none>
# Check the Ingress NGINX DaemonSet (scaled down to 3 pods)
kubectl get daemonsets -n kube-system | grep ingress-nginx
rke2-ingress-nginx-controller 3 3 3 3 3 kubernetes.io/os=linux 9m50s
# Check the ingress controller pods
kubectl get pods -n kube-system | grep ingress-nginx-controller
rke2-ingress-nginx-controller-6xg5d 1/1 Running 0 2m35s
rke2-ingress-nginx-controller-hgwgt 1/1 Running 0 2m3s
rke2-ingress-nginx-controller-zs66j 1/1 Running 0 91s
# Install Helm
cd ~/
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
# Check the ingress controller Helm chart
helm ls -n kube-system | grep ingress-nginx
rke2-ingress-nginx kube-system 1 2024-02-02 09:59:26.784640626 +0000 UTC deployed rke2-ingress-nginx-4.8.200 1.9.3
# Check the ingress services (only the ingress-nginx admission service exists so far; a LoadBalancer service must be deployed to actually receive traffic)
kubectl get svc -n kube-system | grep ingress-nginx
rke2-ingress-nginx-controller-admission ClusterIP 10.43.170.20 <none> 443/TCP 10m
# Create the ingress controller directory
mkdir -p ~/k8s-opensource/ingress-nginx && cd ~/k8s-opensource/ingress-nginx
# Find the ingress-nginx Helm chart URL
sudo cat /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx.yaml | grep helm.cattle.io/chart-url
helm.cattle.io/chart-url: https://rke2-charts.rancher.io/assets/rke2-ingress-nginx/rke2-ingress-nginx-4.8.200.tgz
# Download and extract the Helm chart
wget https://rke2-charts.rancher.io/assets/rke2-ingress-nginx/rke2-ingress-nginx-4.8.200.tgz
tar -zvxf rke2-ingress-nginx-4.8.200.tgz && rm -rf rke2-ingress-nginx-4.8.200.tgz
# Edit values.yaml
cd rke2-ingress-nginx
vi values.yaml
service:          # around line 431
  enabled: true
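For context, the edited block sits under the `controller:` key in the chart's values.yaml. The rke2-ingress-nginx chart follows the upstream ingress-nginx chart layout, so the relevant section looks roughly like this (exact line numbers and defaults vary by chart version; `type: LoadBalancer` is the upstream default once the service is enabled):

```yaml
controller:
  service:
    enabled: true        # creates the controller Service shown below
    type: LoadBalancer   # External-IP will be assigned by MetalLB
```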
# Redeploy the ingress-nginx chart
helm upgrade -i rke2-ingress-nginx -n kube-system .
# Check the services (a new nginx-controller service of type LoadBalancer now exists alongside the admission service; its External-IP is assigned by MetalLB)
kubectl get svc -n kube-system | grep ingress-nginx
rke2-ingress-nginx-controller LoadBalancer 10.43.103.15 10.10.X.1 80:31650/TCP,443:30928/TCP 8m7s
rke2-ingress-nginx-controller-admission ClusterIP 10.43.170.20 <none> 443/TCP 20m
# The admission service maps to the pods' webhook port (8443); actual service traffic is received on ports 80/443 of the LoadBalancer service and forwarded to ports 80/443 on the pods.
# Verify the service -> pod endpoints
kubectl get ep -n kube-system | grep ingress-nginx
rke2-ingress-nginx-controller 10.42.3.3:443,10.42.4.4:443,10.42.5.3:443 + 3 more... 8m17s
rke2-ingress-nginx-controller-admission 10.42.3.3:8443,10.42.4.4:8443,10.42.5.3:8443 20m
# Check the ingress controller pods
kubectl get pods -n kube-system | grep ingress-nginx-controller
rke2-ingress-nginx-controller-qlxml 1/1 Running 0 4m56s
rke2-ingress-nginx-controller-wszlv 1/1 Running 0 5m51s
rke2-ingress-nginx-controller-xp9ll 1/1 Running 0 5m29s
# Check the ingress-nginx pod ports
kubectl describe pods -n kube-system rke2-ingress-nginx-controller-qlxml | grep Ports
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 80/TCP, 443/TCP, 0/TCP
# Deploy an Nginx pod and service to test the Ingress
kubectl create deployment nginx-test --image=nginx:alpine --replicas=1
deployment.apps/nginx-test created
kubectl expose deploy nginx-test --port=80 --name web-nginx-svc
service/web-nginx-svc exposed
# Write the Ingress yaml
vi nginx-ingress-test.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-test
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: nginx-test.domain.test.com
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: web-nginx-svc
            port:
              number: 80
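Note that the `kubernetes.io/ingress.class` annotation has been deprecated since Kubernetes 1.18 in favor of `spec.ingressClassName`. An equivalent manifest using the newer field would look like this (sketch, same backend as above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-test
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx          # replaces the kubernetes.io/ingress.class annotation
  rules:
  - host: nginx-test.domain.test.com
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: web-nginx-svc
            port:
              number: 80
```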
# Deploy and verify the Ingress
kubectl apply -f nginx-ingress-test.yaml
ingress.networking.k8s.io/ingress-nginx-test created
kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-nginx-test <none> nginx-test.domain.test.com 10.10.X.204,10.10.X.205,10.10.X.206 80 45s
# Temporarily map the test domain in /etc/hosts
sudo vi /etc/hosts
10.10.X.1 nginx-test.domain.test.com
# curl the test domain
curl nginx-test.domain.test.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
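Instead of editing /etc/hosts, curl can pin the host name to the LoadBalancer address for a one-off test with its `--resolve` option (sketch; 10.10.X.1 stands in for the External-IP shown above):

```shell
# Map nginx-test.domain.test.com:80 to the MetalLB-assigned IP for this request only
curl --resolve nginx-test.domain.test.com:80:10.10.X.1 http://nginx-test.domain.test.com
```

This avoids having to revert /etc/hosts after the test and also exercises the same Host-header routing as a real DNS lookup would.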
# Check the nginx-ingress-controller logs (confirms the request was served normally through the web-nginx-svc service)
kubectl logs -n kube-system rke2-ingress-nginx-controller-nphns
I0201 13:33:27.317382 7 main.go:107] "successfully validated configuration, accepting" ingress="default/ingress-nginx-test"
I0201 13:33:27.321507 7 store.go:440] "Found valid IngressClass" ingress="default/ingress-nginx-test" ingressclass="nginx"
I0201 13:33:27.321643 7 event.go:298] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-nginx-test", UID:"b3cef3e0-5207-453b-9e5e-45349a4f5a9a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"29418", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0201 13:33:27.321837 7 controller.go:190] "Configuration changes detected, backend reload required"
I0201 13:33:27.342086 7 controller.go:210] "Backend successfully reloaded"
I0201 13:33:27.342209 7 event.go:298] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"rke2-ingress-nginx-controller-nphns", UID:"7b103c2b-b31e-4043-926d-24ba63216b24", APIVersion:"v1", ResourceVersion:"23559", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0201 13:34:07.804059 7 status.go:304] "updating Ingress status" namespace="default" ingress="ingress-nginx-test" currentValue=null newValue=[{"ip":"10.10.X.204"},{"ip":"10.10.X.205"},{"ip":"10.10.X.206"}]
I0201 13:34:07.807461 7 event.go:298] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-nginx-test", UID:"b3cef3e0-5207-453b-9e5e-45349a4f5a9a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"29622", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
10.42.0.237 - - [01/Feb/2024:13:35:46 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.81.0" 90 0.002 [default-web-nginx-svc-80] [] 10.42.3.222:80 615 0.002 200 36b0268f3b2fd212b38abdd96ce0fedc
# Delete the test resources and revert /etc/hosts
kubectl delete -f nginx-ingress-test.yaml
ingress.networking.k8s.io "ingress-nginx-test" deleted
kubectl delete svc web-nginx-svc
service "web-nginx-svc" deleted
kubectl delete deploy nginx-test
deployment.apps "nginx-test" deleted