카이도스's Tech Blog

Rancher k8s

Rancher K8s Setup - 6 (Ceph Storage)

카이도스 · 2024. 2. 23. 16:00

Related posts in this series:

2024.02.15 - [Rancher k8s] - Rancher K8s Setup - 1 (DNS)

2024.02.15 - [Rancher k8s] - Rancher K8s Setup - 2 (HAproxy)

2024.02.20 - [Rancher k8s] - Rancher K8s Setup - 3 (Master, Worker)

2024.02.21 - [Rancher k8s] - Rancher K8s Setup - 4 (MetalLB & Nginx Ingress)

2024.02.21 - [Rancher k8s] - Rancher K8s Setup - 5 (Rancher UI)

2024.02.21 - [Rancher k8s] - Rancher K8s Setup - 7 (Private Registry - Harbor)

Ceph Storage Setup

## Proceed with the Ceph setup on the K8s master node
mkdir -p ~/k8s-opensource/ceph-storage && cd ~/k8s-opensource/ceph-storage

# Clone the Rook Git repository (check for the latest release version)
git clone --single-branch --branch v1.13.4 https://github.com/rook/rook.git
cd rook/deploy/examples/

# Check the release branch
git branch

# Install Rook Ceph (CRDs, common resources, operator)
kubectl create -f crds.yaml -f common.yaml -f operator.yaml

# Check the Ceph operator resources (check again after 3-5 minutes)
kubectl get all -n rook-ceph
NAME                                      READY   STATUS    RESTARTS   AGE
pod/rook-ceph-operator-7d78486df4-6xm6h   1/1     Running   0          29s

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rook-ceph-operator   1/1     1            1           29s

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/rook-ceph-operator-7d78486df4   1         1         1       29s

# Check the pods
kubectl -n rook-ceph get pod
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-7d78486df4-fqz9b   1/1     Running   0          65s

# Deploy the Ceph cluster and check its resources
kubectl create -f cluster.yaml

# Check the logs
kubectl -n rook-ceph logs -l app=rook-ceph-operator -f

# Check the pods
kubectl -n rook-ceph get pod
NAME                                                       READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-fdh22                                     2/2     Running     0          4m5s
csi-cephfsplugin-h5jw5                                     2/2     Running     0          4m5s
csi-cephfsplugin-provisioner-5df547d5d-q8m42               5/5     Running     0          4m5s
csi-cephfsplugin-provisioner-5df547d5d-sjwjr               5/5     Running     0          4m5s
csi-cephfsplugin-vp6hw                                     2/2     Running     0          4m5s
csi-rbdplugin-58d4d                                        2/2     Running     0          4m5s
csi-rbdplugin-bdqh4                                        2/2     Running     0          4m5s
csi-rbdplugin-f7d27                                        2/2     Running     0          4m5s
csi-rbdplugin-provisioner-68bbc4f6b5-clqwv                 5/5     Running     0          4m5s
csi-rbdplugin-provisioner-68bbc4f6b5-n5qhk                 5/5     Running     0          4m5s
rook-ceph-crashcollector-r1-k8s-workre1-78c766f5dc-l548k   1/1     Running     0          2m31s
rook-ceph-crashcollector-r1-k8s-workre2-57566b9854-mqd99   1/1     Running     0          2m30s
rook-ceph-crashcollector-r1-k8s-workre3-7855979cc7-bkzpr   1/1     Running     0          2m45s
rook-ceph-exporter-r1-k8s-workre1-856fb9b44c-zcg26         1/1     Running     0          2m28s
rook-ceph-exporter-r1-k8s-workre2-7645488466-gjcrv         1/1     Running     0          2m27s
rook-ceph-exporter-r1-k8s-workre3-dd4b978c7-m7wjc          1/1     Running     0          2m45s
rook-ceph-mgr-a-58b87794b9-8bz7h                           3/3     Running     0          3m4s
rook-ceph-mgr-b-66c844bf4b-nxv6n                           3/3     Running     0          3m3s
rook-ceph-mon-a-bcbf5796c-gshws                            2/2     Running     0          3m55s
rook-ceph-mon-b-5f644ccf59-jft9w                           2/2     Running     0          3m31s
rook-ceph-mon-c-85bb745cd8-qbnj4                           2/2     Running     0          3m21s
rook-ceph-operator-7d78486df4-fqz9b                        1/1     Running     0          7m50s
rook-ceph-osd-0-6c956bc77-ll4nf                            2/2     Running     0          2m31s
rook-ceph-osd-1-6f49c6d57-mkmqc                            2/2     Running     0          2m30s
rook-ceph-osd-2-84dbcd7b66-52nlk                           2/2     Running     0          2m30s
rook-ceph-osd-prepare-r1-k8s-workre1-tzprl                 0/1     Completed   0          2m3s
rook-ceph-osd-prepare-r1-k8s-workre2-hpccr                 0/1     Completed   0          2m
rook-ceph-osd-prepare-r1-k8s-workre3-cq5cm                 0/1     Completed   0          117s

# Check
kubectl get all -n rook-ceph
kubectl get pods -n rook-ceph -w

# Check the Ceph cluster status (HEALTH can show a warning when availability cannot be guaranteed, e.g. on a single-node setup; with three workers here it reports HEALTH_OK)
kubectl -n rook-ceph get cephcluster
NAME        DATADIRHOSTPATH   MONCOUNT   AGE     PHASE   MESSAGE                        HEALTH      EXTERNAL   FSID
rook-ceph   /var/lib/rook     3          5m40s   Ready   Cluster created successfully   HEALTH_OK              bc9f95a3-1e13-4ec5-b3fa-e34837aa85d6
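If you would rather not poll manually, the sketch below waits until the CephCluster reaches the Ready phase and then prints the spec fields reflected in the output above (mon count, data directory, device discovery). It assumes kubectl v1.23+ for the --for=jsonpath option.

# Optionally wait until the CephCluster reports Ready instead of polling (kubectl 1.23+)
kubectl -n rook-ceph wait cephcluster/rook-ceph \
  --for=jsonpath='{.status.phase}'=Ready --timeout=15m

# Print the spec fields reflected in the output above (mon count, dataDirHostPath, useAllNodes)
kubectl -n rook-ceph get cephcluster rook-ceph \
  -o jsonpath='{.spec.mon.count}{"\n"}{.spec.dataDirHostPath}{"\n"}{.spec.storage.useAllNodes}{"\n"}'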

Connect to the Ceph Cluster via the Toolbox and Check Its Information

# Deploy the Ceph Toolbox
kubectl create -f toolbox.yaml

# Connect to the Ceph cluster using the Toolbox
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash

# Check the Ceph status
ceph status
  cluster:
    id:     bc9f95a3-1e13-4ec5-b3fa-e34837aa85d6
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 4m)
    mgr: a(active, since 2m), standbys: b
    osd: 3 osds: 3 up (since 3m), 3 in (since 3m)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 577 KiB
    usage:   81 MiB used, 2.9 TiB / 2.9 TiB avail
    pgs:     1 active+clean

# Check the Ceph OSD status
ceph osd status
ID  HOST             USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  r1-k8s-workre2  26.8M   999G      0        0       0        0   exists,up
 1  r1-k8s-workre3  26.8M   999G      0        0       0        0   exists,up
 2  r1-k8s-workre1  26.8M   999G      0        0       0        0   exists,up

# Check Ceph device (disk) information
ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL    USED  RAW USED  %RAW USED
hdd    2.9 TiB  2.9 TiB  81 MiB    81 MiB          0
TOTAL  2.9 TiB  2.9 TiB  81 MiB    81 MiB          0

--- POOLS ---
POOL  ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr   1    1  577 KiB        2  1.7 MiB      0    950 GiB

# Check Ceph RADOS information
rados df
POOL_NAME     USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS       RD  WR_OPS       WR  USED COMPR  UNDER COMPR
.mgr       1.7 MiB        2       0       6                   0        0         0     298  503 KiB     177  1.8 MiB         0 B          0 B

total_objects    2
total_used       81 MiB
total_avail      2.9 TiB
total_space      2.9 TiB

# Exit the toolbox
exit
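The same checks can also be scripted without opening an interactive shell; a minimal sketch that runs one-off commands against the same toolbox deployment:

# Run one-off Ceph commands through the toolbox without an interactive shell
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph health detail
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd tree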

Test

# Deploy and verify the rook-ceph-block (RBD) StorageClass
cd ~/k8s-opensource/ceph-storage/rook/deploy/examples
kubectl apply -f csi/rbd/storageclass.yaml
kubectl get sc
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   11s
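
# Optional sketch: confirm the provisioner and pool parameter of the new StorageClass
# (the "pool" parameter name assumes the stock example storageclass.yaml; it prints empty if yours differs)
kubectl get sc rook-ceph-block -o jsonpath='{.provisioner}{"\n"}{.parameters.pool}{"\n"}'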

# Deploy the WordPress and MySQL applications as a test
kubectl create -f mysql.yaml
kubectl create -f wordpress.yaml

# Check
kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-2cabbc4a-116b-4aff-a73e-60ff981ae78e   20Gi       RWO            rook-ceph-block   30s
wp-pv-claim      Bound    pvc-4a6a76ec-2839-40e3-8a01-7bfd2b3b7c08   20Gi       RWO            rook-ceph-block   14s

kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
wordpress-86bc5bb8d6-5p9gm         1/1     Running   0          54s
wordpress-mysql-6b49db8c4c-6bpv7   1/1     Running   0          70s

kubectl get deploy wordpress wordpress-mysql
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
wordpress         1/1     1            1           95s
wordpress-mysql   1/1     1            1           111s

kubectl get svc wordpress wordpress-mysql
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
wordpress         LoadBalancer   10.43.207.38   10.10.X.2    80:32640/TCP   115s
wordpress-mysql   ClusterIP      None           <none>        3306/TCP       2m11s

# Check the WordPress URL via the LB address and port (NodePort shown for reference)
NodePort=$(kubectl get service wordpress -o jsonpath='{.spec.ports[0].nodePort}')
echo $NodePort
32640

# Access the URL
http://10.10.X.2:80

# Delete the test resources
kubectl delete -f mysql.yaml
kubectl delete -f wordpress.yaml
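
Beyond the WordPress example, any workload can request block storage the same way. Below is a minimal PVC sketch against the rook-ceph-block StorageClass; the claim name and size are illustrative.

# Minimal PVC sketch using the rook-ceph-block StorageClass (name/size are illustrative)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 5Gi
EOF

# Confirm it binds, then clean up the example
kubectl get pvc app-data
kubectl delete pvc app-data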


Ceph Dashboard (UI) Setup

# Deploy dashboard-loadbalancer.yaml and verify the service
kubectl create -f dashboard-loadbalancer.yaml
kubectl get svc -n rook-ceph -o wide
NAME                                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE     SELECTOR
rook-ceph-exporter                     ClusterIP      10.43.114.97    <none>        9926/TCP            6m4s    app=rook-ceph-exporter,rook_cluster=rook-ceph
rook-ceph-mgr                          ClusterIP      10.43.85.122    <none>        9283/TCP            5m47s   app=rook-ceph-mgr,mgr_role=active,rook_cluster=rook-ceph
rook-ceph-mgr-dashboard                ClusterIP      10.43.152.193   <none>        8443/TCP            5m47s   app=rook-ceph-mgr,mgr_role=active,rook_cluster=rook-ceph
rook-ceph-mgr-dashboard-loadbalancer   LoadBalancer   10.43.214.212   10.10.X.2    8443:30933/TCP      6s      app=rook-ceph-mgr,mgr_role=active,rook_cluster=rook-ceph
rook-ceph-mon-a                        ClusterIP      10.43.228.122   <none>        6789/TCP,3300/TCP   7m6s    app=rook-ceph-mon,ceph_daemon_id=a,mon=a,mon_cluster=rook-ceph,rook_cluster=rook-ceph
rook-ceph-mon-b                        ClusterIP      10.43.70.127    <none>        6789/TCP,3300/TCP   6m41s   app=rook-ceph-mon,ceph_daemon_id=b,mon=b,mon_cluster=rook-ceph,rook_cluster=rook-ceph
rook-ceph-mon-c                        ClusterIP      10.43.94.114    <none>        6789/TCP,3300/TCP   6m29s   app=rook-ceph-mon,ceph_daemon_id=c,mon=c,mon_cluster=rook-ceph,rook_cluster=rook-ceph

# Check the logs
kubectl -n rook-ceph logs -l app=rook-ceph-mgr -f

# Access the dashboard
https://10.10.X.2:8443/

# Check the Ceph dashboard password (ID: admin)
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
;.Yg`G&;alRN5"I7T*9J

  • Click the person icon at the top right → Change password (password changed to: xgk8stest1234!@)
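
For reference, the dashboard URL can also be assembled from the LoadBalancer IP, and the admin password can be changed from the toolbox instead of the UI. This is a sketch assuming the ceph dashboard ac-user-set-password subcommand available in recent Ceph releases.

# Build the dashboard URL from the LoadBalancer service
DASH_IP=$(kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard-loadbalancer \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "https://${DASH_IP}:8443/"

# Change the admin password from the toolbox instead of the UI
# (the new password is written to a temp file because the command reads it with -i)
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- bash -c \
  'echo -n "xgk8stest1234!@" > /tmp/pw && ceph dashboard ac-user-set-password admin -i /tmp/pw && rm /tmp/pw'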
