Deploying and Using ClickHouse in Cloud-Native Scenarios

Introduction to ClickHouse

ClickHouse is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).

ClickHouse Features

For more on its capabilities, see the official documentation: https://clickhouse.com/docs/en/introduction/performance/#

Configuring Persistence for ClickHouse

Here, data persistence is backed by NFS.

Installing NFS

#A dedicated server is used for this demo; in practice any machine can run NFS (keeping it separate from the Kubernetes cluster, on its own machine, is recommended)
[root@nfs ~]# yum install -y nfs-utils rpcbind

#Next, create the NFS storage directory
[root@nfs ~]# mkdir -p /data/k8s-volume
[root@nfs ~]# chmod 755 /data/k8s-volume/

#Edit the NFS exports file
[root@nfs ~]# cat /etc/exports
/data/k8s-volume  *(rw,no_root_squash,sync)

#Exported directory: * allows any client host; rw grants read-write; sync commits writes to disk as well as memory; no_root_squash lets client root keep root privileges (it is NOT remapped to an unprivileged user)

#Next, start rpcbind
[root@nfs ~]# systemctl start rpcbind
[root@nfs ~]# systemctl enable rpcbind
[root@nfs ~]# systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-12-19 18:44:29 CST; 11min ago
 Main PID: 3126 (rpcbind)
   CGroup: /system.slice/rpcbind.service
           └─3126 /sbin/rpcbind -w

#NFS registers itself with rpcbind, so rpcbind must be started first

#Start NFS
[root@nfs ~]# systemctl restart nfs
[root@nfs ~]# systemctl enable nfs
[root@nfs ~]# systemctl status nfs
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
           └─order-with-mounts.conf
   Active: active (exited) since Thu 2019-12-19 18:44:30 CST; 13min ago
 Main PID: 3199 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service

#Check that rpcbind and NFS registered correctly
[root@nfs ~]# rpcinfo |grep nfs
    100003    3    tcp       0.0.0.0.8.1            nfs        superuser
    100003    4    tcp       0.0.0.0.8.1            nfs        superuser
    100227    3    tcp       0.0.0.0.8.1            nfs_acl    superuser
    100003    3    udp       0.0.0.0.8.1            nfs        superuser
    100003    4    udp       0.0.0.0.8.1            nfs        superuser
    100227    3    udp       0.0.0.0.8.1            nfs_acl    superuser
    100003    3    tcp6      ::.8.1                 nfs        superuser
    100003    4    tcp6      ::.8.1                 nfs        superuser
    100227    3    tcp6      ::.8.1                 nfs_acl    superuser
    100003    3    udp6      ::.8.1                 nfs        superuser
    100003    4    udp6      ::.8.1                 nfs        superuser
    100227    3    udp6      ::.8.1                 nfs_acl    superuser

#Check the effective options of the NFS export
[root@nfs ~]# cat /var/lib/nfs/etab
/data/k8s-volume   *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)
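
#(Optional) If /etc/exports changes later, re-export without restarting the service
[root@nfs ~]# exportfs -ra
[root@nfs ~]# exportfs -v    #list the active exports with their options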

#Confirm the export is visible
[root@nfs ~]# showmount -e 127.0.0.1
Export list for 127.0.0.1:
/data/k8s-volume *
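
As an extra sanity check, the export can also be mounted by hand from another machine, for example a Kubernetes node. A quick sketch; it assumes nfs-utils is installed on the client and uses the NFS server address 192.168.31.101 introduced below:

[root@k8s-01 ~]# mkdir -p /mnt/nfs-test
[root@k8s-01 ~]# mount -t nfs 192.168.31.101:/data/k8s-volume /mnt/nfs-test
[root@k8s-01 ~]# touch /mnt/nfs-test/hello && rm /mnt/nfs-test/hello    #verify read-write access
[root@k8s-01 ~]# umount /mnt/nfs-test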

Create the NFS client provisioner. In this walkthrough the NFS server address is 192.168.31.101 and the data directory is /data/k8s-volume.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.31.101           #NFS server address
            - name: NFS_PATH
              value: /data/k8s-volume     #NFS export directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.31.101
            path: /data/k8s-volume
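
Save this manifest (the filename nfs-client-provisioner.yaml is just an assumption) and apply it once the RBAC objects from the next step are in place, since the Deployment references the nfs-client-provisioner ServiceAccount:

$ kubectl apply -f nfs-client-provisioner.yaml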

Next we need to create a ServiceAccount and bind the nfs-client-provisioner's ServiceAccount to an nfs-client-provisioner-runner ClusterRole. That ClusterRole declares a set of permissions, including create/delete/get/list/watch on persistentvolumes, which is what allows the provisioner to create PVs automatically.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get""list""watch""create""delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get""list""watch""update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get""list""watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list""watch""create""update""patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create""delete""get""list""watch""patch""update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
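
Apply the RBAC manifest the same way (filename assumed) and confirm the ServiceAccount exists:

$ kubectl apply -f nfs-client-rbac.yaml
$ kubectl get sa nfs-client-provisioner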

Check that the pod is OK

[root@k8s-01 nfs]# kubectl  get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7995946c89-n7bsc   1/1     Running   0          13m

Create the StorageClass. Here we declare a StorageClass object named managed-nfs-storage.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; it must match the deployment's PROVISIONER_NAME env
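
Apply it (filename assumed). Optionally, the class can also be marked as the cluster default, so that PVCs with no explicit storageClassName use it:

$ kubectl apply -f storageclass.yaml
#Optional: make managed-nfs-storage the default StorageClass
$ kubectl patch storageclass managed-nfs-storage \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'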

Check the status:

[root@k8s-01 nfs]# kubectl  get storageclasses.storage.k8s.io
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  104d

Create a PVC for ClickHouse

First, create a namespace to hold the ClickHouse resources:

$ kubectl create ns test

The PVC YAML is as follows:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clickhouse-pvc
  namespace: test
spec:
  resources:
    requests:
      storage: 10Gi                         #requested volume size
  accessModes:
  - ReadWriteMany                            # PVC access mode
  storageClassName: "managed-nfs-storage"    #StorageClass name
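
Apply the PVC (filename assumed):

$ kubectl apply -f clickhouse-pvc.yaml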

Check the status:

[root@k8s-01 clickhouse]# kubectl  get pvc -n test
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
clickhouse-pvc   Bound    pvc-ee8a47fc-a196-459f-aca4-143a8af58bf3   10Gi       RWX            managed-nfs-storage   25s
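
Behind the scenes the provisioner created a matching PV and a per-volume subdirectory on the NFS export; for this provisioner the directory name conventionally follows ${namespace}-${pvcName}-${pvName}, which is easy to verify:

$ kubectl get pv    #shows the auto-created volume bound to test/clickhouse-pvc
[root@nfs ~]# ls /data/k8s-volume    #expect something like test-clickhouse-pvc-pvc-ee8a47fc-...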

Installing ClickHouse

Because we need to modify users.xml and tweak a few configuration parameters, I download users.xml, edit it, and mount it with a ConfigMap.

#You can download my edited config directly, or start a ClickHouse container and copy its users.xml out to modify
wget https://d.frps.cn/file/kubernetes/clickhouse/users.xml

[root@k8s-01 clickhouse]# kubectl create cm -n test clickhouse-users --from-file=users.xml   #skip this if you don't need the config persisted
configmap/clickhouse-users created
[root@k8s-01 clickhouse]# kubectl get cm -n test
NAME               DATA   AGE
clickhouse-users   1      5s
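
If you prefer to start from the stock configuration rather than the download above, one way (assuming Docker is available locally) is to copy users.xml straight out of the official image:

#Print the image's default users.xml and save it locally for editing
$ docker run --rm --entrypoint cat clickhouse/clickhouse-server /etc/clickhouse-server/users.xml > users.xml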

The ClickHouse YAML is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: clickhouse
  name: clickhouse
  namespace: test
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: clickhouse
  template:
    metadata:
      labels:
        app: clickhouse
    spec:
      containers:
      - image: clickhouse/clickhouse-server
        imagePullPolicy: IfNotPresent
        name: clickhouse
        ports:
        - containerPort: 8123
          protocol: TCP
        resources:
          limits:
            cpu: 1048m
            memory: 2Gi
          requests:
            cpu: 1048m
            memory: 2Gi
        volumeMounts:
        - mountPath: /var/lib/clickhouse
          name: clickhouse-volume
        - mountPath: /etc/clickhouse-server/users.xml
          subPath: users.xml
          name: clickhouse-users
      volumes:
      - name: clickhouse-users
        configMap:
          name: clickhouse-users
          defaultMode: 511
      - name: clickhouse-volume
        persistentVolumeClaim:
          claimName: clickhouse-pvc
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: clickhouse
  namespace: test
spec:
  ports:
  - port: 8123
    protocol: TCP
    targetPort: 8123
  selector:
    app: clickhouse
  type: ClusterIP
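
Save and apply the Deployment and Service (filename assumed):

$ kubectl apply -f clickhouse.yaml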

Check that the service came up properly; the pod events below are from kubectl describe:

Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned test/clickhouse-bd6cb4f4b-8b6lx to k8s-02
  Normal  Pulling    6m17s      kubelet, k8s-02    Pulling image "clickhouse/clickhouse-server"
  Normal  Pulled     4m25s      kubelet, k8s-02    Successfully pulled image "clickhouse/clickhouse-server"
  Normal  Created    4m20s      kubelet, k8s-02    Created container clickhouse
  Normal  Started    4m17s      kubelet, k8s-02    Started container clickhouse

Check the pod and svc status

[root@k8s-01 clickhouse]# kubectl  get pod -n test
NAME                         READY   STATUS    RESTARTS   AGE
clickhouse-bd6cb4f4b-8b6lx   1/1     Running   0          7m4s

[root@k8s-01 clickhouse]# kubectl  get svc -n test
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
clickhouse   ClusterIP   10.100.88.207   <none>        8123/TCP   7m23s

Test access from inside the pod

[root@k8s-01 clickhouse]# kubectl exec -it -n test clickhouse-bd6cb4f4b-8b6lx bash     #enter the container
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@clickhouse-bd6cb4f4b-8b6lx:/# clickhouse-client                          #connect with the client
ClickHouse client version 21.12.3.32 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 21.12.3 revision 54452.

clickhouse-bd6cb4f4b-8b6lx :) show databases;                     #list the databases

SHOW DATABASES

Query id: d89a782e-2fb5-47e8-a4e0-1ab3aa038bdf

┌─name───────────────┐
│ INFORMATION_SCHEMA │
│ default            │
│ information_schema │
│ system             │
└────────────────────┘

4 rows in set. Elapsed: 0.003 sec.

clickhouse-bd6cb4f4b-8b6lx :) create database abcdocker                 #create a test database

CREATE DATABASE abcdocker

Query id: 3a7aa992-9fe1-49fe-bc54-f537e0f4a104

Ok.

0 rows in set. Elapsed: 3.353 sec.

clickhouse-bd6cb4f4b-8b6lx :) show databases;

SHOW DATABASES

Query id: c53996ba-19de-4ffa-aa7f-2f3c305d5af5

┌─name───────────────┐
│ INFORMATION_SCHEMA │
│ abcdocker          │
│ default            │
│ information_schema │
│ system             │
└────────────────────┘

5 rows in set. Elapsed: 0.006 sec.

clickhouse-bd6cb4f4b-8b6lx :) use abcdocker;

USE abcdocker

Query id: e8302401-e922-4677-9ce3-28c263d162b1

Ok.

0 rows in set. Elapsed: 0.002 sec.

clickhouse-bd6cb4f4b-8b6lx :) show tables

SHOW TABLES

Query id: 29b3ec6d-6486-41f5-a526-28e80ea17107

Ok.

0 rows in set. Elapsed: 0.003 sec.

clickhouse-bd6cb4f4b-8b6lx :)

Next, we create a container with telnet to test whether the service is reachable directly via its svc name:

$ kubectl run -n test --generator=run-pod/v1 -i --tty busybox --image=busybox --restart=Never -- sh
/ # telnet clickhouse 8123
Connected to clickhouse

#From a different namespace, use the full name clickhouse.test.svc.cluster.local
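
Since 8123 is ClickHouse's HTTP port, the service can also be exercised with curl; a quick sketch (curlimages/curl is just a convenient image choice; a healthy server answers /ping with Ok.):

$ kubectl run -n test curl-test -i --tty --rm --image=curlimages/curl --restart=Never -- sh
/ $ curl http://clickhouse:8123/ping
/ $ echo 'SELECT version()' | curl 'http://clickhouse:8123/' --data-binary @-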

Accessing ClickHouse from Outside the Cluster

Inside Kubernetes we call the service by its svc name; from outside, access can be exposed via a NodePort.

#External svc YAML (NodePort)
apiVersion: v1
kind: Service
metadata:
  name: clickhouse-node
  namespace: test
spec:
  ports:
  - port: 8123
    protocol: TCP
    targetPort: 8123
  selector:
    app: clickhouse
  type: NodePort

[root@k8s-01 clickhouse]# kubectl  get svc -n test
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
clickhouse        ClusterIP   10.100.88.207   <none>        8123/TCP         33m
clickhouse-node   NodePort    10.99.147.187   <none>        8123:32445/TCP   8s
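
With the NodePort in place, the HTTP interface is reachable on port 32445 of any node; for example (replace <node-ip> with a real node address):

$ curl http://<node-ip>:32445/ping
$ echo 'SELECT 1' | curl 'http://<node-ip>:32445/' --data-binary @-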

#On Alibaba Cloud managed Kubernetes you can use an Alibaba Cloud LoadBalancer directly
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "xxxx"
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners: "true"
  name: clickhouse-ck
  namespace: test
spec:
  ports:
  - port: 8123
    protocol: TCP
    targetPort: 8123
  selector:
    app: clickhouse
  type: LoadBalancer

First, download the Windows client (DBeaver):

https://d.frps.cn/file/kubernetes/clickhouse/dbeaver-ce-7.1.4-x86_64-setup.exe

Next, connect to ClickHouse (after installing the downloaded package) and check that the database we created exists.

Add a ClickHouse connection.

The database we created is already visible here; it is still just an empty database.

If we need to set a password for ClickHouse, we only need to modify the mounted ConfigMap:

root@clickhouse-bd6cb4f4b-8b6lx:/etc/clickhouse-server# cat users.xml |grep pass
            <!-- See also the files in users.d directory where the password can be overridden.
                 If you want to specify password in plaintext (not recommended), place it in 'password' element.
                 Example: <password>qwerty</password>.
                 If you want to specify SHA256, place it in 'password_sha256_hex' element.
                 Example: <password_sha256_hex>65e84be33532fb784c48129675f9eff3a682b27168c0ea744b2cf58ee02337c5</password_sha256_hex>
                 If you want to specify double SHA1, place it in 'password_double_sha1_hex' element.
                 Example: <password_double_sha1_hex>e395796d6546b1b65db9d665cd43f0e858dd4303</password_double_sha1_hex>
                  place 'kerberos' element instead of 'password' (and similar) elements.
                 How to generate decent password:
                 In first line will be password and in second - corresponding SHA256.
                 In first line will be password and in second - corresponding double SHA1.
            <password></password>     #empty value = passwordless access
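
For example, to require a password for the default user, a rough sketch (the sha256sum pipeline mirrors the hint inside users.xml itself; the kubectl flags assume a reasonably recent client):

#Generate a password and its SHA-256 hex digest
$ PASSWORD=$(base64 < /dev/urandom | head -c12); echo "$PASSWORD"
$ echo -n "$PASSWORD" | sha256sum | tr -d '-'

#Put the digest into users.xml as <password_sha256_hex>...</password_sha256_hex>,
#then refresh the ConfigMap and restart the pod so the new file is picked up
$ kubectl -n test create cm clickhouse-users --from-file=users.xml --dry-run=client -o yaml | kubectl apply -f -
$ kubectl -n test rollout restart deployment clickhouse
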
Related articles:

  1. Kubernetes 1.14 binary cluster installation [1]

  2. Prometheus Operator persistent storage [2]

  3. Persistent storage with StorageClass [3]

  4. CentOS 7 etcd cluster configuration guide [4]

References

[1] Kubernetes 1.14 binary cluster installation: https://i4t.com/4253.html
[2] Prometheus Operator persistent storage: https://i4t.com/4586.html
[3] Persistent storage with StorageClass: https://i4t.com/4475.html
[4] CentOS 7 etcd cluster configuration guide: https://i4t.com/4403.html

Original article: https://i4t.com/5245.html
