The Loki Logging System in Detail
Background
While recently designing the logging solution for our company's container cloud, we found the mainstream ELK (or EFK) stack rather heavyweight, and at this stage most of Elasticsearch's complex search features would go unused. We therefore settled on Grafana's open-source Loki logging system. The following introduces Loki's background.
Background and Motivation
When an application or a node in our container cloud runs into trouble, the troubleshooting process should go as follows:
Our monitoring is built on a customized Prometheus stack. The key concepts in Prometheus are metrics and alerts: a metric records that some value has been reached now or in the past, and an alert fires when a metric crosses a configured threshold. But this information alone is clearly not enough. As we know, the basic unit in Kubernetes is the Pod, and Pods write their logs to stdout and stderr; when something goes wrong we normally inspect the relevant logs in a UI or on the command line. For example: a Pod's memory usage grows large and triggers an alert. An administrator then goes to the console to confirm which Pod is affected, and to find out why its memory grew we also need to inspect that Pod's logs. Without a logging system, we would have to query them through the console or with commands:
If the application has crashed by then, the logs are gone, which is why we need a logging system that collects logs centrally. With ELK, however, users have to switch between Kibana and Grafana, which hurts the user experience. Loki's first goal, therefore, is to minimize the cost of switching between metrics and logs, helping to shorten incident response times and improve the user experience.
Problems with ELK
Many existing log collection solutions index logs with full-text search (ELK being the typical example). The upside is rich functionality and support for complex operations. The downside is that these systems tend to be complex to scale, resource-hungry, and hard to operate. Many of their features go unused: most queries only care about a time range and a few simple labels (host, service, and so on), so using such a solution is overkill.
Loki's second goal, therefore, is to strike a balance between ease of use and expressiveness in the query language.
Cost
Full-text indexing also drives up cost: put simply, sharding and replicating a full-text inverted index (as Elasticsearch does) is expensive. Alternative designs later appeared, such as OKLog, which uses an eventually consistent, grid-based distribution strategy. These design decisions cut costs dramatically and keep operations very simple, at the price of less convenient querying. Loki's third goal, therefore, is to provide a more cost-effective solution.
Architecture
Overall Architecture
Loki's architecture looks like this:
As you can see, Loki's architecture is very simple: it uses the same labels as Prometheus as its index. This means the same labels can be used both to query log content and to query monitoring data, which not only removes the cost of switching between the two kinds of query but also dramatically shrinks the log index. Loki reuses Prometheus's service discovery and label-relabeling libraries in its agent, promtail. In Kubernetes, promtail runs as a DaemonSet on every node, obtains the correct metadata for each log via the Kubernetes API, and ships the logs to Loki. The log storage architecture looks like this:
Reads and Writes
The write path relies mainly on two components, the Distributor and the Ingester. The overall flow is as follows:
Distributor
Once promtail has collected logs and sent them to Loki, the Distributor is the first component to receive them. The write volume can be very large, so logs cannot be written to the database as they arrive; that would overwhelm it. We need to batch and compress the data.
Loki does this by building compressed chunks, gzipping logs as they come in. The ingester component is stateful and responsible for building and flushing chunks: when a chunk reaches a certain size or age, it is flushed to storage. Each log stream maps to one ingester; when a log arrives at the Distributor, a hash of its metadata determines which ingester it should go to.
In addition, for redundancy and resilience, each log is replicated n times (3 by default).
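The hash-based routing with replication can be sketched in a few lines. This is an illustrative Python toy, not Loki's actual ring implementation (the real ring, inherited from Cortex, uses tokens and gossip); all names here are invented:

```python
import hashlib

class HashRing:
    """Toy hash ring: routes a log stream (identified by its label set)
    to `replicas` ingesters. Illustrative only, not Loki's real ring."""

    def __init__(self, ingesters, replicas=3):
        self.ingesters = sorted(ingesters)
        self.replicas = min(replicas, len(ingesters))

    def route(self, labels):
        # Hash the canonical label set to pick a starting position,
        # then take the next `replicas` ingesters around the ring.
        key = ",".join(f"{k}={v}" for k, v in sorted(labels.items()))
        h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        start = h % len(self.ingesters)
        return [self.ingesters[(start + i) % len(self.ingesters)]
                for i in range(self.replicas)]

ring = HashRing(["ingester-0", "ingester-1", "ingester-2", "ingester-3"])
targets = ring.route({"namespace": "cicd", "app": "mysql"})
# The same stream always hashes to the same set of ingesters,
# regardless of label order.
assert targets == ring.route({"app": "mysql", "namespace": "cicd"})
```

The key property is determinism: every Distributor computes the same ingesters for the same stream, so a stream's chunks stay together.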
Ingester
The Ingester receives the log and starts building a chunk:
Essentially, the log is compressed and appended to the chunk. Once a chunk "fills up" (it reaches a certain size or age), the ingester flushes it to the database. Chunks and the index go to separate databases, because they store different types of data.
After flushing a chunk, the ingester creates a new, empty chunk and appends new entries to that.
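The append-and-flush cycle can be sketched as follows. This is an illustrative Python sketch with invented names and thresholds, not Loki's actual chunk format or flush logic:

```python
import gzip
import time

class ChunkBuilder:
    """Toy ingester: buffer log lines into a chunk and flush the
    gzip-compressed chunk when it hits a size or age limit."""

    def __init__(self, store, max_bytes=1024, max_age_s=3600):
        self.store = store            # flushed chunks end up here
        self.max_bytes = max_bytes
        self.max_age_s = max_age_s
        self._new_chunk()

    def _new_chunk(self):
        self.lines = []
        self.created = time.time()

    def append(self, line):
        self.lines.append(line)
        compressed = gzip.compress("\n".join(self.lines).encode())
        full = len(compressed) >= self.max_bytes
        old = time.time() - self.created >= self.max_age_s
        if full or old:
            self.flush()

    def flush(self):
        # Write the compressed chunk out and start a fresh, empty one.
        if self.lines:
            self.store.append(gzip.compress("\n".join(self.lines).encode()))
            self._new_chunk()

store = []
ing = ChunkBuilder(store, max_bytes=64)
for i in range(100):
    ing.append(f'level=info msg="request {i} handled"')
ing.flush()  # flush whatever is left at the end
```

Real ingesters compress incrementally rather than recompressing on every append; the sketch only shows the "fill, flush, start new chunk" lifecycle.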
Querier
The read path is much simpler and is handled by the Querier. Given a time range and a label selector, the Querier consults the index to determine which chunks match, then greps through them to produce the results. It also fetches the latest, not-yet-flushed data from the Ingesters.
For each query, one querier gathers all the relevant logs for you. Query execution is parallelized, providing a distributed grep that keeps even large queries fast enough.
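A query thus reduces to "find matching chunks, then grep them". A minimal sketch with an invented in-memory data model (not Loki's real index or chunk layout):

```python
import re

def query(chunks, selector, start, end, pattern=None):
    """Toy querier: `chunks` is a list of dicts with `labels`,
    `start`, `end`, and `lines`. Select chunks whose labels match
    the selector and whose time range overlaps [start, end], then
    grep the lines. Illustrative only."""
    out = []
    for c in chunks:
        if not all(c["labels"].get(k) == v for k, v in selector.items()):
            continue  # label selector does not match this stream
        if c["end"] < start or c["start"] > end:
            continue  # no time overlap
        for line in c["lines"]:
            if pattern is None or re.search(pattern, line):
                out.append(line)
    return out

chunks = [
    {"labels": {"namespace": "cicd"}, "start": 0, "end": 10,
     "lines": ["ok", "error: build failed"]},
    {"labels": {"namespace": "default"}, "start": 0, "end": 10,
     "lines": ["error: other"]},
]
hits = query(chunks, {"namespace": "cicd"}, 0, 5, r"error")
# hits == ["error: build failed"]
```

In the real system the label/time filtering happens against the index store, and the grep step is fanned out in parallel across queriers.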
Scalability
Loki's index can be stored in Cassandra, Bigtable, or DynamoDB, while the chunks can go to any of several object stores. The Querier and Distributor are stateless components. The ingester is stateful, but when nodes join or leave, chunks are redistributed among the nodes to fit the new hash ring. And Cortex, which Loki's underlying storage is built on, has been running in production for years. With that assurance, I felt comfortable trying Loki out in my own environment.
Deployment
Installing Loki is very simple.
Create the namespace, e.g.:
oc new-project loki
Set up permissions:
oc adm policy add-scc-to-user anyuid -z default -n loki
oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:loki:default
Install Loki
Installation command:
oc create -f statefulset.json -n loki
statefulset.json is as follows:
{
"apiVersion": "apps/v1",
"kind": "StatefulSet",
"metadata": {
"name": "loki"
},
"spec": {
"podManagementPolicy": "OrderedReady",
"replicas": 1,
"revisionHistoryLimit": 10,
"selector": {
"matchLabels": {
"app": "loki"
}
},
"serviceName": "womping-stoat-loki-headless",
"template": {
"metadata": {
"annotations": {
"checksum/config": "da297d66ee53e0ce68b58e12be7ec5df4a91538c0b476cfe0ed79666343df72b",
"prometheus.io/port": "http-metrics",
"prometheus.io/scrape": "true"
},
"creationTimestamp": null,
"labels": {
"app": "loki",
"name": "loki"
}
},
"spec": {
"affinity": {},
"containers": [
{
"args": [
"-config.file=/etc/loki/local-config.yaml"
],
"image": "grafana/loki:latest",
"imagePullPolicy": "IfNotPresent",
"livenessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/ready",
"port": "http-metrics",
"scheme": "HTTP"
},
"initialDelaySeconds": 45,
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 1
},
"name": "loki",
"ports": [
{
"containerPort": 3100,
"name": "http-metrics",
"protocol": "TCP"
}
],
"readinessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/ready",
"port": "http-metrics",
"scheme": "HTTP"
},
"initialDelaySeconds": 45,
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 1
},
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/tmp/loki",
"name": "storage"
}
]
}
],
"dnsPolicy": "ClusterFirst",
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"terminationGracePeriodSeconds": 30,
"volumes": [
{
"emptyDir": {},
"name": "storage"
}
]
}
},
"updateStrategy": {
"type": "RollingUpdate"
}
}
}
Install Promtail
Installation command:
oc create -f configmap.json -n loki
configmap.json is as follows:
{
"apiVersion": "v1",
"data": {
"promtail.yaml": "client:\n backoff_config:\n maxbackoff: 5s\n maxretries: 5\n minbackoff: 100ms\n batchsize: 102400\n batchwait: 1s\n external_labels: {}\n timeout: 10s\npositions:\n filename: /run/promtail/positions.yaml\nserver:\n http_listen_port: 3101\ntarget_config:\n sync_period: 10s\n\nscrape_configs:\n- job_name: kubernetes-pods-name\n pipeline_stages:\n - docker: {}\n \n kubernetes_sd_configs:\n - role: pod\n relabel_configs:\n - source_labels:\n - __meta_kubernetes_pod_label_name\n target_label: __service__\n - source_labels:\n - __meta_kubernetes_pod_node_name\n target_label: __host__\n - action: drop\n regex: ^$\n source_labels:\n - __service__\n - action: labelmap\n regex: __meta_kubernetes_pod_label_(.+)\n - action: replace\n replacement: $1\n separator: /\n source_labels:\n - __meta_kubernetes_namespace\n - __service__\n target_label: job\n - action: replace\n source_labels:\n - __meta_kubernetes_namespace\n target_label: namespace\n - action: replace\n source_labels:\n - __meta_kubernetes_pod_name\n target_label: instance\n - action: replace\n source_labels:\n - __meta_kubernetes_pod_container_name\n target_label: container_name\n - replacement: /var/log/pods/*$1/*.log\n separator: /\n source_labels:\n - __meta_kubernetes_pod_uid\n - __meta_kubernetes_pod_container_name\n target_label: __path__\n- job_name: kubernetes-pods-app\n pipeline_stages:\n - docker: {}\n \n kubernetes_sd_configs:\n - role: pod\n relabel_configs:\n - action: drop\n regex: .+\n source_labels:\n - __meta_kubernetes_pod_label_name\n - source_labels:\n - __meta_kubernetes_pod_label_app\n target_label: __service__\n - source_labels:\n - __meta_kubernetes_pod_node_name\n target_label: __host__\n - action: drop\n regex: ^$\n source_labels:\n - __service__\n - action: labelmap\n regex: __meta_kubernetes_pod_label_(.+)\n - action: replace\n replacement: $1\n separator: /\n source_labels:\n - __meta_kubernetes_namespace\n - __service__\n target_label: job\n - action: replace\n 
source_labels:\n - __meta_kubernetes_namespace\n target_label: namespace\n - action: replace\n source_labels:\n - __meta_kubernetes_pod_name\n target_label: instance\n - action: replace\n source_labels:\n - __meta_kubernetes_pod_container_name\n target_label: container_name\n - replacement: /var/log/pods/*$1/*.log\n separator: /\n source_labels:\n - __meta_kubernetes_pod_uid\n - __meta_kubernetes_pod_container_name\n target_label: __path__\n- job_name: kubernetes-pods-direct-controllers\n pipeline_stages:\n - docker: {}\n \n kubernetes_sd_configs:\n - role: pod\n relabel_configs:\n - action: drop\n regex: .+\n separator: ''\n source_labels:\n - __meta_kubernetes_pod_label_name\n - __meta_kubernetes_pod_label_app\n - action: drop\n regex: ^([0-9a-z-.]+)(-[0-9a-f]{8,10})$\n source_labels:\n - __meta_kubernetes_pod_controller_name\n - source_labels:\n - __meta_kubernetes_pod_controller_name\n target_label: __service__\n - source_labels:\n - __meta_kubernetes_pod_node_name\n target_label: __host__\n - action: drop\n regex: ^$\n source_labels:\n - __service__\n - action: labelmap\n regex: __meta_kubernetes_pod_label_(.+)\n - action: replace\n replacement: $1\n separator: /\n source_labels:\n - __meta_kubernetes_namespace\n - __service__\n target_label: job\n - action: replace\n source_labels:\n - __meta_kubernetes_namespace\n target_label: namespace\n - action: replace\n source_labels:\n - __meta_kubernetes_pod_name\n target_label: instance\n - action: replace\n source_labels:\n - __meta_kubernetes_pod_container_name\n target_label: container_name\n - replacement: /var/log/pods/*$1/*.log\n separator: /\n source_labels:\n - __meta_kubernetes_pod_uid\n - __meta_kubernetes_pod_container_name\n target_label: __path__\n- job_name: kubernetes-pods-indirect-controller\n pipeline_stages:\n - docker: {}\n \n kubernetes_sd_configs:\n - role: pod\n relabel_configs:\n - action: drop\n regex: .+\n separator: ''\n source_labels:\n - __meta_kubernetes_pod_label_name\n - 
__meta_kubernetes_pod_label_app\n - action: keep\n regex: ^([0-9a-z-.]+)(-[0-9a-f]{8,10})$\n source_labels:\n - __meta_kubernetes_pod_controller_name\n - action: replace\n regex: ^([0-9a-z-.]+)(-[0-9a-f]{8,10})$\n source_labels:\n - __meta_kubernetes_pod_controller_name\n target_label: __service__\n - source_labels:\n - __meta_kubernetes_pod_node_name\n target_label: __host__\n - action: drop\n regex: ^$\n source_labels:\n - __service__\n - action: labelmap\n regex: __meta_kubernetes_pod_label_(.+)\n - action: replace\n replacement: $1\n separator: /\n source_labels:\n - __meta_kubernetes_namespace\n - __service__\n target_label: job\n - action: replace\n source_labels:\n - __meta_kubernetes_namespace\n target_label: namespace\n - action: replace\n source_labels:\n - __meta_kubernetes_pod_name\n target_label: instance\n - action: replace\n source_labels:\n - __meta_kubernetes_pod_container_name\n target_label: container_name\n - replacement: /var/log/pods/*$1/*.log\n separator: /\n source_labels:\n - __meta_kubernetes_pod_uid\n - __meta_kubernetes_pod_container_name\n target_label: __path__\n- job_name: kubernetes-pods-static\n pipeline_stages:\n - docker: {}\n \n kubernetes_sd_configs:\n - role: pod\n relabel_configs:\n - action: drop\n regex: ^$\n source_labels:\n - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\n - action: replace\n source_labels:\n - __meta_kubernetes_pod_label_component\n target_label: __service__\n - source_labels:\n - __meta_kubernetes_pod_node_name\n target_label: __host__\n - action: drop\n regex: ^$\n source_labels:\n - __service__\n - action: labelmap\n regex: __meta_kubernetes_pod_label_(.+)\n - action: replace\n replacement: $1\n separator: /\n source_labels:\n - __meta_kubernetes_namespace\n - __service__\n target_label: job\n - action: replace\n source_labels:\n - __meta_kubernetes_namespace\n target_label: namespace\n - action: replace\n source_labels:\n - __meta_kubernetes_pod_name\n target_label: instance\n - action: 
replace\n source_labels:\n - __meta_kubernetes_pod_container_name\n target_label: container_name\n - replacement: /var/log/pods/*$1/*.log\n separator: /\n source_labels:\n - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\n - __meta_kubernetes_pod_container_name\n target_label: __path__\n"
},
"kind": "ConfigMap",
"metadata": {
"creationTimestamp": "2019-09-05T01:05:03Z",
"labels": {
"app": "promtail",
"chart": "promtail-0.12.0",
"heritage": "Tiller",
"release": "lame-zorse"
},
"name": "lame-zorse-promtail",
"namespace": "loki",
"resourceVersion": "17921611",
"selfLink": "/api/v1/namespaces/loki/configmaps/lame-zorse-promtail",
"uid": "30fcb896-cf79-11e9-b58e-e4a8b6cc47d2"
}
}
oc create -f daemonset.json -n loki
daemonset.json is as follows:
{
"apiVersion": "apps/v1",
"kind": "DaemonSet",
"metadata": {
"annotations": {
"deployment.kubernetes.io/revision": "2"
},
"creationTimestamp": "2019-09-05T01:16:37Z",
"generation": 2,
"labels": {
"app": "promtail",
"chart": "promtail-0.12.0",
"heritage": "Tiller",
"release": "lame-zorse"
},
"name": "lame-zorse-promtail",
"namespace": "loki"
},
"spec": {
"revisionHistoryLimit": 10,
"selector": {
"matchLabels": {
"app": "promtail",
"release": "lame-zorse"
}
},
"updateStrategy": {
"rollingUpdate": {
"maxUnavailable": 1
},
"type": "RollingUpdate"
},
"template": {
"metadata": {
"annotations": {
"checksum/config": "75a25ee4f2869f54d394bf879549a9c89c343981a648f8d878f69bad65dba809",
"prometheus.io/port": "http-metrics",
"prometheus.io/scrape": "true"
},
"creationTimestamp": null,
"labels": {
"app": "promtail",
"release": "lame-zorse"
}
},
"spec": {
"affinity": {},
"containers": [
{
"args": [
"-config.file=/etc/promtail/promtail.yaml",
"-client.url=http://loki.loki.svc:3100/api/prom/push"
],
"env": [
{
"name": "HOSTNAME",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "spec.nodeName"
}
}
}
],
"image": "grafana/promtail:v0.3.0",
"imagePullPolicy": "IfNotPresent",
"name": "promtail",
"ports": [
{
"containerPort": 3101,
"name": "http-metrics",
"protocol": "TCP"
}
],
"readinessProbe": {
"failureThreshold": 5,
"httpGet": {
"path": "/ready",
"port": "http-metrics",
"scheme": "HTTP"
},
"initialDelaySeconds": 10,
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 1
},
"resources": {},
"securityContext": {
"readOnlyRootFilesystem": true,
"runAsUser": 0
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/etc/promtail",
"name": "config"
},
{
"mountPath": "/run/promtail",
"name": "run"
},
{
"mountPath": "/var/lib/docker/containers",
"name": "docker",
"readOnly": true
},
{
"mountPath": "/var/log/pods",
"name": "pods",
"readOnly": true
}
]
}
],
"dnsPolicy": "ClusterFirst",
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"terminationGracePeriodSeconds": 30,
"volumes": [
{
"configMap": {
"defaultMode": 420,
"name": "lame-zorse-promtail"
},
"name": "config"
},
{
"hostPath": {
"path": "/run/promtail",
"type": ""
},
"name": "run"
},
{
"hostPath": {
"path": "/var/lib/docker/containers",
"type": ""
},
"name": "docker"
},
{
"hostPath": {
"path": "/var/log/pods",
"type": ""
},
"name": "pods"
}
]
}
}
}
}
Install the Service
oc create -f service.json -n loki
service.json is as follows:
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"creationTimestamp": "2019-09-04T09:37:49Z",
"name": "loki",
"namespace": "loki",
"resourceVersion": "17800188",
"selfLink": "/api/v1/namespaces/loki/services/loki",
"uid": "a87fe237-cef7-11e9-b58e-e4a8b6cc47d2"
},
"spec": {
"externalTrafficPolicy": "Cluster",
"ports": [
{
"name": "lokiport",
"port": 3100,
"protocol": "TCP",
"targetPort": 3100
}
],
"selector": {
"app": "loki"
},
"sessionAffinity": "None",
"type": "NodePort"
},
"status": {
"loadBalancer": {}
}
}
Query Syntax
Loki exposes an HTTP API, which we will not cover in full here; see: https://github.com/grafana/loki/blob/master/docs/api.md
Here is how to use the query endpoints.
Step 1: fetch the label names Loki currently knows about:
curl http://192.168.25.30:30972/api/prom/label
{
"values": ["alertmanager", "app", "component", "container_name", "controller_revision_hash", "deployment", "deploymentconfig", "docker_registry", "draft", "filename", "instance", "job", "logging_infra", "metrics_infra", "name", "namespace", "openshift_io_component", "pod_template_generation", "pod_template_hash", "project", "projectname", "prometheus", "provider", "release", "router", "servicename", "statefulset_kubernetes_io_pod_name", "stream", "tekton_dev_pipeline", "tekton_dev_pipelineRun", "tekton_dev_pipelineTask", "tekton_dev_task", "tekton_dev_taskRun", "type", "webconsole"]
}
Step 2: fetch the values of a given label:
curl http://192.168.25.30:30972/api/prom/label/namespace/values
{"values":["cicd","default","gitlab","grafanaserver","jenkins","jx-staging","kube-system","loki","mysql-exporter","new2","openshift-console","openshift-infra","openshift-logging","openshift-monitoring","openshift-node","openshift-sdn","openshift-web-console","tekton-pipelines","test111"]}
Step 3: query by label, for example:
http://192.168.25.30:30972/api/prom/query?direction=BACKWARD&limit=1000&regexp=&query={namespace="cicd"}&start=1567644457221000000&end=1567730857221000000&refId=A
Parameters:

- query: a query expression (the syntax is detailed in the section below), e.g. {name=~"mysql.+"}; {namespace="cicd"} |= "error" queries the logs of the cicd namespace for lines containing "error".
- limit: the number of log lines to return.
- start: the start time as a Unix timestamp; defaults to one hour ago.
- end: the end time; defaults to now.
- direction: forward or backward; relevant when limit is set; defaults to backward.
- regexp: filters the results with a regular expression.
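Note that when calling this endpoint from code, the parameters should be URL-encoded, since the selector contains characters like { } " that are not URL-safe. A sketch using Python's standard library; the host and port are taken from the article's example and will differ in your environment:

```python
from urllib.parse import urlencode

# Host/port from the article's example cluster; replace with your own.
base = "http://192.168.25.30:30972/api/prom/query"
params = {
    "query": '{namespace="cicd"} |= "error"',
    "limit": 1000,
    "start": 1567644457221000000,  # Unix time in nanoseconds
    "end": 1567730857221000000,
    "direction": "BACKWARD",
}
url = base + "?" + urlencode(params)
# The selector is percent-encoded, e.g. query=%7Bnamespace%3D%22cicd%22%7D...
```

The resulting url can then be fetched with curl or any HTTP client.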
LogQL Syntax
Selectors
The label part of a query expression goes inside curly braces {}, with multiple label expressions separated by commas:
{app="mysql"}
The supported matchers are:

- =: exactly equal.
- !=: not equal.
- =~: matches the regular expression.
- !~: does not match the regular expression.
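The semantics of the four matchers can be made concrete with a small sketch (illustrative Python, not Loki's parser; as in Prometheus, regex matchers are anchored to the full label value):

```python
import re

def match(op, value, pattern):
    """Evaluate one LogQL-style label matcher against a label value."""
    if op == "=":
        return value == pattern
    if op == "!=":
        return value != pattern
    if op == "=~":
        # fullmatch: the regex must cover the entire label value
        return re.fullmatch(pattern, value) is not None
    if op == "!~":
        return re.fullmatch(pattern, value) is None
    raise ValueError(f"unknown matcher: {op}")

assert match("=", "mysql", "mysql")
assert match("!=", "mysql", "kafka")
assert match("=~", "mysql-0", "mysql.+")
assert match("!~", "kafka", "mysql.+")
```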
Filter Expressions
After writing the log stream selector, you can filter the results further with a search expression. A search expression can be plain text or a regular expression. For example:
- {job="mysql"} |= "error"
- {name="kafka"} |~ "tsdb-ops.*io:2003"
- {instance=~"kafka-[23]",name="kafka"} != "kafka.server:type=ReplicaManager"

Multiple filters can be chained:

- {job="mysql"} |= "error" != "timeout"
The currently supported operators are:

- |=: the line contains the string.
- !=: the line does not contain the string.
- |~: the line matches the regular expression.
- !~: the line does not match the regular expression.
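Chained filters apply left to right, each stage narrowing the set of lines. A small sketch of these semantics (illustrative Python, not Loki's implementation):

```python
import re

def apply_filters(lines, filters):
    """Apply a chain of LogQL-style line filters in order.
    Each filter is (op, pattern); ops mirror |=, !=, |~, !~."""
    for op, pat in filters:
        if op == "|=":
            lines = [line for line in lines if pat in line]
        elif op == "!=":
            lines = [line for line in lines if pat not in line]
        elif op == "|~":
            lines = [line for line in lines if re.search(pat, line)]
        elif op == "!~":
            lines = [line for line in lines if not re.search(pat, line)]
        else:
            raise ValueError(f"unknown filter: {op}")
    return lines

logs = ["error: timeout", "error: disk full", "info: ok"]
# Equivalent of: ... |= "error" != "timeout"
result = apply_filters(logs, [("|=", "error"), ("!=", "timeout")])
# result == ["error: disk full"]
```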
Regular expressions follow the RE2 syntax: https://github.com/google/re2/wiki/Syntax
Sources:
https://blog.csdn.net/Linkthaha/article/details/100575278
http://blog.csdn.net/Linkthaha/article/details/100575651
https://blog.csdn.net/Linkthaha/article/details/100582422
https://blog.csdn.net/Linkthaha/article/details/10058258
Republished from: https://mp.weixin.qq.com/s/GKJPTxuD_lrMXKmx2CE9Sw