Centralized Logging with Loki and Promtail on Kubernetes
Complete your Kubernetes observability stack by adding centralized logging with Grafana Loki and Promtail. This guide shows you how to set up Loki for log storage and Promtail as a DaemonSet to collect logs from all your pods, complementing the monitoring capabilities of the kube-prometheus stack.
Introduction to Loki and Promtail
In my previous post about the kube-prometheus stack, we set up comprehensive metrics monitoring for our Kubernetes cluster. However, a complete observability solution requires three pillars: metrics, logs, and traces. Today, we’ll address the logging component by setting up Grafana Loki for log storage and Promtail for log collection.
What is Loki?
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. It’s designed to be cost-effective and easy to operate: instead of indexing the full contents of your logs, it indexes only a small set of labels for each log stream.
What is Promtail?
Promtail is an agent that ships the contents of local logs to a Loki instance. It is usually deployed as a DaemonSet in Kubernetes to ensure that logs from all pods are collected across all nodes in the cluster.
Prerequisites
Before we begin, ensure you have:
- A running Kubernetes cluster (1.16+). If you need to set one up, refer to my guide on creating a HA Kubernetes cluster.
- Helm 3.x installed
- kubectl configured to communicate with your cluster
- StorageClass configured for persistent volumes (e.g., Longhorn)
- The kube-prometheus stack already deployed (refer to my previous guide)
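A quick way to sanity-check these prerequisites from your workstation:
# Verify client tooling and cluster connectivity
kubectl version
helm version
# Confirm a StorageClass (e.g., longhorn) exists for Loki's persistent volume
kubectl get storageclass
# Confirm the kube-prometheus stack is up in the monitoring namespace
kubectl -n monitoring get pods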
Installing Loki with Helm
We’ll deploy Loki using the official Helm chart from Grafana. Let’s go through the steps:
1. Add the Grafana Helm Repository
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
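You can confirm the chart is now discoverable:
# The loki chart should appear in the search results
helm search repo grafana/loki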
2. Create a Values File for Loki
Create a file named loki-values.yaml with the following content:
deploymentMode: SingleBinary
loki:
  auth_enabled: false
  commonConfig:
    replication_factor: 1
  storage:
    type: "filesystem"
  schemaConfig:
    configs:
      - from: "2025-01-01"
        store: tsdb
        index:
          prefix: loki_index_
          period: 24h
        object_store: filesystem
        schema: v13
  limits_config:
    # Note: retention is only enforced when the compactor's retention is enabled
    retention_period: 4d
    ingestion_rate_mb: 10
    ingestion_burst_size_mb: 20
singleBinary:
  replicas: 1
  persistence:
    enableStatefulSetAutoDeletePVC: true
    enabled: true
    size: 10Gi
    storageClass: longhorn
read:
  replicas: 0
backend:
  replicas: 0
write:
  replicas: 0
monitoring:
  serviceMonitor:
    enabled: true
    labels:
      release: kube-prometheus
  selfMonitoring:
    enabled: false
    grafanaAgent:
      installOperator: false
This configuration deploys Loki in "SingleBinary" mode, which is suitable for small to medium clusters. For production environments with high log volumes, consider the "SimpleScalable" or "Distributed" deployment modes instead.
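If you want to compare what the other deployment modes configure, you can dump the chart’s default values and browse them locally:
# Write the chart's full default values to a file for reference
helm show values grafana/loki > loki-default-values.yaml
less loki-default-values.yaml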
3. Install Loki
Now, deploy Loki using Helm:
helm install loki grafana/loki --namespace monitoring -f loki-values.yaml
4. Verify the Loki Installation
Check that the Loki pods are running:
kubectl -n monitoring get pods -l app.kubernetes.io/name=loki
You should see a pod for Loki in the Running state.
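You can also confirm that Loki answers over HTTP. The service name below assumes the Helm release was named loki, as in the install command above:
# In one terminal: forward Loki's HTTP port
kubectl -n monitoring port-forward svc/loki 3100:3100
# In another terminal: probe the readiness endpoint
curl http://localhost:3100/ready
# Prints "ready" once startup has finished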
Deploying Promtail as a DaemonSet
Promtail will run as a DaemonSet to collect logs from all nodes in your cluster. We need to create several Kubernetes resources for Promtail:
1. Create the RBAC Resources
Promtail needs permissions to discover pods and read their metadata. Create a file named promtail-rbac.yaml with the following content:
# ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: promtail-serviceaccount
  namespace: monitoring
---
# ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: promtail-clusterrole
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - services
      - pods
    verbs:
      - get
      - watch
      - list
---
# ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: promtail-clusterrolebinding
subjects:
  - kind: ServiceAccount
    name: promtail-serviceaccount
    namespace: monitoring
roleRef:
  kind: ClusterRole
  name: promtail-clusterrole
  apiGroup: rbac.authorization.k8s.io
Apply the RBAC resources:
kubectl apply -f promtail-rbac.yaml
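To double-check that the binding took effect, impersonate the service account with kubectl:
# Should print "yes" if the ClusterRoleBinding is in place
kubectl auth can-i list pods \
  --as=system:serviceaccount:monitoring:promtail-serviceaccount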
2. Create the Promtail ConfigMap
Create a file named promtail-configmap.yaml with the following content:
apiVersion: v1
kind: ConfigMap
metadata:
  name: promtail-config
  namespace: monitoring
data:
  promtail.yaml: |
    server:
      http_listen_port: 9080
      grpc_listen_port: 0
    clients:
      - url: http://loki.monitoring:3100/loki/api/v1/push
    positions:
      # Stored under /run/promtail (a writable hostPath mounted in the DaemonSet
      # below) so positions survive pod restarts and work with a read-only root
      # filesystem; /tmp would not be writable under readOnlyRootFilesystem.
      filename: /run/promtail/positions.yaml
    target_config:
      sync_period: 10s
    scrape_configs:
      - job_name: pod-logs
        kubernetes_sd_configs:
          - role: pod
        pipeline_stages:
          - cri: {}
        relabel_configs:
          # Promtail only scrapes targets whose __host__ matches its HOSTNAME
          # env var (set to the node name in the DaemonSet), so each agent
          # handles only the pods on its own node
          - source_labels:
              - __meta_kubernetes_pod_node_name
            target_label: __host__
          # Turn Kubernetes pod labels into Loki labels
          - action: labelmap
            regex: __meta_kubernetes_pod_label_(.+)
          # job = <namespace>/<pod>
          - action: replace
            replacement: $1
            separator: /
            source_labels:
              - __meta_kubernetes_namespace
              - __meta_kubernetes_pod_name
            target_label: job
          - action: replace
            source_labels:
              - __meta_kubernetes_namespace
            target_label: namespace
          - action: replace
            source_labels:
              - __meta_kubernetes_pod_name
            target_label: pod
          - action: replace
            source_labels:
              - __meta_kubernetes_pod_container_name
            target_label: container
          # Build the log file glob: /var/log/pods/*<pod-uid>/<container>/*.log
          - replacement: /var/log/pods/*$1/*.log
            separator: /
            source_labels:
              - __meta_kubernetes_pod_uid
              - __meta_kubernetes_pod_container_name
            target_label: __path__
          # Drop noisy labels we don't want as Loki stream labels
          - action: labeldrop
            regex: (app_kubernetes_io_.*|helm_sh_chart|controller_revision_hash|pod_template_generation|stream)
Apply the ConfigMap:
kubectl apply -f promtail-configmap.yaml
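Optionally, you can lint the embedded Promtail config before rolling it out. This sketch assumes Docker is available on your workstation and that the -check-syntax flag exists in the Promtail version you pull:
# Extract the rendered promtail.yaml from the ConfigMap
kubectl -n monitoring get configmap promtail-config \
  -o jsonpath='{.data.promtail\.yaml}' > /tmp/promtail-check.yaml
# Ask the promtail binary itself to validate the file
docker run --rm -v /tmp/promtail-check.yaml:/etc/promtail/promtail.yaml \
  grafana/promtail:latest -config.file=/etc/promtail/promtail.yaml -check-syntax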
3. Create the Promtail DaemonSet
Create a file named promtail-daemonset.yaml with the following content:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: promtail-daemonset
  namespace: monitoring
spec:
  selector:
    matchLabels:
      name: promtail
  template:
    metadata:
      labels:
        name: promtail
    spec:
      serviceAccountName: promtail-serviceaccount
      containers:
        - name: promtail-container
          # Consider pinning a specific version instead of "latest" for reproducible rollouts
          image: grafana/promtail:latest
          args:
            - -config.file=/etc/promtail/promtail.yaml
          env:
            - name: "HOSTNAME"
              valueFrom:
                fieldRef:
                  fieldPath: "spec.nodeName"
          volumeMounts:
            - name: logs
              mountPath: /var/log
              readOnly: true
            - name: promtail-config
              mountPath: /etc/promtail
            - name: run
              mountPath: /run/promtail
          ports:
            - containerPort: 9080
              name: http-metrics
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi
          securityContext:
            readOnlyRootFilesystem: true
            runAsUser: 0
      volumes:
        - name: logs
          hostPath:
            path: /var/log
        - name: promtail-config
          configMap:
            name: promtail-config
        # Writable hostPath for the positions file (the root filesystem is read-only)
        - name: run
          hostPath:
            path: /run/promtail
Apply the DaemonSet:
kubectl apply -f promtail-daemonset.yaml
4. Verify the Promtail Deployment
Check that Promtail is running on all nodes:
kubectl -n monitoring get pods -l name=promtail
You should see one Promtail pod per node in your cluster.
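Two further spot checks are useful here: Promtail’s own logs, and its status UI, which lists the log files it has discovered:
# Tail one Promtail pod's logs; look for messages about targets being added
kubectl -n monitoring logs ds/promtail-daemonset --tail=20
# Promtail serves a status UI (including a targets page) on port 9080
kubectl -n monitoring port-forward ds/promtail-daemonset 9080:9080
# then open http://localhost:9080/targets in a browser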
Configuring Grafana to Use Loki
If you followed my kube-prometheus stack guide, you already have Grafana installed. Now, let’s configure it to use Loki as a data source.
Since we’re using Helm for our kube-prometheus-stack installation, the most elegant way to add Loki as a data source is to update the values.yaml file for the kube-prometheus-stack and perform an upgrade:
1. Add Loki as a Grafana Data Source in kube-prometheus-stack
If you’ve saved your kube-prometheus-stack values file, add the following section to it:
grafana:
  additionalDataSources:
    - name: Loki
      type: loki
      uid: loki
      access: proxy
      url: http://loki.monitoring.svc.cluster.local:3100
      isDefault: false
      # jsonData:
      #   maxLines: 1000
      editable: false
If you didn’t save your values file, you can create a new one with just this section.
2. Upgrade Your kube-prometheus-stack Installation
helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitoring -f your-prometheus-values.yaml
This will update the Grafana configuration without any downtime and add Loki as a data source.
Alternative: Manually Add Loki as a Data Source
If you prefer not to modify your Helm release, you can also add the Loki data source manually through the Grafana UI:
- Access your Grafana dashboard
- Go to Configuration > Data sources
- Click “Add data source”
- Select “Loki”
- Set the URL to http://loki.monitoring:3100
- Click “Save & Test”
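Whichever route you take, you can verify the endpoint Grafana will talk to by querying Loki’s label API through a port-forward (as in the Loki verification step earlier):
# Lists the label names Loki has indexed; expect namespace, pod, container, etc.
curl -s http://localhost:3100/loki/api/v1/labels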
Using Loki in Grafana
Now that you have Loki and Promtail set up, you can query your logs in Grafana:
- Log in to your Grafana instance
- Go to Explore (compass icon in the left sidebar)
- Select “Loki” as the data source
- Start querying your logs!
Basic Loki Queries
Loki uses a query language called LogQL. Here are some basic queries to get you started:
- View logs from a specific namespace:
{namespace="kube-system"}
- View logs from a specific pod:
{namespace="monitoring", pod="promtail-daemonset-abc123"}
- View logs containing an error:
{namespace="default"} |= "error"
- View logs with a specific JSON field:
{namespace="default"} | json | level="error"
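The same queries work against Loki’s HTTP API, which is handy for scripting. For example, through the port-forward used earlier:
# Fetch up to 10 recent error lines from the default namespace
curl -s -G http://localhost:3100/loki/api/v1/query_range \
  --data-urlencode 'query={namespace="default"} |= "error"' \
  --data-urlencode 'limit=10'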
Creating a Loki Dashboard
Let’s create a simple dashboard to monitor errors across namespaces:
- In Grafana, click on “+” and select “Dashboard”
- Click “Add a new panel”
- Select “Loki” as the data source
- Use this query:
sum(count_over_time({namespace=~".+"} |= "error" [15m])) by (namespace)
- Set the visualization to “Bar gauge” or “Time series”
- Give your panel a title like “Errors by Namespace”
- Save the dashboard with a name like “Kubernetes Logs Overview”
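If the panel comes up empty, it can help to evaluate the same expression against Loki’s instant-query endpoint and confirm it returns per-namespace counts:
# Evaluate the error-count expression outside Grafana
curl -s -G http://localhost:3100/loki/api/v1/query \
  --data-urlencode 'query=sum(count_over_time({namespace=~".+"} |= "error" [15m])) by (namespace)'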
Setting Up Log Alerting
You can also set up alerts based on log contents using Loki and Grafana:
- In your dashboard, edit the panel you want to alert on
- Go to the “Alert” tab
- Click “Create Alert Rule”
- Configure the alert conditions (e.g., when error count > 10)
- Set notification channels (e.g., Slack, email)
- Save the alert rule
Conclusion
Congratulations! You’ve now added centralized logging to your Kubernetes cluster using Loki and Promtail. Combined with the metrics monitoring from the kube-prometheus stack, you now have a powerful observability solution that covers two of the three pillars of observability.
Benefits of Your New Logging Stack
- Centralized logging: All your application and system logs in one place
- Efficient storage: Loki’s unique approach stores logs efficiently
- Integrated with Grafana: Seamless integration with your existing dashboards
- Label-based queries: Similar to Prometheus, making it familiar for users
- Low resource usage: Optimized for Kubernetes environments
Next Steps
To further enhance your Kubernetes observability:
- Explore more advanced LogQL queries to extract valuable insights from your logs
- Set up log retention policies based on your compliance and operational needs
- Implement distributed tracing with Tempo to complete the observability trifecta
- Create more sophisticated alerting rules based on log patterns
- Consider implementing Loki’s multi-tenancy for larger environments
Let me know in the comments if you have any questions or if you’d like me to explore any specific aspect of Kubernetes logging in more detail!
Happy logging!