Deploying Elasticsearch, Kibana, and Fleet Server on Kubernetes with ECK Operator

A comprehensive guide to setting up a production-ready Elastic Stack on Kubernetes using the Elastic Cloud on Kubernetes (ECK) operator. This tutorial walks through deploying Elasticsearch for data storage, Kibana for visualization, and Fleet Server for agent management.

Introduction to the Elastic Stack on Kubernetes

The Elastic Stack (formerly ELK Stack) has become a cornerstone of modern observability solutions, providing powerful capabilities for log and metrics collection, analysis, and visualization. When running workloads in Kubernetes, deploying Elasticsearch and its companion components directly in the cluster offers several advantages:

  • Data locality: Keeping observability data close to your applications
  • Unified management: Managing both your applications and observability stack with the same tools
  • Resource optimization: Scaling your observability infrastructure alongside your applications
  • Simplified architecture: Eliminating the need for external observability platforms

Elastic Cloud on Kubernetes (ECK) is the official operator for running the Elastic Stack on Kubernetes. It dramatically simplifies the deployment and management of Elasticsearch, Kibana, and other Elastic components by:

  • Automating cluster provisioning and setup
  • Managing secure communications with TLS
  • Handling upgrades and version compatibility
  • Providing self-healing capabilities
  • Simplifying configuration management

In this guide, I’ll walk you through deploying a complete Elastic Stack environment on Kubernetes, including:

  1. Installing the ECK operator
  2. Deploying an Elasticsearch cluster
  3. Setting up Kibana for visualization
  4. Configuring a Fleet Server for agent management
  5. Exposing services via Ingress
  6. Ensuring proper security settings

Prerequisites

Before you begin, ensure you have:

  • A running Kubernetes cluster (v1.21+). You can set one up by following my previous post on Kubernetes or by using my Ansible playbook for Kubernetes.
  • kubectl installed and configured to communicate with your cluster
  • Helm v3.x installed (optional but recommended)
  • StorageClass configured for persistent volumes. If you don’t have one, follow my guide on Longhorn
  • A LoadBalancer or Ingress Controller set up. For bare metal, see my guides on MetalLB and Traefik
  • At least 8GB of available memory across your cluster nodes
  • Basic understanding of Kubernetes concepts
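
Before moving on, you can quickly sanity-check a couple of these prerequisites from your workstation; this is just a convenience check, not a required step:

# Confirm kubectl can reach the cluster and the server version is v1.21+
kubectl version

# Confirm a StorageClass is available for the persistent volume claims
kubectl get storageclass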

Step 1: Installing the ECK Operator

The ECK operator is responsible for deploying and managing Elastic resources in your Kubernetes cluster. Let’s start by creating a dedicated namespace and installing the operator.

Create a Namespace

First, create a dedicated namespace for the ECK operator:

kubectl create namespace elastic-system

Install with Helm

You can install the ECK operator using the official Helm chart:

# Add the Elastic Helm repository
helm repo add elastic https://helm.elastic.co
helm repo update

# Install the ECK operator
helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace --version 2.16.1
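
If you want to pin a different operator release, you can first list the chart versions published in the Elastic repository and pick one explicitly:

# List all published versions of the eck-operator chart
helm search repo elastic/eck-operator --versions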

Alternative: Install the Operator Using Manifests (Optional)

If you prefer not to use Helm, you can install the ECK operator directly with Kubernetes manifests (use the same version as the Helm chart above):

# Install custom resource definitions
kubectl apply -f https://download.elastic.co/downloads/eck/2.16.1/crds.yaml

# Install the operator with RBAC rules
kubectl apply -f https://download.elastic.co/downloads/eck/2.16.1/operator.yaml

Verify the Operator Installation

Check that the ECK operator is running:

kubectl -n elastic-system get pods

You should see output similar to:

NAME                             READY   STATUS    RESTARTS   AGE
elastic-operator-0               1/1     Running   0          2m
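
If the pod is not ready, the operator logs are the first place to look:

# Follow the operator logs to confirm it started cleanly
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator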

Step 2: Installing the Elastic Stack with Helm

For a more streamlined deployment process, you can use Helm to install the entire Elastic Stack in one go, including Elasticsearch, Kibana, Fleet Server, and Elastic Agent. This approach simplifies the installation process and ensures proper integration between all components.

Create a values.yaml File

Create a file named elastic-stack-values.yaml with the following content:

eck-elasticsearch:
  enabled: true
  version: 8.17.3
  # Name of the Elasticsearch instance.
  fullnameOverride: elasticsearch
  nodeSets:
  - name: default
    count: 3
    # Comment out when setting the vm.max_map_count via initContainer, as these are mutually exclusive.
    # For production workloads, it is strongly recommended to increase the kernel setting vm.max_map_count to 262144
    # and leave node.store.allow_mmap unset.
    # ref: https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-virtual-memory.html
    #
    config:
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi

eck-kibana:
  enabled: true
  version: 8.17.3
  count: 1
  # Name of the Kibana instance.
  fullnameOverride: kibana
  elasticsearchRef:
    name: elasticsearch

  config:
    # If installed outside of the `elastic-stack` namespace, the following 2 lines need modification.
    xpack.fleet.agents.elasticsearch.hosts: ["https://elasticsearch-es-http.elastic-stack.svc:9200"]
    xpack.fleet.agents.fleet_server.hosts: ["https://fleet-server-agent-http.elastic-stack.svc:8220"]
    # If you wish to expose Elasticsearch and the Fleet Server outside your cluster for external clients, modify the above lines, for example:
    # xpack.fleet.agents.elasticsearch.hosts: ["https://elasticsearch.plutolab.live"]
    # xpack.fleet.agents.fleet_server.hosts: ["https://fleet.plutolab.live"]
    # Also check the eck-agent values
    xpack.fleet.packages:
    - name: system
      version: latest
    - name: elastic_agent
      version: latest
    - name: fleet_server
      version: latest
    - name: kubernetes
      version: latest
    xpack.fleet.agentPolicies:
    - name: Fleet Server on ECK policy
      id: eck-fleet-server
      namespace: default
      is_managed: true
      monitoring_enabled:
      - logs
      - metrics
      package_policies:
      - name: fleet_server-1
        id: fleet_server-1
        package:
          name: fleet_server
    - name: Elastic Agent on ECK policy
      id: eck-agent
      namespace: default
      is_managed: true
      monitoring_enabled:
      - logs
      - metrics
      unenroll_timeout: 900
      package_policies:
      - package:
          name: system
        name: system-1
      - package:
          name: kubernetes
        name: kubernetes-1

eck-agent:
  enabled: true
  version: 8.17.3
  # Agent policy to be used.
  policyID: eck-agent
  # Reference to ECK-managed Kibana instance.
  kibanaRef:
    name: kibana
  elasticsearchRefs: []
  # Reference to ECK-managed Fleet instance.
  fleetServerRef:
    name: fleet-server

  mode: fleet
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: elastic-agent
        hostNetwork: true
        dnsPolicy: ClusterFirstWithHostNet
        automountServiceAccountToken: true
        securityContext:
          runAsUser: 0
        containers:
        - name: agent
          #env:
            #- name: FLEET_CA
            #  value: ""
            #- name: FLEET_URL
            #  value: "https://fleet.plutolab.live"
          # Uncomment the above lines if you want to expose the fleet server outside your cluster.

eck-fleet-server:
  enabled: true
  version: 8.17.3
  fullnameOverride: "fleet-server"

  deployment:
    replicas: 1
    podTemplate:
      spec:
        serviceAccountName: fleet-server
        automountServiceAccountToken: true

  # Agent policy to be used.
  policyID: eck-fleet-server
  kibanaRef:
    name: kibana
  elasticsearchRefs:
  - name: elasticsearch

This configuration:

  • Deploys Elasticsearch with 3 nodes (version 8.17.3) and 20Gi storage per node
  • Sets up Kibana with 1 replica
  • Configures a Fleet Server with pre-defined fleet policies
  • Deploys Elastic Agents as a DaemonSet on all cluster nodes
  • Pre-configures system and Kubernetes monitoring integrations
  • Runs Elastic Agent with host network access for complete visibility

Self-Monitoring Configuration

If you want to set up self-monitoring for the Elastic Stack, you can add the following configuration to your elastic-stack-values.yaml file:

# Add this under eck-elasticsearch section
eck-elasticsearch:
  # ...existing configuration...
  monitoring:
    metrics:
      elasticsearchRefs:
        - name: monitoring
          namespace: monitoring
    logs:
      elasticsearchRefs:
        - name: monitoring
          namespace: monitoring

# And similarly for the eck-kibana section
eck-kibana:
  # ...existing configuration...
  monitoring:
    metrics:
      elasticsearchRefs:
        - name: monitoring
          namespace: monitoring
    logs:
      elasticsearchRefs:
        - name: monitoring
          namespace: monitoring

This configuration directs the monitoring data (metrics and logs) to a separate Elasticsearch cluster running in the monitoring namespace, creating a clear separation between your application data and monitoring data.
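
Note that this assumes a separate ECK-managed Elasticsearch cluster named monitoring already exists in the monitoring namespace; this guide does not create it. A minimal sketch of such a cluster (adjust the node count and storage to your needs) could look like:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: monitoring
  namespace: monitoring
spec:
  version: 8.17.3
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi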

Install the Elastic Stack with Helm

Now that you have the values file prepared, install the Elastic Stack:

# Create a namespace for the Elastic Stack
kubectl create namespace elastic-stack

# Install the Elastic Stack
helm install elastic-stack elastic/eck-stack -n elastic-stack -f elastic-stack-values.yaml

Note: The eck-stack Helm chart creates the service accounts and RBAC permissions referenced in the values file (such as elastic-agent and fleet-server), so you don’t need to create them separately.

Monitor the Deployment Progress

Watch the deployment process:

# Check Elasticsearch status
kubectl -n elastic-stack get elasticsearch

# Check Kibana status
kubectl -n elastic-stack get kibana

# Check Fleet Server status
kubectl -n elastic-stack get agent fleet-server

# Check Elastic Agent status
kubectl -n elastic-stack get agent elastic-agent

Wait until all components show green health status. The deployment might take a few minutes to complete.
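
For reference, a healthy Elasticsearch resource should eventually look similar to this (the exact age and node count depend on your setup):

NAME            HEALTH   NODES   VERSION   PHASE   AGE
elasticsearch   green    3       8.17.3    Ready   8m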

Get Elasticsearch Credentials

Retrieve the auto-generated password for the elastic user:

PASSWORD=$(kubectl -n elastic-stack get secret elasticsearch-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
echo "Elasticsearch password: $PASSWORD"

Step 3: Exposing Services via Ingress

To access Elasticsearch, Kibana, and Fleet Server from outside the cluster, we’ll set up Ingress routes. This example uses Traefik IngressRoute resources, but you can adapt it to your ingress controller.

Create IngressRoute for Elasticsearch

Create a file named elasticsearch-ingress.yaml with:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: elasticsearch-ingress
  namespace: elastic-stack
  annotations:
    kubernetes.io/ingress.class: traefik-external
spec:
  entryPoints:
    - websecure
  routes:
    - match: "Host(`elasticsearch.plutolab.live`)"
      kind: Rule
      services:
        - name: elasticsearch-es-http
          port: 9200
          scheme: https
  tls:
    secretName: plutolab-live-tls

Create IngressRoute for Kibana

Create a file named kibana-ingress.yaml with:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: kibana-ingress
  namespace: elastic-stack
  annotations:
    kubernetes.io/ingress.class: traefik-external
spec:
  entryPoints:
    - websecure
  routes:
    - match: "Host(`kibana.plutolab.live`)"
      kind: Rule
      services:
        - name: kibana-kb-http
          port: 5601
          scheme: https
  tls:
    secretName: plutolab-live-tls

Create IngressRoute for Fleet Server

Create a file named fleet-ingress.yaml with:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: fleet-ingress
  namespace: elastic-stack
  annotations:
    kubernetes.io/ingress.class: traefik-external
spec:
  entryPoints:
    - websecure
  routes:
    - match: "Host(`fleet.plutolab.live`)"
      kind: Rule
      services:
        - name: fleet-server-agent-http
          port: 8220
          scheme: https
  tls:
    secretName: plutolab-live-tls

Replace elasticsearch.plutolab.live, kibana.plutolab.live, and fleet.plutolab.live with your own domain names.
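
The routes also reference a TLS secret named plutolab-live-tls. If you are not issuing it with cert-manager, you can create it manually from an existing certificate and key (the file names below are placeholders):

kubectl -n elastic-stack create secret tls plutolab-live-tls \
  --cert=plutolab-live.crt \
  --key=plutolab-live.key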

Apply the configuration:

kubectl apply -f elasticsearch-ingress.yaml
kubectl apply -f kibana-ingress.yaml
kubectl apply -f fleet-ingress.yaml
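
Once DNS resolves the hostnames to your ingress controller, a quick check from outside the cluster should confirm the routes work; Elasticsearch answers with its cluster banner and Fleet Server exposes a simple status endpoint (add -k if your clients do not trust the certificate):

curl -u "elastic:$PASSWORD" https://elasticsearch.plutolab.live
curl https://fleet.plutolab.live/api/status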

Step 4: Accessing and Configuring Your Elastic Stack

Now that all components are deployed and exposed, let’s access and configure them.

Accessing Kibana

Open your browser and navigate to https://kibana.plutolab.live (or your custom domain). Log in with:

  • Username: elastic
  • Password: The one retrieved earlier

Exposing Fleet and Elasticsearch Outside the Cluster

If you want to expose Fleet Server and Elasticsearch outside your Kubernetes cluster for external clients, you’ll need to make the following changes:

  1. In the Kibana configuration, update the Fleet server hosts:
eck-kibana:
  # ...existing configuration...
  config:
    xpack.fleet.agents.elasticsearch.hosts: ["https://elasticsearch.plutolab.live"]
    xpack.fleet.agents.fleet_server.hosts: ["https://fleet.plutolab.live"]
    # ...rest of the configuration...
  2. In the Elastic Agent configuration, enable and configure the environment variables:
eck-agent:
  # ...existing configuration...
  daemonSet:
    podTemplate:
      spec:
        # ...existing configuration...
        containers:
        - name: agent
          env:
            - name: FLEET_CA
              value: "" # If you're using a publicly trusted certificate, leave this empty
            - name: FLEET_URL
              value: "https://fleet.plutolab.live"

These changes configure the agents to communicate with the externally accessible Fleet Server endpoint rather than the internal Kubernetes service address. This is essential when you have agents running outside your Kubernetes cluster that need to connect to your Fleet Server.
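
Because these settings live in the Helm values file, apply them by upgrading the existing release with the updated file:

helm upgrade elastic-stack elastic/eck-stack -n elastic-stack -f elastic-stack-values.yaml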

Security Considerations

The ECK operator automatically implements several security features:

  • TLS for all internal communication
  • Secure by default with authentication required
  • Automatic certificate generation and rotation
  • Secure settings using Kubernetes secrets

For additional security:

  1. Network Policies: Implement Kubernetes Network Policies to control traffic to and from Elastic components (see the example sketch after this list)

  2. Resource Quotas: Set namespace resource quotas to prevent resource exhaustion

  3. Regular Updates: Keep your Elastic Stack updated with the latest security patches

  4. Backup: Implement regular snapshots of your Elasticsearch indices
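
As an example of the first point, here is a minimal NetworkPolicy sketch that only allows traffic to the Elasticsearch pods from within the elastic-stack namespace and from the ingress controller's namespace (assumed here to be traefik; adjust the selectors to your environment):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: elasticsearch-allow
  namespace: elastic-stack
spec:
  podSelector:
    matchLabels:
      elasticsearch.k8s.elastic.co/cluster-name: elasticsearch
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Pods in the same namespace (Kibana, Fleet Server, other Elasticsearch nodes)
    - podSelector: {}
    # The ingress controller namespace, assumed to be named "traefik"
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: traefik
    ports:
    - protocol: TCP
      port: 9200
    - protocol: TCP
      port: 9300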

Resource Optimization

The configurations provided are suitable for small to medium deployments. For larger environments:

  1. Scaling Elasticsearch: Increase the count in the Elasticsearch spec and consider using dedicated node roles

  2. Hot-Warm Architecture: Implement hot-warm-cold architecture for efficient data lifecycle management

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 8.17.3
  nodeSets:
  - name: hot
    count: 3
    config:
      node.roles: ["master", "data_hot", "data_content", "ingest"]
    # ...
  - name: warm
    count: 2
    config:
      node.roles: ["data_warm", "data_content"]
    # ...
  3. Dedicated coordinators: Add dedicated coordinating nodes for search-heavy workloads

Monitoring the Elastic Stack

The ECK operator can expose metrics for Prometheus to scrape, although the metrics endpoint is disabled by default and must be enabled in the operator configuration. If you have the kube-prometheus stack installed, create a ServiceMonitor for the operator:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: elastic-operator
  namespace: monitoring
  labels:
    release: kube-prometheus
spec:
  selector:
    matchLabels:
      control-plane: elastic-operator
  namespaceSelector:
    matchNames:
    - elastic-system
  endpoints:
  - port: metrics
    interval: 30s

You can also set up monitoring for the Elasticsearch cluster itself through Metricbeat.

Conclusion

You now have a complete Elastic Stack running on your Kubernetes cluster, deployed efficiently using Helm charts. This installation includes:

  • A highly available Elasticsearch cluster for data storage and search
  • Kibana for data visualization and management
  • Fleet Server for agent management
  • Elastic Agents pre-configured to collect system and Kubernetes metrics
  • Secure access through TLS and authentication
  • Proper resource allocation and scaling capabilities

This architecture provides a solid foundation for building a comprehensive observability solution for your Kubernetes workloads. The ECK operator significantly simplifies the management of these components, allowing you to focus on deriving insights from your data rather than maintaining the infrastructure.

Have you implemented the Elastic Stack in your Kubernetes environment? Let me know in the comments about your experience and any challenges you faced!

This post is licensed under CC BY 4.0 by the author.