GitOps with Flux on Kubernetes Part 2: Advanced Kustomization and the Kustomize Controller
Deep dive into Flux's Kustomize Controller, advanced kustomization techniques, multi-environment management, and configuration overlays for GitOps workflows
Introduction
In part 1 of this series, we covered the basics of installing Flux and setting up your first GitOps workflow. Now, we’ll dive deeper into one of Flux’s most powerful components: the Kustomize Controller and advanced kustomization techniques.
Kustomize is a powerful tool for managing Kubernetes configurations across multiple environments without duplicating YAML files. When combined with Flux’s GitOps capabilities, it provides a robust solution for managing complex, multi-environment deployments declaratively.
Quick Cluster Setup with Kind
If you don’t have a Kubernetes cluster readily available for testing, you can quickly spin up a local cluster using kind (Kubernetes in Docker):
# Create a kind cluster
kind create cluster --name flux-demo
# Verify the cluster is running
kubectl cluster-info --context kind-flux-demo
This creates a single-node Kubernetes cluster running in Docker, perfect for testing Flux and experimenting with the examples in this post. When you’re done, you can delete it with:
kind delete cluster --name flux-demo
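If you plan to follow the ingress examples later in this post on a local kind cluster, kind can map host ports to the node at creation time so an in-cluster ingress controller is reachable from localhost. A sketch of such a config (the port mappings are illustrative, not required for the basic examples):

```yaml
# kind-config.yaml (hypothetical): expose host ports 80/443 so an
# ingress controller running inside the cluster can be reached locally
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP
```

You would pass it with kind create cluster --name flux-demo --config kind-config.yaml.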
Understanding the Flux Kustomize Controller
The Kustomize Controller is one of the core components in Flux v2. It watches for Kustomization resources and applies the Kubernetes manifests from Git repositories, OCI repositories, or Bucket sources.
Key Features of the Kustomize Controller
- Declarative Configuration: Define your desired state in Git using Kustomization resources
- Dependency Management: Control the order of resource deployment and updates
- Health Assessment: Monitor the health of deployed resources
- Pruning: Automatically remove resources that are no longer defined in Git
- Variable Substitution: Inject values dynamically into your manifests
- Multi-tenancy: Isolate deployments across different namespaces and teams
Kustomization Resource Structure
A Flux Kustomization resource looks like this:
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 10m0s
  path: "./apps/my-app"
  prune: true
  sourceRef:
    kind: GitRepository
    name: my-repo
  timeout: 5m0s
  retryInterval: 2m0s
  wait: true
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: my-app
      namespace: default
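Rather than writing this resource by hand, you can also generate it with the Flux CLI. A sketch (the names here are placeholders, flags per flux create kustomization):

```shell
flux create kustomization my-app \
  --source=GitRepository/my-repo \
  --path="./apps/my-app" \
  --prune=true \
  --interval=10m \
  --health-check="Deployment/my-app.default" \
  --export > my-app-kustomization.yaml
```

The --export flag prints the manifest instead of applying it, so you can commit it to Git in line with the GitOps workflow.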
Setting Up a Multi-Environment Structure
Let’s create a practical example that demonstrates advanced kustomization techniques. We’ll set up a repository structure that supports multiple environments (dev, staging, production) for a sample application. The sample repository is available at TalhaJuikar/flux-demo-website on GitHub.
Repository Structure
flux-demo-website/
├── apps
│   └── demo-website
│       ├── base
│       │   ├── deployment.yaml
│       │   ├── ingress.yaml
│       │   ├── kustomization.yaml
│       │   └── service.yaml
│       └── overlays
│           ├── dev
│           │   ├── config-patch.yaml
│           │   ├── ingress-patch.yaml
│           │   ├── kustomization.yaml
│           │   ├── replica-patch.yaml
│           │   └── resource-patch.yaml
│           ├── prod
│           │   ├── deployment-patch.yaml
│           │   ├── hpa.yaml
│           │   ├── ingress-patch.yaml
│           │   └── kustomization.yaml
│           └── staging
│               ├── deployment-patch.yaml
│               ├── ingress-patch.yaml
│               └── kustomization.yaml
└── clusters
    └── flux-demo
        ├── apps.yaml
        └── flux-system
            ├── gotk-components.yaml
            ├── gotk-sync.yaml
            └── kustomization.yaml
Installing Kustomize
Before we begin, let’s install the Kustomize binary to build and test our configurations locally:
On Linux:
curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
sudo mv kustomize /usr/local/bin/
On macOS:
brew install kustomize
Verify installation:
kustomize version
Testing Configurations Locally
It’s a best practice to test your Kustomize configurations locally before pushing to Git. Here’s how to build and validate your manifests:
# Navigate to your overlay directory
cd apps/demo-website/overlays/dev
# Build the kustomization and view the output
kustomize build .
# Save the output to a file for review
kustomize build . > output.yaml
# Test with kubectl without applying
kustomize build . | kubectl diff -f -
# Or use kubectl directly (kubectl has kustomize built-in)
kubectl kustomize .
This allows you to catch syntax errors, verify patches are applied correctly, and validate the final manifests before deployment.
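Beyond kubectl diff, a server-side dry-run validates the built manifests against the API server’s schemas and admission webhooks without persisting anything, which catches errors that a local build cannot. For example:

```shell
# Validate against the live API server without applying anything
kustomize build . | kubectl apply --dry-run=server -f -

# Or a purely client-side check when no cluster is available
kustomize build . | kubectl apply --dry-run=client -f -
```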
Bootstrapping the Repository
If you haven’t already, bootstrap your Flux setup with the following command:
flux bootstrap git \
--url=ssh://[email protected]/user/repo \
--branch=main \
--private-key-file=<path-to-your-key> \
--path=<path-to-cluster> \
--components-extra image-reflector-controller,image-automation-controller
You can also bootstrap using a personal access token; see the previous post for details.
Creating the Base Application
Let’s start by creating our base application configuration:
Base Kustomization
# apps/demo-website/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
  - ingress.yaml
Base Deployment
# apps/demo-website/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-website
  labels:
    app: demo-website
spec:
  selector:
    matchLabels:
      app: demo-website
  template:
    metadata:
      labels:
        app: demo-website
    spec:
      containers:
        - name: demo-website
          image: ghcr.io/talhajuikar/flux-demo-website:main
          imagePullPolicy: Always
          ports:
            - containerPort: 80
Base Service
# apps/demo-website/base/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-website
  labels:
    app: demo-website
spec:
  selector:
    app: demo-website
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  type: ClusterIP
Base Ingress
# apps/demo-website/base/ingress.yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik-external
  name: demo-website
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`demo.plutolab.live`)
      services:
        - name: demo-website
          port: 80
Creating Environment-Specific Overlays
Now let’s create overlays for different environments, each with its own configuration. You can customize replicas, resource limits, environment variables, and more. In dev, I have split the replicas and resource limits into separate patches for clarity; in staging and production, I have combined them into a single patch for simplicity. This demonstrates how you can structure your overlays based on your team’s preferences and the complexity of your configurations.
Development Environment
# apps/demo-website/overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
resources:
  - ../../base
patchesStrategicMerge:
  - replica-patch.yaml
  - resource-patch.yaml
  - config-patch.yaml
  - ingress-patch.yaml
commonLabels:
  environment: dev
nameSuffix: -dev
# apps/demo-website/overlays/dev/replica-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-website
spec:
  replicas: 1
# apps/demo-website/overlays/dev/resource-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-website
spec:
  template:
    spec:
      containers:
        - name: demo-website
          resources:
            limits:
              cpu: "0.5"
              memory: "512Mi"
            requests:
              cpu: "0.2"
              memory: "256Mi"
# apps/demo-website/overlays/dev/config-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-website
spec:
  template:
    spec:
      containers:
        - name: demo-website
          env:
            - name: BACKGROUND_COLOR
              value: "#f5f5f5"
            - name: ENVIRONMENT
              value: "dev"
# apps/demo-website/overlays/dev/ingress-patch.yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: demo-website
spec:
  routes:
    - kind: Rule
      match: Host(`dev.plutolab.live`)
      services:
        - name: demo-website-dev
          port: 80
Staging Environment
# apps/demo-website/overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: staging
resources:
  - ../../base
patchesStrategicMerge:
  - deployment-patch.yaml
  - ingress-patch.yaml
commonLabels:
  environment: staging
nameSuffix: -staging
# apps/demo-website/overlays/staging/deployment-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-website
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: demo-website
          resources:
            limits:
              cpu: "1"
              memory: "1Gi"
            requests:
              cpu: "0.5"
              memory: "512Mi"
          env:
            - name: BACKGROUND_COLOR
              value: "#d4edda"
            - name: ENVIRONMENT
              value: "staging"
# apps/demo-website/overlays/staging/ingress-patch.yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: demo-website
spec:
  routes:
    - kind: Rule
      match: Host(`staging.plutolab.live`)
      services:
        - name: demo-website-staging
          port: 80
Production Environment
# apps/demo-website/overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: prod
resources:
  - ../../base
  - hpa.yaml
patchesStrategicMerge:
  - deployment-patch.yaml
  - ingress-patch.yaml
commonLabels:
  environment: prod
nameSuffix: -prod
# apps/demo-website/overlays/prod/deployment-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-website
spec:
  replicas: 5
  template:
    spec:
      containers:
        - name: demo-website
          resources:
            limits:
              cpu: "2"
              memory: "2Gi"
            requests:
              cpu: "1"
              memory: "1Gi"
          env:
            - name: BACKGROUND_COLOR
              value: "#fc6f03"
            - name: ENVIRONMENT
              value: "prod"
# apps/demo-website/overlays/prod/ingress-patch.yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: demo-website
spec:
  routes:
    - kind: Rule
      match: Host(`prod.plutolab.live`)
      services:
        - name: demo-website-prod
          port: 80
In production, we will also set up a Horizontal Pod Autoscaler (HPA) to manage scaling based on CPU and memory usage.
# apps/demo-website/overlays/prod/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-website-prod
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-website-prod
  minReplicas: 5
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
Creating Flux Kustomization Resources
Since the repository that holds the kustomization files is the same one that holds the Flux configuration, you can skip creating a separate GitRepository resource. However, if you want to keep your Flux configuration and application manifests in separate repositories, create a GitRepository resource as shown below and push it to the repository that Flux is monitoring.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: flux-demo-website
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/TalhaJuikar/flux-demo-website.git
  ref:
    branch: main
Now let’s create the Flux Kustomization resources that will deploy our applications to different environments:
Development Environment Kustomization
# clusters/flux-demo/apps.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: flux-demo-website-dev
  namespace: flux-system
spec:
  interval: 5m
  path: "./apps/demo-website/overlays/dev"
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  timeout: 2m0s
  retryInterval: 1m0s
Staging Environment Kustomization
# clusters/flux-demo/apps.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: flux-demo-website-staging
  namespace: flux-system
spec:
  interval: 5m
  path: "./apps/demo-website/overlays/staging"
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  timeout: 5m0s
  retryInterval: 2m0s
Production Environment Kustomization
# clusters/flux-demo/apps.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: flux-demo-website-prod
  namespace: flux-system
spec:
  interval: 5m
  path: "./apps/demo-website/overlays/prod"
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  timeout: 10m0s
  retryInterval: 5m0s
In my case, I have a single cluster with a separate namespace per environment, so all three Kustomization resources above live in a single apps.yaml file in the clusters/flux-demo directory, separated by --- YAML document markers.
Verifying Deployment
Wait for Flux to reconcile the kustomizations, then verify that the resources have been created in the respective namespaces:
# Check the status of git repositories
flux get source git -A
# Check the status of kustomizations
flux get kustomizations -n flux-system
You should see the kustomizations for each environment listed with their respective statuses.
NAME                        REVISION             SUSPENDED   READY   MESSAGE
flux-demo-website-dev       main@sha1:7d3938ce   False       True    Applied revision: main@sha1:7d3938ce
flux-demo-website-prod      main@sha1:7d3938ce   False       True    Applied revision: main@sha1:7d3938ce
flux-demo-website-staging   main@sha1:7d3938ce   False       True    Applied revision: main@sha1:7d3938ce
flux-system                 main@sha1:7d3938ce   False       True    Applied revision: main@sha1:7d3938ce
If everything is set up correctly, you should see the resources created in the respective namespaces.
# Check resources in the dev namespace
kubectl get all -n dev
# Check resources in the staging namespace
kubectl get all -n staging
# Check resources in the prod namespace
kubectl get all -n prod
You can now access your application in each environment using the respective hostnames configured in the Ingress resources.
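If you don’t have Traefik or DNS for these hostnames set up (for example on a plain kind cluster), you can still reach each environment by port-forwarding its Service; note the -dev/-staging/-prod name suffixes added by the overlays:

```shell
# Forward the dev Service to localhost:8080
kubectl -n dev port-forward svc/demo-website-dev 8080:80
# Then browse to http://localhost:8080
```

The same pattern works for staging and prod with their respective namespaces and Service names.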
Advanced Kustomization Features
Variable Substitution
Flux supports variable substitution in your manifests using the postBuild.substitute field. This allows you to inject dynamic values into your Kubernetes manifests without hardcoding them.
# In your Kustomization
spec:
  postBuild:
    substitute:
      cluster_name: "my-cluster"
      app_version: "v1.2.3"
      replica_count: "3"
    substituteFrom:
      - kind: ConfigMap
        name: cluster-vars
        # Optional: if true, continue even if this ConfigMap doesn't exist
        optional: true
      - kind: Secret
        name: cluster-secrets
        # Required by default (optional: false)
The optional field indicates whether the controller should tolerate the absence of the referenced ConfigMap or Secret. When set to true, reconciliation continues even if the object is missing.
Then in your manifests, use ${var_name} syntax:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    cluster: ${cluster_name}
spec:
  replicas: ${replica_count}
  template:
    spec:
      containers:
        - name: app
          image: my-app:${app_version}
Important Notes about Variable Substitution:
- Variable names can only contain alphanumeric and underscore characters
- Variables support bash string replacement functions like ${var:=default}, ${var:position}, and ${var/substring/replacement}
- Use $var instead of ${var} in scripts to avoid substitution, or use $${var} to output ${var} literally
- All undefined variables in ${var} format are substituted with empty strings unless a default is provided
- Variables defined in substitute take precedence over those from substituteFrom
- When multiple ConfigMaps/Secrets are used, later values overwrite earlier ones
- Disable substitution for specific resources with the annotation kustomize.toolkit.fluxcd.io/substitute: disabled
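Putting a few of these rules together, a hypothetical ConfigMap might combine a default value with an escaped literal:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  # Falls back to "info" if log_level is not defined in substitute/substituteFrom
  logLevel: ${log_level:=info}
  # $${HOSTNAME} is emitted literally as ${HOSTNAME}, so the shell
  # expands it at runtime instead of Flux at build time
  startup.sh: |
    echo "running in $${HOSTNAME}"
```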
Dependency Management
In complex deployments, you may need to control the order in which resources are applied. The Kustomization controller allows you to specify dependencies using the dependsOn field.
You can define dependencies between kustomizations to ensure that certain resources are applied before others. This is particularly useful for managing infrastructure components like databases, message queues, or ingress controllers that must be ready before your application is deployed.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
spec:
  dependsOn:
    - name: infrastructure
    - name: cert-manager
    - name: ingress-controller
  # ... rest of spec
Health Checks and Wait Conditions
Health checks allow Flux to verify that deployed resources are actually ready before marking the Kustomization as successful. This is crucial for ensuring application availability and proper deployment ordering.
Using Explicit Health Checks:
spec:
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: my-app
      namespace: default
    - apiVersion: v1
      kind: Service
      name: my-app-service
      namespace: default
  timeout: 10m0s
Flux supports health checks for various resource types:
- Kubernetes built-in kinds: Deployment, DaemonSet, StatefulSet, PersistentVolumeClaim, Pod, PodDisruptionBudget, Job, CronJob, Service, Secret, ConfigMap, CustomResourceDefinition
- Flux kinds: HelmRelease, HelmRepository, GitRepository, etc.
- Custom resources compatible with kstatus
Using Wait for All Resources:
spec:
  wait: true # Performs health checks on all reconciled resources
  timeout: 10m0s
When wait: true is set, the .spec.healthChecks field is ignored, and Flux automatically checks all applied resources.
Force Recreation for Immutable Fields
Sometimes you need to update immutable fields (like Job specs or StatefulSet volumeClaimTemplates). The force field instructs the controller to recreate resources when patching fails due to immutable field changes:
spec:
  force: true # Recreate resources if immutable fields change
You can also enable this per-resource using annotations:
metadata:
  annotations:
    kustomize.toolkit.fluxcd.io/force: enabled
Warning: Using force: true for StatefulSets may result in data loss. Use with caution.
Target Namespace
The targetNamespace field sets or overrides the namespace for all resources in the Kustomization:
spec:
  targetNamespace: my-app
  # ... other fields
Important: The namespace must exist before applying the Kustomization, or be defined within the Kustomization itself. Flux will not create it automatically.
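One way to satisfy this is to ship the Namespace manifest alongside the application in the same Kustomization path, so both are created in a single reconciliation:

```yaml
# namespace.yaml, listed in the same kustomization as the app manifests
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
```

Alternatively, a dedicated "namespaces" Kustomization can own all Namespace objects, with application Kustomizations declaring a dependsOn reference to it.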
Name Prefix and Suffix
Add prefixes and suffixes to all resource names in your Kustomization:
spec:
  namePrefix: "prod-"
  nameSuffix: "-v2"
This is useful for managing multiple versions or instances of the same application.
Decrypting Secrets with SOPS
Flux supports decrypting secrets encrypted with Mozilla SOPS directly during reconciliation. This enables storing encrypted secrets in Git safely. SOPS supports multiple encryption backends:
- Age and OpenPGP for key-based encryption
- AWS KMS, Azure Key Vault, GCP KMS, and HashiCorp Vault for cloud KMS
Example Configuration:
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  path: "./apps"
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  decryption:
    provider: sops
    secretRef:
      name: sops-keys
The Secret contains the decryption keys or credentials:
apiVersion: v1
kind: Secret
metadata:
  name: sops-keys
  namespace: flux-system
stringData:
  # Age private key
  identity.agekey: |
    AGE-SECRET-KEY-1...
  # Or AWS credentials
  sops.aws-kms: |
    aws_access_key_id: AKIA...
    aws_secret_access_key: ...
Important: Leave metadata, kind, and apiVersion unencrypted in SOPS files. Use the --encrypted-regex '^(data|stringData)$' flag with SOPS to only encrypt the sensitive fields.
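For reference, encrypting a Secret manifest with age might look like this (the key and file path are placeholders):

```shell
# Generate an age key pair; the public key is printed to stderr
age-keygen -o age.agekey

# Encrypt only data/stringData, leaving structural fields readable
sops --encrypt \
  --age age1examplepublickey... \
  --encrypted-regex '^(data|stringData)$' \
  --in-place apps/my-app/secret.yaml
```

The encrypted file can then be committed to Git, and the Kustomize Controller decrypts it at reconciliation time using the private key from the sops-keys Secret.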
Inline Patches
Flux supports applying patches directly in the Kustomization resource without modifying the source manifests. This is useful for quick fixes or environment-specific changes:
Strategic Merge Patch:
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
spec:
  # ... other fields
  patches:
    - patch: |-
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: my-app
        spec:
          template:
            metadata:
              annotations:
                cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
      target:
        kind: Deployment
        labelSelector: "app=my-app"
JSON6902 Patch:
patches:
  - patch: |
      - op: add
        path: /spec/template/spec/securityContext
        value:
          runAsUser: 10000
          fsGroup: 1337
      - op: replace
        path: /spec/replicas
        value: 5
    target:
      kind: Deployment
      name: my-app
      namespace: default
Patches can target resources by name, namespace, kind, labelSelector, or annotationSelector.
Image Overrides
You can override container images without creating patches using the images field. This is particularly useful for promoting images across environments:
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
spec:
  # ... other fields
  images:
    - name: ghcr.io/org/app
      newName: ghcr.io/org/app
      newTag: v1.2.3
    - name: ghcr.io/org/app
      newTag: v1.2.4 # Update tag only
    - name: ghcr.io/org/app
      newName: my-registry/app # Change registry
    - name: ghcr.io/org/app
      digest: sha256:abc123... # Pin to digest
This feature works with Flux’s Image Automation to automatically update tags when new images are available.
Monitoring and Troubleshooting
Checking Kustomization Status
# List all kustomizations
flux get kustomizations
# Get detailed status
kubectl describe kustomization flux-demo-website-prod -n flux-system
# Check events
flux events
# View logs
flux logs --kind=Kustomization --name=flux-demo-website-prod
Understanding Kustomization Status Conditions
Flux reports the state of a Kustomization through Kubernetes Conditions. Understanding these is crucial for troubleshooting:
Ready Condition:
- status: "True" with reason: ReconciliationSucceeded means everything is healthy
- status: "False" can carry various reasons:
  - ArtifactFailed: source artifact issues
  - BuildFailed: Kustomize build errors
  - HealthCheckFailed: resource health checks failing
  - DependencyNotReady: waiting for dependencies
  - PruneFailed: garbage collection issues
Reconciling Condition:
- reason: Progressing means the controller is currently reconciling
- reason: ProgressingWithRetry means it is reconciling again after a failure
Example output:
$ kubectl get kustomization -n flux-system
NAME     READY   STATUS
my-app   True    Applied revision: main@sha1:abc123
Common Issues and Solutions
- Reconciliation Failures: Check source repository access and path correctness
- Dependency Issues: Ensure dependent resources are healthy before deploying
- Resource Conflicts: Use proper namespacing and unique naming
- Timeout Issues: Adjust timeout values based on resource complexity
- Build Failures: Validate the Kustomization locally with kustomize build before committing
- Health Check Failures: Check pod logs and events for the failing resources
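When chasing these down, a few Flux CLI commands are particularly handy (resource names here are from this post’s examples):

```shell
# Trigger an immediate reconciliation, refreshing the source first
flux reconcile kustomization flux-demo-website-dev --with-source

# Trace a deployed object back to the Flux resources that manage it
flux trace deployment/demo-website-dev -n dev

# Show recent events for a specific kustomization
flux events --for Kustomization/flux-demo-website-dev
```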
Controlling Apply Behavior with Annotations
Flux provides fine-grained control over how resources are applied, pruned, and forced through annotations. These can be applied to individual resources within your manifests:
Server-Side Apply Policy
The kustomize.toolkit.fluxcd.io/ssa annotation controls how Flux applies changes:
- Override (default): Reconciles resources with the desired state from Git. Any manual changes via kubectl will be reverted.
- Merge: Preserves fields added by other tools. Only fields defined in the Git manifests are managed.
- IfNotPresent: Only creates the resource if it doesn’t exist. Useful for Secrets managed by cert-manager.
- Ignore: Skip applying this resource entirely, even if included in the Kustomization.
Example:
apiVersion: v1
kind: Secret
metadata:
  name: tls-cert
  annotations:
    kustomize.toolkit.fluxcd.io/ssa: IfNotPresent
type: kubernetes.io/tls
Prune Control
Prevent specific resources from being garbage collected:
metadata:
  annotations:
    kustomize.toolkit.fluxcd.io/prune: disabled
This is useful for protecting critical resources like Namespaces, PVCs, and PVs from accidental deletion.
Suspending Reconciliation
To temporarily pause reconciliation of a specific resource:
metadata:
  annotations:
    kustomize.toolkit.fluxcd.io/reconcile: disabled
Set to enabled or remove the annotation to resume reconciliation.
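At the level of a whole Kustomization, the same pause/resume workflow is available through the Flux CLI:

```shell
# Pause reconciliation (e.g. during an incident or manual intervention)
flux suspend kustomization flux-demo-website-prod

# Resume syncing from Git when you're ready
flux resume kustomization flux-demo-website-prod
```

While suspended, changes pushed to Git are not applied, and prune will not delete anything managed by that Kustomization.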
Best Practices for Production
When deploying to production environments with Flux, consider these recommended settings:
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: production-app
  namespace: flux-system
spec:
  interval: 60m0s       # Detect drift and reconcile every hour
  retryInterval: 2m0s   # Retry every 2 minutes on failures
  timeout: 5m0s         # Fail if operations take longer than 5 minutes
  path: "./apps/production"
  prune: true           # Enable garbage collection
  wait: true            # Wait for all resources to be ready
  force: false          # Don't recreate by default
  sourceRef:
    kind: GitRepository
    name: production-repo
  healthChecks:         # Specific health checks for critical components
    - apiVersion: apps/v1
      kind: Deployment
      name: api-server
      namespace: production
    - apiVersion: apps/v1
      kind: StatefulSet
      name: database
      namespace: production
Additional Recommendations:
- Use specific timeouts: Adjust based on your application’s startup time
- Enable wait or health checks: Don’t proceed if deployments fail
- Set appropriate intervals: Balance responsiveness with cluster load
- Use dependencies: Ensure infrastructure is ready before applications
- Test locally first: Validate with kustomize build and kubectl diff
- Monitor reconciliation: Set up alerts on Kustomization failures
- Use Git branches: Test changes in feature branches before merging to main
- Document your overlays: Add README files explaining environment differences
Conclusion
The Flux Kustomize Controller provides a powerful foundation for managing complex, multi-environment Kubernetes deployments. By combining Kustomize’s configuration management capabilities with Flux’s GitOps automation, you can create robust, scalable deployment pipelines that maintain consistency across environments while allowing for necessary customizations.
Key Takeaways:
- Structured Configuration: Use base and overlay patterns to manage environment-specific configurations without duplication
- Dependency Management: Control deployment order with dependsOn to ensure infrastructure is ready before applications
- Health Checks: Use wait and healthChecks to verify deployments succeed before proceeding
- Variable Substitution: Inject dynamic values with postBuild.substitute for configuration flexibility
- Fine-grained Control: Use annotations to control apply behavior, pruning, and reconciliation per-resource
In the next part of this series, we’ll explore Image Automation with Flux, covering:
- Setting up ImageRepository to scan container registries
- Creating ImagePolicy for semantic versioning and filtering
- Automating image updates in Git with ImageUpdateAutomation
- Integrating image scanning and security policies
Have you implemented multi-environment deployments with Flux and Kustomize? Share your experiences and challenges in the comments below!
