From Push to Pull: Implementing Flux GitOps for Self-Healing Kubernetes Deployments
Your CI/CD pipeline just pushed a broken deployment to production. Again. The rollback took 23 minutes because someone had to wake up, VPN in, and run kubectl commands while half-asleep. Meanwhile, your customers watched spinning loading icons and your Slack channel filled with increasingly urgent messages from the on-call rotation.
This scenario plays out across engineering organizations every week. The root cause isn’t the bad deployment itself—bugs happen. The real problem is architectural: you’ve built a system where recovery depends on human intervention, where your Jenkins server holds cluster-admin credentials, and where the “desired state” of your infrastructure exists only in the ephemeral memory of your last successful pipeline run.
Push-based continuous deployment made sense when Kubernetes clusters were novel and GitOps was just a buzzword. But the model carries inherent risks that compound at scale. Every CI system with cluster access becomes an attack vector. Every manual kubectl apply creates drift that’s invisible until the next incident. Every rollback requires someone with the right permissions, the right context, and the right mental state to execute commands correctly under pressure.
There’s a better way. The GitOps model inverts the entire flow: instead of pipelines pushing changes into clusters, clusters continuously pull their desired state from Git. The cluster becomes self-aware, constantly comparing its actual state against what’s declared in your repository. When drift occurs—whether from a bad deployment, manual intervention, or infrastructure failure—the cluster heals itself. No human required. No credentials exposed outside the cluster boundary.
Flux, the CNCF graduated project, implements this pull-based reconciliation model with a focus on security and operational simplicity. Let’s start by examining exactly why push-based deployment creates the problems it does.
The Push Problem: Why Your CI/CD Pipeline Shouldn’t Touch Your Cluster
Every time your CI pipeline runs kubectl apply, it needs credentials to your Kubernetes cluster. Those credentials—typically a kubeconfig file or service account token—sit in GitHub Actions secrets, Jenkins credentials, or whatever CI system you’re running. They’re one misconfigured workflow, one compromised dependency, or one leaked log away from giving an attacker direct access to your production infrastructure.

This is the fundamental flaw of push-based continuous deployment: your CI system becomes a privileged gateway to your cluster, and that privilege persists 24/7 whether deployments are happening or not.
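To make the exposure concrete, here's a hypothetical GitHub Actions job of the kind this model encourages (the workflow, secret name, and paths are illustrative, not taken from any real pipeline):

```yaml
# Hypothetical push-based deploy job: the runner holds standing cluster access
# through the KUBE_CONFIG secret on every run, deployment or not.
deploy:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Apply manifests to production
      env:
        KUBE_CONFIG: ${{ secrets.KUBE_CONFIG }}
      run: |
        echo "$KUBE_CONFIG" > kubeconfig
        kubectl --kubeconfig=kubeconfig apply -f k8s/
```

Anyone who can edit this workflow, read its secrets, or compromise an action it pulls in effectively inherits whatever the kubeconfig can do.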
The Credential Sprawl Problem
In a typical push-based setup, cluster credentials proliferate across your organization. The main deployment pipeline has them. The hotfix workflow has them. The “quick script” someone wrote for emergency rollbacks has them. Each copy represents another attack surface, another secret to rotate, and another audit trail to maintain.
When you need to revoke access—after a security incident, an employee departure, or a credential rotation—you’re hunting through multiple CI systems, shared repositories, and that one shell script in someone’s home directory. The blast radius of a single compromised credential extends to everything that credential can touch, which is often the entire cluster.
Configuration Drift: The Silent Killer
Push-based deployments also suffer from a more insidious problem: drift. Someone runs kubectl edit to fix a production issue at 2 AM. Another engineer scales a deployment manually during a traffic spike. A third applies a ConfigMap change directly because “it’s just one small fix.”
None of these changes exist in Git. Your repository says one thing; your cluster runs another. You discover the discrepancy weeks later when a deployment overwrites the manual changes and breaks production—or worse, when you’re trying to recover from an outage and your “known good” state in Git doesn’t match what was actually running.
Rollbacks Under Pressure
When an incident occurs, push-based rollbacks require human intervention. Someone needs to identify the last working commit, trigger a pipeline (assuming the pipeline itself isn’t the problem), and wait for it to complete. Under pressure, mistakes happen: wrong commit selected, environment variables misconfigured, approval gates blocking the fix.
The GitOps model eliminates these problems by inverting the deployment flow entirely. Instead of CI pushing changes to the cluster, an agent running inside the cluster pulls its desired state from Git. The cluster credentials never leave the cluster. Git becomes the single source of truth, and any drift from that truth triggers automatic reconciliation.
This pull-based approach is the foundation of self-healing infrastructure—and it starts with understanding how Flux implements this reconciliation loop.
Flux Architecture: Controllers, Sources, and the Reconciliation Loop
Understanding Flux’s architecture is essential before implementing it. Unlike push-based systems where an external process applies changes to your cluster, Flux operates as a set of controllers running inside your cluster, continuously pulling state from Git and reconciling it with reality.

The Controller Model
Flux deploys as a collection of specialized Kubernetes controllers, each responsible for a specific domain:
Source Controller monitors external repositories—Git repos, Helm registries, S3 buckets—and makes their contents available to other controllers. When you define a GitRepository resource pointing to your infrastructure repo, the source controller polls it on your configured interval, detects new commits, and downloads the artifacts.
Kustomize Controller takes those artifacts and applies them to your cluster using Kustomize. It handles the actual deployment of plain Kubernetes manifests, patches, and overlays. Every Kustomization resource you create tells this controller which source to watch and which path within that source contains your manifests.
Helm Controller manages the lifecycle of Helm releases. Rather than running helm install from your laptop or CI pipeline, you declare a HelmRelease resource that specifies the chart, values, and target namespace. The controller installs, upgrades, and rolls back releases based on that declaration.
Notification Controller handles alerts and incoming webhooks. It sends Slack messages when deployments fail and receives webhook calls from GitHub to trigger immediate reconciliation instead of waiting for the next poll interval.
Sources and Deployables
Flux separates where your desired state lives from what gets deployed. This separation enables powerful composition patterns.
Sources define the origin of truth:
- GitRepository points to a Git repo containing manifests or Helm charts
- HelmRepository points to a Helm chart registry
- OCIRepository pulls artifacts from container registries using the OCI standard
Deployables reference sources and define what to apply:
- Kustomization applies manifests from a path in a source, optionally with Kustomize overlays
- HelmRelease installs a Helm chart with specified values
This design means a single GitRepository source can feed multiple Kustomization resources—one per environment, one per team, or one per application component.
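As a minimal sketch of that composition, one GitRepository can back per-team Kustomizations that each watch a different path (names and paths here are illustrative):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: platform-config
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example-org/platform-config
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: team-a-apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: platform-config
  path: ./teams/team-a
  prune: true
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: team-b-apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: platform-config
  path: ./teams/team-b
  prune: true
```

The source is fetched once; each Kustomization reconciles only its own path, so teams can move independently without duplicating repository configuration.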
The Reconciliation Loop
Every Flux controller runs a continuous reconciliation loop. The logic is deceptively simple:
- Observe the current state of resources in the cluster
- Compare against the desired state from sources
- Apply changes to eliminate drift
- Report status back to the custom resource
This loop runs on a configurable interval (typically one to ten minutes) and also triggers on webhook events. When someone manually edits a Deployment that Flux manages, the next reconciliation cycle reverts the change. When a new commit lands in your Git repo, the next cycle applies it.
Pro Tip: The reconciliation interval represents a tradeoff between responsiveness and Git provider rate limits. Start with five minutes and configure webhooks for immediate reaction to commits you care about.
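Webhook-driven reconciliation goes through the notification-controller's Receiver resource. A minimal sketch for a GitHub push webhook, assuming a secret named webhook-token already exists (on older Flux releases the API version may differ):

```yaml
apiVersion: notification.toolkit.fluxcd.io/v1
kind: Receiver
metadata:
  name: github-receiver
  namespace: flux-system
spec:
  type: github
  events:
    - "push"
  secretRef:
    name: webhook-token
  resources:
    - apiVersion: source.toolkit.fluxcd.io/v1
      kind: GitRepository
      name: flux-system
```

The controller exposes an endpoint for this Receiver (the generated path appears in its status); point your Git provider's webhook at it, and pushes trigger reconciliation immediately instead of waiting for the next poll.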
The status of each resource reflects whether reconciliation succeeded. A Kustomization shows Ready: True when all manifests applied successfully, or surfaces error messages when something failed. This status becomes the foundation for alerts and debugging.
With this mental model of controllers, sources, and continuous reconciliation in place, you’re ready to bootstrap Flux into a real cluster and connect it to your Git repository.
Bootstrap: Installing Flux and Connecting Your Git Repository
The bootstrap process is where Flux’s GitOps philosophy becomes tangible. Unlike traditional CD tools that require manual installation and separate configuration steps, Flux bootstraps itself—installing components into your cluster while simultaneously committing its own configuration to Git. This means your GitOps infrastructure is itself managed via GitOps from day one. The elegance of this approach becomes clear when you consider disaster recovery: losing a cluster doesn’t mean losing your CD configuration, because the entire setup lives in version control.
Running the Bootstrap Command
The flux bootstrap command handles three critical tasks: installing Flux controllers, generating deployment manifests, and configuring Git repository authentication. For GitHub repositories:
```bash
flux bootstrap github \
  --owner=acme-corp \
  --repository=fleet-infra \
  --branch=main \
  --path=clusters/production \
  --personal
```

This command creates the fleet-infra repository if it doesn’t exist, installs Flux components into the flux-system namespace, and commits the installation manifests to clusters/production/flux-system/. The --path flag is particularly important in multi-cluster scenarios—it determines the directory within your repository that this specific cluster watches for changes.
For GitLab, Bitbucket, or generic Git servers, substitute the appropriate subcommand:
```bash
flux bootstrap gitlab \
  --owner=acme-corp \
  --repository=fleet-infra \
  --branch=main \
  --path=clusters/production
```

Generic Git servers use flux bootstrap git with explicit URL parameters, accommodating self-hosted solutions like Gitea or internal Git servers that don’t conform to major provider APIs.
Understanding flux-system Components
After bootstrap completes, inspect the namespace to verify the installation:
```bash
kubectl get pods -n flux-system

# Expected output:
# NAME                                       READY   STATUS
# helm-controller-5b96d94c7f-x2vnk           1/1     Running
# kustomize-controller-7b7b8d7c5f-9plmz      1/1     Running
# notification-controller-6c4f4c5c5d-8qwrt   1/1     Running
# source-controller-7c7b8d8c6f-4mvnp         1/1     Running
```

Each controller serves a distinct purpose within the Flux architecture. The source-controller fetches artifacts from Git repositories, Helm repositories, and OCI registries, caching them locally for other controllers to consume. The kustomize-controller applies Kubernetes manifests, supporting both plain YAML and Kustomize overlays. The helm-controller manages Helm releases declaratively, watching for HelmRelease custom resources and reconciling chart installations. Finally, the notification-controller handles bidirectional communication—sending alerts to external systems like Slack or PagerDuty while also receiving webhooks to trigger immediate reconciliation.
Deploy Keys vs Personal Access Tokens
The --personal flag in the bootstrap command uses a GitHub Personal Access Token (PAT) for authentication. While convenient for initial setup and experimentation, production environments benefit from deploy keys—SSH keys scoped to a single repository with minimal permissions. PATs typically carry broader access than necessary and create security concerns when shared across multiple systems.
To bootstrap with deploy keys (the default when --personal is omitted):
```bash
flux bootstrap github \
  --owner=acme-corp \
  --repository=fleet-infra \
  --branch=main \
  --path=clusters/production
```

Flux generates an SSH key pair, stores the private key as a Kubernetes secret in the flux-system namespace, and prompts you to add the public key to your repository’s deploy keys. This approach follows the principle of least privilege—the key grants access only to the specific repository Flux needs, and revoking access requires only removing the deploy key from that single repository.
Pro Tip: For organizations managing multiple clusters, use separate repositories or repository paths per cluster. Each cluster bootstraps with its own deploy key, enabling fine-grained access control and clear audit trails for compliance requirements.
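For the path-per-cluster variant, the repository layout might look like this sketch (directory names are illustrative):

```
fleet-infra/
└── clusters/
    ├── production/
    │   └── flux-system/
    └── staging/
        └── flux-system/
```

Each cluster bootstraps with its own --path (clusters/production or clusters/staging), so it reconciles only its own directory while sharing the same repository and review workflow.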
What Gets Committed to Your Repository
After bootstrap, examine the repository structure that Flux created:
```
clusters/production/flux-system/
├── gotk-components.yaml   # Flux controller deployments
├── gotk-sync.yaml         # GitRepository and Kustomization for self-management
└── kustomization.yaml     # Kustomize entry point
```

The gotk-components.yaml file contains the complete Flux installation: CustomResourceDefinitions, service accounts, deployments, and RBAC rules. The gotk-sync.yaml file contains two critical resources: a GitRepository pointing to your fleet-infra repo and a Kustomization that applies everything under the clusters/production path. This self-referential configuration means Flux continuously reconciles its own installation—update a controller version in Git, and Flux upgrades itself automatically on the next reconciliation interval.
Verify the synchronization status with the Flux CLI:
```bash
flux get sources git
flux get kustomizations
```

Both commands should show Ready: True with the latest commit hash from your repository. If either resource shows a failed state, use flux logs to inspect controller output and diagnose connectivity or permission issues.
With Flux installed and synchronized, the cluster actively watches your Git repository for changes. The next step is defining your first application deployment—creating the manifests that Flux will reconcile into running workloads.
Defining Your First GitOps Pipeline: From Manifests to Running Pods
With Flux bootstrapped and connected to your cluster, you now have the foundation for declarative infrastructure management. The next step transforms this foundation into a working deployment pipeline that automatically syncs your application manifests from Git to running workloads.
Creating a GitRepository Source
Flux uses source controllers to fetch configuration from external locations. The GitRepository custom resource tells Flux where to find your application manifests and how often to check for updates.
```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: webapp-source
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/acme-corp/webapp-manifests
  ref:
    branch: main
  secretRef:
    name: webapp-repo-credentials
```

The interval field controls how frequently Flux polls the repository for changes. A one-minute interval balances responsiveness with API rate limits. For private repositories, the secretRef points to a Kubernetes secret containing authentication credentials—either a personal access token or SSH key.
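The referenced credentials secret can be created ahead of time with the Flux CLI; a sketch using HTTPS basic auth, where the token value is a placeholder you supply:

```bash
flux create secret git webapp-repo-credentials \
  --namespace=flux-system \
  --url=https://github.com/acme-corp/webapp-manifests \
  --username=git \
  --password=<personal-access-token>
```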
Pro Tip: Store your GitRepository definitions in the same repository that Flux manages. This creates a self-referential system where adding new sources follows the same GitOps workflow as deploying applications.
Defining the Kustomization Resource
The Kustomization resource connects a source to a specific path containing Kubernetes manifests. Despite sharing a name with the Kustomize tool, this is a Flux-specific CRD that orchestrates the reconciliation process.
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: webapp
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: webapp-source
  path: ./deploy/production
  prune: true
  timeout: 3m
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: webapp
      namespace: webapp
  wait: true
  retryInterval: 2m
```

Several fields deserve attention here. The path specifies which directory within the repository contains the manifests—allowing a single repository to serve multiple environments or applications. Setting prune: true enables garbage collection: when you remove a manifest from Git, Flux deletes the corresponding resource from the cluster.
The healthChecks array defines success criteria for the reconciliation. Flux waits for the specified resources to become healthy before marking the sync as complete. Combined with wait: true, this ensures deployments actually succeed rather than just being submitted to the API server.
Watching Reconciliation in Action
With both resources committed to your Flux-managed repository, observe the reconciliation:
```bash
flux get sources git
flux get kustomizations
kubectl get events -n flux-system --sort-by='.lastTimestamp'
```

The first reconciliation fetches the repository, parses the manifests at the specified path, and applies them to the cluster. Subsequent reconciliations compare the desired state in Git against the actual cluster state, applying only the necessary changes.
To trigger an immediate sync without waiting for the interval:
```bash
flux reconcile source git webapp-source
flux reconcile kustomization webapp
```

For real-time visibility into the reconciliation process:

```bash
flux logs --follow --kind=Kustomization --name=webapp
```

This streams controller logs filtered to your specific application, showing each step from source fetching through manifest application.
Validating the Deployment Pipeline
Test the complete pipeline by making a change to your application manifests. Update an image tag or replica count, commit to the main branch, and watch Flux detect and apply the change:
```bash
watch flux get kustomizations
```

Within the configured interval, the webapp Kustomization transitions through states: first showing a new revision detected, then applying changes, and finally reporting the sync as successful with health checks passed.
Web dashboards for Flux exist as companion projects rather than core components, offering a visual view of this same information, but the CLI remains the most direct way to debug synchronization issues during initial setup.
You now have a functioning GitOps pipeline where commits to your manifest repository automatically propagate to running workloads. This pattern scales elegantly—the next section explores managing Helm releases and promoting changes across multiple environments using the same reconciliation model.
Helm Releases and Multi-Environment Promotion
Helm charts provide a templating layer that Flux leverages through the HelmRelease custom resource. This abstraction lets you define chart sources, version constraints, and environment-specific values in a declarative manner that Flux continuously reconciles against your cluster state. Understanding how to structure HelmReleases across multiple environments is essential for implementing a robust GitOps promotion workflow.
HelmRelease Fundamentals
A HelmRelease resource tells Flux which chart to deploy, from which source, and with what configuration:
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: api-service
  namespace: default
spec:
  interval: 5m
  chart:
    spec:
      chart: api-service
      version: ">=1.0.0 <2.0.0"
      sourceRef:
        kind: HelmRepository
        name: internal-charts
        namespace: flux-system
      interval: 1m
  values:
    replicaCount: 2
    image:
      repository: registry.example.com/api-service
      tag: v1.4.2
```

The interval field controls how frequently Flux checks for chart updates within your semver constraint. Combined with the HelmRepository source, this creates an automated upgrade path for patch and minor versions while preventing unexpected breaking changes. The chart’s interval determines how often Flux polls the repository for new versions, while the top-level interval controls reconciliation frequency for the deployed release.
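The internal-charts source referenced above is its own resource; a minimal HelmRepository sketch (the URL is a placeholder, and older Flux releases may still serve this kind under a beta API version):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: internal-charts
  namespace: flux-system
spec:
  interval: 30m
  url: https://charts.example.com
```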
Repository Structure for Multi-Environment
The directory-based promotion strategy provides clear separation and audit trails. Structure your repository to share base configurations while allowing environment-specific overrides:
```
├── apps/
│   ├── base/
│   │   └── api-service/
│   │       ├── helmrelease.yaml
│   │       └── kustomization.yaml
│   ├── development/
│   │   └── api-service/
│   │       ├── kustomization.yaml
│   │       └── values-patch.yaml
│   ├── staging/
│   │   └── api-service/
│   │       ├── kustomization.yaml
│   │       └── values-patch.yaml
│   └── production/
│       └── api-service/
│           ├── kustomization.yaml
│           └── values-patch.yaml
└── clusters/
    ├── dev-us-east-1/
    ├── staging-us-east-1/
    └── prod-us-east-1/
```

Each cluster directory contains a Kustomization that references the appropriate environment path. Promotion happens through pull requests that copy or merge changes from development through staging to production. This approach ensures that every environment change is reviewed, tested, and tracked in version control before reaching production clusters.
An alternative strategy uses branch-based promotion, where main represents production, and feature branches flow through development and staging branches before merging. Choose directory-based promotion when you need simultaneous visibility into all environment configurations, or branch-based promotion when you prefer linear progression with clear merge points.
Environment-Specific Values Without Duplication
Kustomize patches let you override specific HelmRelease fields per environment without duplicating the entire resource:
```yaml
# production/api-service/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: default
resources:
  - ../../base/api-service
patches:
  - path: values-patch.yaml
    target:
      kind: HelmRelease
      name: api-service
```

```yaml
# production/api-service/values-patch.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: api-service
spec:
  values:
    replicaCount: 5
    resources:
      requests:
        memory: "512Mi"
        cpu: "500m"
      limits:
        memory: "1Gi"
        cpu: "1000m"
    ingress:
      hosts:
        - api.example.com
      tls:
        - secretName: api-tls
          hosts:
            - api.example.com
```

The base HelmRelease defines sensible defaults and shared configuration. Environment patches modify only what differs—replica counts, resource allocations, ingress hostnames, and external service endpoints. This layered approach reduces configuration drift between environments while maintaining the flexibility to tune each deployment appropriately.
Pro Tip: Use valuesFrom to reference ConfigMaps or Secrets for values that change frequently or contain sensitive data. This separates configuration lifecycle from release definitions and integrates with external secret management solutions.
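A sketch of what that can look like on the HelmRelease, with illustrative ConfigMap and Secret names:

```yaml
spec:
  valuesFrom:
    - kind: ConfigMap
      name: api-service-values
      valuesKey: values.yaml
    - kind: Secret
      name: api-service-secrets
      valuesKey: password
      targetPath: database.password
```

Entries merge in order, so shared defaults can live in a ConfigMap while credentials stay in Secrets managed by a separate workflow.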
Implementing Promotion Gates
For controlled promotion, configure dependencies between environments using health checks and manual gates:
```yaml
spec:
  dependsOn:
    - name: api-service
      namespace: development
  install:
    remediation:
      retries: 3
  upgrade:
    remediation:
      retries: 3
      remediateLastFailure: true
```

This configuration prevents staging deployment until the development release reports healthy. Combined with branch protection rules requiring successful staging tests before merging to the production branch, you establish a promotion pipeline governed entirely through Git. The remediation settings ensure transient failures don’t block the pipeline unnecessarily while still catching persistent issues.
For additional control, you can suspend HelmReleases in production until explicit approval:
```yaml
spec:
  suspend: true
```

Removing the suspend flag through a pull request then triggers deployment, creating an explicit approval gate in your Git history.
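The Flux CLI offers an equivalent imperative toggle, useful during incident response, though the change it makes is not recorded in Git:

```bash
flux suspend helmrelease api-service -n default
flux resume helmrelease api-service -n default
```

For auditable approval gates, prefer flipping the suspend field through a pull request as described above.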
The combination of HelmRelease resources, Kustomize overlays, and directory-based environment separation gives you reproducible deployments across environments. Every configuration change flows through version control, creating an audit trail and enabling rollback through Git revert. Teams can review environment-specific differences at a glance and trace any production configuration back to its source commit.
With your multi-environment Helm deployments configured, the next step is understanding how Flux detects and corrects configuration drift—the self-healing behavior that distinguishes GitOps from traditional deployment pipelines.
Drift Detection and Self-Healing in Practice
GitOps promises that your Git repository remains the single source of truth—but what happens when someone runs kubectl edit directly against your cluster? Flux’s drift detection and self-healing capabilities ensure unauthorized changes get reverted automatically, maintaining the integrity of your declared state.
How Flux Detects Drift
Flux continuously compares the live cluster state against the manifests in your Git repository. When the kustomize-controller or helm-controller reconciles a resource, it renders the desired state from the source artifact and diffs it against what is actually running. Any discrepancy triggers a reconciliation event that Flux logs and acts upon.
The detection mechanism relies on Kubernetes server-side apply: Flux takes ownership of the fields it manages, and during each reconciliation cycle it builds the expected objects from your Git manifests, performs a dry-run apply, and compares the result against the live objects. This catches direct edits as well as changes made by other controllers to fields that Flux owns.
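You can inspect that field ownership directly on any managed object; kustomize-controller should appear among the field managers (the resource name here is illustrative):

```bash
kubectl get deployment api-server -n production -o yaml --show-managed-fields
```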
You can observe this behavior directly:
```bash
# Make an unauthorized change
kubectl scale deployment/api-server --replicas=5 -n production

# Watch Flux detect and revert it
flux get kustomizations --watch
```

Within the reconciliation interval (default 10 minutes), Flux restores the replica count to whatever your Git manifests specify.
Configuring Reconciliation Strategies
The Kustomization resource provides fine-grained control over how Flux handles drift:
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: production-apps
  namespace: flux-system
spec:
  interval: 5m
  path: ./apps/production
  prune: true
  force: false
  sourceRef:
    kind: GitRepository
    name: flux-system
  timeout: 3m
  retryInterval: 1m
```

The prune: true setting removes resources from the cluster when you delete them from Git—essential for complete state synchronization. Without pruning enabled, orphaned resources accumulate over time, creating configuration drift that’s difficult to track. Setting force: true enables Flux to recreate immutable resources (like Jobs or certain ConfigMaps with immutable fields) by deleting and reapplying them, though this causes brief downtime.
The retryInterval parameter determines how quickly Flux retries after a failed reconciliation, while timeout sets the maximum duration for applying changes. Tuning these values depends on your deployment size—larger applications with many resources may require longer timeouts to complete successfully.
Setting Up Drift Alerts
Visibility into drift events is critical for understanding operational patterns and identifying workflow gaps. Configure Flux to send alerts when reconciliation fails or drift gets detected:
```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Provider
metadata:
  name: slack-alerts
  namespace: flux-system
spec:
  type: slack
  channel: platform-alerts
  secretRef:
    name: slack-webhook-url
---
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Alert
metadata:
  name: drift-detection
  namespace: flux-system
spec:
  providerRef:
    name: slack-alerts
  eventSeverity: info
  eventSources:
    - kind: Kustomization
      name: '*'
    - kind: HelmRelease
      name: '*'
  eventMetadata:
    cluster: production-us-east-1
```

This configuration sends notifications for all Kustomization and HelmRelease events, including successful drift corrections and failed reconciliations. The eventMetadata field adds context to each alert, making it easier to identify which cluster generated the notification in multi-cluster environments.
Pro Tip: Track drift frequency as a platform metric. Frequent drift indicates either insufficient RBAC controls or teams circumventing GitOps workflows—both require organizational attention, not just technical solutions.
Testing Self-Healing
Validate your self-healing configuration before relying on it in production. Intentionally breaking deployments in a staging environment builds confidence that recovery works as expected:
```bash
# Delete a critical resource
kubectl delete configmap app-config -n production

# Verify Flux recreates it
flux reconcile kustomization production-apps --with-source

# Check the resource is restored
kubectl get configmap app-config -n production
```

The --with-source flag forces Flux to pull the latest manifests before reconciling, useful for immediate verification. Consider incorporating these validation steps into your disaster recovery runbooks—automated self-healing only provides value when you’ve confirmed it works under realistic failure conditions.
Drift detection provides operational confidence—you know manual changes won’t persist and your cluster state remains auditable through Git history. This self-healing behavior is one area where Flux and Argo CD take notably different approaches, which brings us to choosing between these two leading GitOps tools.
Flux vs Argo CD: Choosing the Right GitOps Tool
Both Flux and Argo CD implement the GitOps pull-based model, but they take fundamentally different approaches to solving the same problem. Understanding these differences helps you select the right tool for your infrastructure needs.
Architectural Philosophy
Flux follows a composable, controller-based architecture. Each component—source-controller, kustomize-controller, helm-controller, notification-controller—runs independently and communicates through Kubernetes custom resources. This modularity means you install only what you need. A team using plain manifests skips the Helm controller entirely.
Argo CD takes a monolithic approach centered around its web UI and API server. All functionality ships together, providing a cohesive experience but requiring more resources even when features go unused. The API server, application controller, repo server, and Redis cache run regardless of deployment complexity.
When the Dashboard Matters
Argo CD’s visual interface genuinely helps in specific scenarios: onboarding teams new to Kubernetes, debugging sync failures through the resource tree view, or providing stakeholder visibility into deployment status. Organizations with mixed technical audiences benefit from this accessibility.
For platform teams managing dozens of clusters programmatically, the dashboard becomes overhead. Flux’s CLI-first approach integrates naturally with existing automation. You query sync status through flux get kustomizations, pipe output to monitoring systems, and never open a browser.
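A rough sketch of that kind of automation, using only the CLI (adapt the filtering and destinations to your own tooling):

```bash
# Report reconciliation status across every namespace
flux get kustomizations --all-namespaces
flux get helmreleases --all-namespaces

# Surface recent controller errors for alerting
flux logs --level=error --all-namespaces
```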
Multi-Tenancy and Access Control
Both tools support multi-tenancy, but implementation differs substantially. Flux leverages native Kubernetes RBAC—tenants get namespaced service accounts with standard role bindings. No additional authentication layer sits between users and the cluster’s permission model.
Argo CD implements its own RBAC system through ConfigMaps, defining projects, roles, and policies separately from Kubernetes RBAC. This provides granular control over UI actions but introduces another permission layer to maintain. Teams already invested in Kubernetes RBAC find Flux’s approach more straightforward.
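As a sketch of the Flux pattern, a tenant's Kustomization can be pinned to a namespaced service account so that Kubernetes RBAC, not the GitOps tool, bounds what it may apply (names are illustrative):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: team-a-apps
  namespace: team-a
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: team-a-repo
  path: ./deploy
  prune: true
  targetNamespace: team-a
  serviceAccountName: team-a-reconciler  # applies run with this SA's permissions
```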
Migration Realities
Switching from Argo CD to Flux requires translating Application resources to Kustomization and HelmRelease CRDs. The concepts map cleanly: an Argo Application’s source and sync policy translate to a GitRepository and Kustomization pair. Migration scripts handle mechanical conversion, but expect to spend time on notification integrations and any custom health checks.
Moving from Flux to Argo CD involves similar translation work plus decisions about project structure and RBAC policy configuration.
Pro Tip: Run both tools in parallel on a non-production cluster for two weeks before committing. Real operational experience reveals workflow friction that documentation comparisons miss.
With your GitOps tooling decision made, the concepts from this article provide a foundation for implementing pull-based deployments that keep cluster credentials out of your CI pipelines while enabling the self-healing infrastructure your platform deserves.
Key Takeaways
- Remove cluster credentials from CI pipelines by switching to pull-based GitOps—your Jenkins server should never have kubectl access
- Start with a single Kustomization syncing one application path, then expand to multiple environments once the pattern is proven
- Enable drift detection alerts immediately—knowing when someone runs kubectl apply manually is half the value of GitOps
- Structure your Git repository with clear environment separation from day one; retrofitting is painful