# Argo CD Chart

A Helm chart for Argo CD, a declarative, GitOps continuous delivery tool for Kubernetes.

Source code can be found [here](https://argoproj.github.io/argo-cd/)

## Additional Information

This is a **community maintained** chart.

This chart installs [argo-cd](https://argoproj.github.io/argo-cd/), a declarative, GitOps continuous delivery tool for Kubernetes.

The default installation is intended to be similar to the provided ArgoCD [releases](https://github.com/argoproj/argo-cd/releases).

## High Availability

This chart installs the non-HA version of ArgoCD by default. If you want to run ArgoCD in HA mode, you can use one of the example values in the next sections. Please also have a look into the upstream [Operator Manual regarding High Availability](https://argoproj.github.io/argo-cd/operator-manual/high_availability/) to understand how scaling of ArgoCD works in detail.

> **Warning:**
> You need at least 3 worker nodes as the HA mode of redis enforces Pods to run on separate nodes.

### HA mode with autoscaling

```yaml
redis-ha:
  enabled: true

controller:
  enableStatefulSet: true

server:
  autoscaling:
    enabled: true
    minReplicas: 2

repoServer:
  autoscaling:
    enabled: true
    minReplicas: 2
```

### HA mode without autoscaling

```yaml
redis-ha:
  enabled: true

controller:
  enableStatefulSet: true

server:
  replicas: 2
  env:
    - name: ARGOCD_API_SERVER_REPLICAS
      value: '2'

repoServer:
  replicas: 2
```

### Synchronizing Changes from Original Repository

In the original [ArgoCD repository](https://github.com/argoproj/argo-cd/) a [`manifests/install.yaml`](https://github.com/argoproj/argo-cd/blob/master/manifests/install.yaml) is generated using `kustomize`. It is the basis for the installation as [described in the docs](https://argo-cd.readthedocs.io/en/stable/getting_started/#1-install-argo-cd).

When installing ArgoCD using this Helm chart the user should have a similar experience and configuration rolled out. Hence, it makes sense to try to achieve a similar output of rendered `.yaml` resources when calling `helm template` using the default settings in `values.yaml`.

To update the templates and default settings in `values.yaml` it may come in handy to look up the diff of `manifests/install.yaml` between two versions. This can either be done directly via GitHub by comparing the two tags and looking for `manifests/install.yaml`:

https://github.com/argoproj/argo-cd/compare/v1.8.7...v2.0.0#files_bucket

Or you can clone the repository and do a local `git diff`:

```bash
git clone https://github.com/argoproj/argo-cd.git
cd argo-cd
git diff v1.8.7 v2.0.0 -- manifests/install.yaml
```

Changes in the `CustomResourceDefinition` resources can easily be fixed by copying them 1:1 from the [`manifests/crds` folder](https://github.com/argoproj/argo-cd/tree/master/manifests/crds) into this chart's [`charts/argo-cd/crds` folder](https://github.com/argoproj/argo-helm/tree/master/charts/argo-cd/crds).

## Upgrading

### 3.13.0

This release removes the flag `--staticassets` from the argocd server as it has been dropped upstream. If this flag needs to be enabled, e.g. for older releases of ArgoCD, it can be passed via the `server.extraArgs` field.
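
For example, a minimal sketch (assuming an older Argo CD image that still understands the flag; the `/shared/app` path is the one used by older upstream manifests and may need adjusting for your version):

```yaml
server:
  extraArgs:
    - --staticassets=/shared/app   # only valid for image versions that still ship this flag
```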
### 3.10.2

ArgoCD has recently deprecated the flag `--staticassets`, and from chart version `3.10.2` it is disabled by default. It can be re-enabled by setting `server.staticAssets.enabled` to `true`.

### 3.8.1

This bugfix version potentially introduces a rename (and recreation) of one or more ServiceAccounts. It _only happens_ when you use one of these customizations:

```yaml
# Case 1) - only happens when you do not specify a custom name (repoServer.serviceAccount.name)
repoServer:
  serviceAccount:
    create: true

# Case 2)
controller:
  serviceAccount:
    name: ""
# or
# Case 3)
dex:
  serviceAccount:
    name: ""
# or
# Case 4)
server:
  serviceAccount:
    name: ""
```

Please check if you are affected by one of these cases **before you upgrade**, especially when you use **cloud IAM roles for service accounts** (e.g. IRSA on AWS or Workload Identity for GKE).
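
If you rely on such IAM bindings, one way to keep them stable is to pin the ServiceAccount names explicitly before upgrading. A sketch only; the names below are placeholders, use whatever names your IAM bindings currently reference:

```yaml
# Placeholder names -- replace with the ServiceAccount names your IAM bindings expect
controller:
  serviceAccount:
    name: my-argocd-application-controller
dex:
  serviceAccount:
    name: my-argocd-dex-server
server:
  serviceAccount:
    name: my-argocd-server
repoServer:
  serviceAccount:
    create: true
    name: my-argocd-repo-server
```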
### 3.2.*

With this minor version we introduced the evaluation for the ingress manifest (depending on the capabilities version), see this [Pull Request](https://github.com/argoproj/argo-helm/pull/637).

[Issue 703](https://github.com/argoproj/argo-helm/issues/703) reported that the capabilities evaluation is **not handled correctly when deploying the chart via an ArgoCD instance**, especially when deploying on clusters running a cluster version prior to `1.19` (which misses `Ingress` on apiVersion `networking.k8s.io/v1`).

If you are running a cluster version prior to `1.19` you can avoid this issue by directly installing chart version `3.6.0` and setting `kubeVersionOverride` like:

```yaml
kubeVersionOverride: "1.18.0"
```

Then you should no longer encounter this issue.

### 3.0.0 and above

Helm apiVersion switched to `v2`. Requires Helm `3.0.0` or above to install. [Read more](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) on how to migrate your release from Helm 2 to Helm 3.

### 2.14.7 and above

The `matchLabels` key in the ArgoCD Application Controller is no longer hard-coded. Note that labels are immutable, so caution should be exercised when making changes to this resource.

### 2.10.x to 2.11.0

The application controller is now available as a `StatefulSet` when the `controller.enableStatefulSet` flag is set to true. Depending on your Helm deployment this may be a downtime or breaking change if enabled when using HA, and it will become the default in 3.x.

### 1.8.7 to 2.x.x

`controller.extraArgs`, `repoServer.extraArgs` and `server.extraArgs` are now arrays of strings instead of a map.

What was:

```yaml
server:
  extraArgs:
    insecure: ""
```

is now:

```yaml
server:
  extraArgs:
    - --insecure
```

## Prerequisites

- Kubernetes 1.7+
- Helm v3.0.0+

## Installing the Chart

To install the chart with the release name `my-release`:

```console
$ helm repo add argo https://argoproj.github.io/argo-helm
"argo" has been added to your repositories

$ helm install my-release argo/argo-cd
NAME: my-release
...
```
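Chart defaults can be overridden with a values file passed to `helm install` via `-f`/`--values`, e.g. `helm install my-release argo/argo-cd -f my-values.yaml`. The snippet below is only an illustration; the filename and keys are examples, and all available keys are documented in the parameter tables that follow:

```yaml
# my-values.yaml -- illustrative overrides only
server:
  extraArgs:
    - --insecure   # e.g. when TLS is terminated at an ingress in front of the server
  ingress:
    enabled: true
```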
## General parameters

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if not (or (hasPrefix "controller" .Key) (hasPrefix "repoServer" .Key) (hasPrefix "server" .Key) (hasPrefix "dex" .Key) (hasPrefix "redis" .Key) ) }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- if hasPrefix "server.additional" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

## ArgoCD Controller

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "controller" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

## Argo Repo Server

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "repoServer" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

## Argo Server

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if and (hasPrefix "server" .Key) (not (hasPrefix "server.additional" .Key)) }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

## Dex

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "dex" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

## Redis

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "redis." .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}
{{- range .Values }}
{{- if hasPrefix "redis-ha" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Using AWS ALB Ingress Controller With GRPC

If you are using an AWS ALB Ingress controller, you will need to set `server.ingressGrpc.isAWSALB` to `true`. This will create a second service with the annotation `alb.ingress.kubernetes.io/backend-protocol-version: HTTP2` and modify the server ingress to add a condition annotation to route GRPC traffic to the new service.

Example:

```yaml
server:
  ingress:
    enabled: true
    annotations:
      alb.ingress.kubernetes.io/backend-protocol: HTTPS
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
      alb.ingress.kubernetes.io/scheme: internal
      alb.ingress.kubernetes.io/target-type: ip
  ingressGrpc:
    enabled: true
    isAWSALB: true
    awsALB:
      serviceType: ClusterIP
```

{{ template "helm-docs.versionFooter" . }}

[ArgoCD RBAC policy]: https://argoproj.github.io/argo-cd/operator-manual/rbac/
[affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
[BackendConfigSpec]: https://cloud.google.com/kubernetes-engine/docs/concepts/backendconfig#backendconfigspec_v1beta1_cloudgooglecom
[CSS styles]: https://argo-cd.readthedocs.io/en/stable/operator-manual/custom-styles/
[external cluster credentials]: https://argoproj.github.io/argo-cd/operator-manual/declarative-setup/#clusters
[General Argo CD configuration]: https://argoproj.github.io/argo-cd/operator-manual/declarative-setup/#repositories
[gRPC-ingress]: https://argoproj.github.io/argo-cd/operator-manual/ingress/
[HPA]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
[MetricRelabelConfigs]: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs
[Node selector]: https://kubernetes.io/docs/user-guide/node-selection/
[probe]: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
[RelabelConfigs]: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
[Tolerations]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
[TopologySpreadConstraints]: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
[values.yaml]: values.yaml