We do not provide our own images for those components yet, which is
causing some incompatibilities and test failures.
Signed-off-by: Jan Martens <jan@martens.eu.org>
When updating the Vault config (and corresponding configmap), we now
generate a checksum of the config and set it as an annotation on both
the configmap and the Vault StatefulSet pod template.
This allows the deployer to know which pods need to
be restarted to pick up a changed config.
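For reference, one way to inspect the resulting annotations could look like the following (the StatefulSet name assumes a release called `vault`; the configmap name and annotation key match the check further below):

```shell
# Compare the checksum annotation on the configmap vs. the StatefulSet pod template
kubectl get configmap vault-config -o json | jq -r '.metadata.annotations."config/checksum"'
kubectl get statefulset vault -o json | jq -r '.spec.template.metadata.annotations."config/checksum"'
```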
We still recommend using the standard upgrade
[method for Vault on Kubernetes](https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-raft-deployment-guide#upgrading-vault-on-kubernetes),
i.e., using the `OnDelete` strategy for the Vault StatefulSet so that
updating the config and running `helm upgrade` does not trigger the
pods to restart, and then deleting pods one
at a time, starting with the standby pods.
With `kubectl` and `jq`, you can check which
pods need to be updated by first getting the value
of the current configmap checksum:
```shell
kubectl get pods -o json | jq -r ".items[] | select(.metadata.annotations.\"config/checksum\" != $(kubectl get configmap vault-config -o json | jq '.metadata.annotations."config/checksum"') ) | .metadata.name"
```
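A rough sketch of that per-pod restart flow, assuming Vault's Kubernetes service registration is enabled (so pods carry the `vault-active` label) and the usual `app.kubernetes.io/name=vault` pod labels:

```shell
# Delete standby pods one at a time, waiting for each to come back Ready,
# then delete the active pod last.
for pod in $(kubectl get pods -l app.kubernetes.io/name=vault,vault-active=false -o name); do
  kubectl delete "$pod"
  sleep 5  # give the StatefulSet controller a moment to recreate the pod
  kubectl wait --for=condition=Ready "$pod" --timeout=300s
done
kubectl delete pod -l app.kubernetes.io/name=vault,vault-active=true
```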
Fixes #748.
---------
Co-authored-by: Tom Proctor <tomhjp@users.noreply.github.com>
* feat: allow server netPol to specify podSelector
* feat(test): add podSelector NetworkPolicy unittest
* chore: introduce server.networkPolicy.ingress
As suggested, let users template the whole ingress object for the
networkPolicy rather than only the podSelector.
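As a sketch, the new `server.networkPolicy.ingress` setting might be used like this (the ingress entries follow the standard NetworkPolicy spec; the client pod labels, release name, and chart reference are purely illustrative):

```shell
cat > netpol-values.yaml <<'EOF'
server:
  networkPolicy:
    enabled: true
    ingress:
      - from:
          - podSelector:
              matchLabels:
                app: vault-client
        ports:
          - port: 8200
            protocol: TCP
EOF
helm upgrade vault hashicorp/vault -f netpol-values.yaml
```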
Co-authored-by: tvoran <444265+tvoran@users.noreply.github.com>
---------
Co-authored-by: tvoran <444265+tvoran@users.noreply.github.com>
This variable sets the persistentVolumeClaimRetentionPolicy
value in the server-statefulset.yaml template, which
configures the retention policy for the PVCs used by the server
StatefulSet.
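A minimal sketch, assuming the value lives under `server.persistentVolumeClaimRetentionPolicy` and is passed straight through to the StatefulSet field of the same name (release name and chart reference are illustrative):

```shell
cat > pvc-retention-values.yaml <<'EOF'
server:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain   # Kubernetes accepts Retain or Delete
    whenScaled: Retain
EOF
helm upgrade vault hashicorp/vault -f pvc-retention-values.yaml
```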
* Prepare for 0.25.0 release
* Update CSI acceptance test assertion
Starting in 1.4.0, the CSI provider caches Vault tokens locally. The main thing
we want to check is that the Agent cache is being used and is doing the
renewal legwork for any leased secrets, so check for the renewal log message
instead, since the CSI provider no longer authenticates over and over.
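A loose illustration of that kind of check (the label selector is an assumption based on the chart's naming; the exact renewal message depends on the Agent version, so this just greps broadly):

```shell
kubectl logs --all-containers -l app.kubernetes.io/name=vault-csi-provider | grep -i renew
```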
Support for prometheus-operator was added in
https://github.com/hashicorp/vault-helm/pull/772 and a default empty
set of rules was defined as an empty map `{}`. However, as evidenced
by the commented-out rule examples further down in that same values.yaml,
this is expected to be a list, so the `rules:` value should be set to an
empty list `[]`.
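Sketched as a values snippet (the `serverTelemetry.prometheusRules` path is assumed from that values.yaml):

```shell
cat > telemetry-values.yaml <<'EOF'
serverTelemetry:
  prometheusRules:
    rules: []   # an empty list, not an empty map {}
EOF
```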
Co-authored-by: Marc 'risson' Schmitt <marc.schmitt@risson.space>
Co-authored-by: Vitaliy <vitaliyf@users.noreply.github.com>
Adds Agent as a sidecar for the CSI Provider to:
* Cache k8s auth login leases
* Cache secret leases
* Automatically renew renewable leases in the background
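A quick way to confirm the sidecar is present is to list the containers of the CSI provider pods (the label selector is an assumption based on the chart's naming conventions):

```shell
kubectl get pods -l app.kubernetes.io/name=vault-csi-provider \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'
```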
* Add a configurable port number to the readinessProbe and livenessProbe for the server statefulset.
Co-authored-by: Kyle Schochenmaier <kyle.schochenmaier@hashicorp.com>
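The new settings can presumably be set like this (the exact value keys are an assumption; 8200 is just the default Vault API port, and the release name and chart reference are illustrative):

```shell
helm upgrade vault hashicorp/vault \
  --set server.readinessProbe.port=8200 \
  --set server.livenessProbe.port=8200
```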
Test with the latest kind k8s versions 1.22-1.26. Remove support for old
PodDisruptionBudget and Ingress API versions (pre-1.22).
Pin all actions to SHAs, and use the common jira sync.
Update the default Vault version to v1.13.1.
Update chart-verifier used in tests to 1.10.1, and add an openshift
name annotation to Chart.yaml (one of the required checks).
* remove 1.16 from the versions tested in .github/workflows/acceptance.yaml as kind no longer supports creating a k8s 1.16 cluster
* update vault-helm's minimum supported k8s version to 1.20 in README and Chart.yaml
* refactor server-ingress's templating and unit tests applied to k8s versions < 1.20