Refactor chart for 1.0, add tests, update TF (#2)
* Refactor chart for 1.0, add tests, update TF
* Fix typo in helper comment
* Add NOTES for post install instructions
* Fix typo in NOTES
* Fix replication port for enterprise
* Change updateStrategy to OnDelete
* Add icon
* Remove cluster address from config
* Update README, add contributing doc
* Update README
* Change HA replicas to 3
parent ca40087add
commit b7469914e2
32 changed files with 1977 additions and 862 deletions
4  .gitignore (vendored)
@@ -4,3 +4,7 @@
terraform.tfstate*
terraform.tfvars
values.dev.yaml
vaul-helm-dev-creds.json
./test/acceptance/vaul-helm-dev-creds.json
./test/acceptance/values.yaml
./test/acceptance/values.yml
55  CONTRIBUTING.md (new file)
@@ -0,0 +1,55 @@
# Contributing to Vault Helm

**Please note:** We take Vault's security and our users' trust very seriously.
If you believe you have found a security issue in Vault, please responsibly
disclose by contacting us at security@hashicorp.com.

**First:** if you're unsure or afraid of _anything_, just ask or submit the
issue or pull request anyway. You won't be yelled at for giving it your best
effort. The worst that can happen is that you'll be politely asked to change
something. We appreciate any sort of contribution, and don't want a wall of
rules to get in the way of that.

That said, if you want to ensure that a pull request is likely to be merged,
talk to us! You can find out our thoughts and ensure that your contribution
won't clash with or be obviated by Vault's normal direction. A great way to do this
is via the [Vault Google Group][2]. Sometimes Vault devs are in `#vault-tool`
on Freenode, too.

This document covers what we're looking for in reported issues. By addressing
all the points below, you raise the chances that we can quickly merge or
address your contribution.

## Issues

### Reporting an Issue

* Make sure you test against the latest released version. It is possible
  we already fixed the bug you're experiencing. Even better, test
  against `master`, as bugs are fixed regularly but new versions are only
  released every few months.

* Provide steps to reproduce the issue, and if possible include the expected
  results as well as the actual results. Please provide text, not screenshots!

* Respond as promptly as possible to any questions asked by the Vault
  team about your issue. Stale issues will be closed periodically.

### Issue Lifecycle

1. The issue is reported.

2. The issue is verified and categorized by a Vault Helm collaborator.
   Categorization is done via tags. For example, bugs are marked as "bugs".

3. Unless it is critical, the issue may be left for a period of time (sometimes
   many weeks), giving outside contributors -- maybe you!? -- a chance to
   address the issue.

4. The issue is addressed in a pull request or commit. The issue will be
   referenced in the commit message so that the code that fixes it is clearly
   linked.

5. The issue is closed. Sometimes, valid issues will be closed to keep
   the issue tracker clean. The issue is still indexed and available for
   future viewers, or can be re-opened if necessary.
@@ -3,6 +3,7 @@ name: vault
 version: 0.1.0
 description: Install and configure Vault on Kubernetes.
 home: https://www.vaultproject.io
+icon: https://github.com/hashicorp/vault/raw/f22d202cde2018f9455dec755118a9b84586e082/Vault_PrimaryLogo_Black.png
 sources:
 - https://github.com/hashicorp/vault
 - https://github.com/hashicorp/vault-helm
158  README.md
@@ -1,17 +1,12 @@
 # Vault Helm Chart
 
-------
-## WIP - forked from vault-Helm and under heavy development
-------
-
 This repository contains the official HashiCorp Helm chart for installing
 and configuring Vault on Kubernetes. This chart supports multiple use
 cases of Vault on Kubernetes depending on the values provided.
 
-[//]: # (These docs don't exist yet)
-[//]: # (For full documentation on this Helm chart along with all the ways you can)
-[//]: # (use Vault with Kubernetes, please see the)
-[//]: # ([Vault and Kubernetes documentation](https://www.vault.io/docs/platform/k8s/index.html).)
+For full documentation on this Helm chart along with all the ways you can
+use Vault with Kubernetes, please see the
+[Vault and Kubernetes documentation](https://www.vaultproject.io/docs/platform/k8s/index.html).
 
 ## Prerequisites
@@ -38,18 +33,9 @@ then be installed directly:
 
     helm install ./vault-helm
 
-[//]: # (These docs don't exist yet)
-[//]: # (Please see the many options supported in the `values.yaml`)
-[//]: # (file. These are also fully documented directly on the)
-[//]: # ([Vault website](https://www.vault.io/docs/platform/k8s/helm.html).)
-
-### Using auto-unseal
-
-Starting with Vault 1.0-beta, auto-unseal features are now included in the open source
-version of Vault. In order to use these features, users must ensure that
-the Vault configuration is provided with the appropriate credentials and authorizations
-to access the APIs needed for the given key provider, as well as the necessary keys
-created beforehand.
+Please see the many options supported in the `values.yaml`
+file. These are also fully documented directly on the
+[Vault website](https://www.vaultproject.io/docs/platform/k8s/helm.html).
 
 ## Testing
@@ -58,9 +44,22 @@ The Helm chart ships with both unit and acceptance tests.

The unit tests don't require any active Kubernetes cluster and complete
very quickly. These should be used for fast feedback during development.
The acceptance tests require a Kubernetes cluster with a configured `kubectl`.
Both require [Bats](https://github.com/bats-core/bats-core) and `helm` to be
installed and available on the CLI. The unit tests also require the correct
version of [yq](https://pypi.org/project/yq/) if running locally.

### Prerequisites
* [Bats](https://github.com/bats-core/bats-core)
  ```bash
  brew install bats-core
  ```
* [yq](https://pypi.org/project/yq/)
  ```bash
  brew install python-yq
  ```
* [helm](https://helm.sh)
  ```bash
  brew install kubernetes-helm
  ```

### Running The Tests

To run the unit tests:
@@ -75,8 +74,119 @@ may not be properly cleaned up. We recommend recycling the Kubernetes cluster to
start from a clean slate.

**Note:** There is a Terraform configuration in the
-[test/terraform/ directory](https://github.com/hashicorp/vault-helm/tree/master/test/terraform)
+[`test/terraform/`](https://github.com/hashicorp/vault-helm/tree/master/test/terraform) directory
that can be used to quickly bring up a GKE cluster and configure
`kubectl` and `helm` locally. This can be used to quickly spin up a test
cluster for acceptance tests. Unit tests _do not_ require a running Kubernetes
cluster.

### Writing Unit Tests

Changes to the Helm chart should be accompanied by appropriate unit tests.

#### Formatting

- Put tests in the test file in the same order as the variables appear in `values.yaml`.
- Start tests for a chart value with a header that says what is being tested, like this:
  ```
  #--------------------------------------------------------------------
  # annotations
  ```

- Name the test based on what it's testing in the following format (this will be its first line):
  ```
  @test "<section being tested>: <short description of the test case>" {
  ```

  When adding tests to an existing file, the first section will be the same as the other tests in the file.

#### Test Details

[Bats](https://github.com/bats-core/bats-core) provides a way to run commands in a shell and inspect the output in an automated way.
In all of the tests in this repo, the base command being run is [helm template](https://docs.helm.sh/helm/#helm-template), which turns the templated files into straight yaml output.
In this way, we're able to test that the various conditionals in the templates render as we would expect.

Each test defines the files that should be rendered using the `-x` flag, then it might adjust chart values by adding `--set` flags as well.
The output from this `helm template` command is then piped to [yq](https://pypi.org/project/yq/).
`yq` allows us to pull out just the information we're interested in, either by referencing its position in the yaml file directly or by giving information about it (like its length).
The `-r` flag can be used with `yq` to return a raw string instead of a quoted one, which is especially useful when looking for an exact match.

The test passes or fails based on the conditional at the end that is in square brackets, which is a comparison of our expected value and the output of `helm template` piped to `yq`.

The `| tee /dev/stderr` pieces direct any terminal output of the `helm template` and `yq` commands to stderr so that it doesn't interfere with `bats`.
#### Test Examples

Here are some examples of common test patterns:

- Check that a value is disabled by default

  ```
  @test "ui/Service: no type by default" {
    cd `chart_dir`
    local actual=$(helm template \
        -x templates/ui-service.yaml \
        . | tee /dev/stderr |
        yq -r '.spec.type' | tee /dev/stderr)
    [ "${actual}" = "null" ]
  }
  ```

  In this example, nothing is changed from the default templates (no `--set` flags), then we use `yq` to retrieve the value we're checking, `.spec.type`.
  This output is then compared against our expected value (`null` in this case) in the assertion `[ "${actual}" = "null" ]`.

- Check that a template value is rendered to a specific value
  ```
  @test "ui/Service: specified type" {
    cd `chart_dir`
    local actual=$(helm template \
        -x templates/ui-service.yaml \
        --set 'ui.serviceType=LoadBalancer' \
        . | tee /dev/stderr |
        yq -r '.spec.type' | tee /dev/stderr)
    [ "${actual}" = "LoadBalancer" ]
  }
  ```

  This is very similar to the last example, except we've changed a default value with the `--set` flag and correspondingly changed the expected value.

- Check that a template value contains several values
  ```
  @test "server/standalone-StatefulSet: custom resources" {
    cd `chart_dir`
    local actual=$(helm template \
        -x templates/server-statefulset.yaml \
        --set 'server.standalone.enabled=true' \
        --set 'server.resources.requests.memory=256Mi' \
        --set 'server.resources.requests.cpu=250m' \
        . | tee /dev/stderr |
        yq -r '.spec.template.spec.containers[0].resources.requests.memory' | tee /dev/stderr)
    [ "${actual}" = "256Mi" ]

    local actual=$(helm template \
        -x templates/server-statefulset.yaml \
        --set 'server.standalone.enabled=true' \
        --set 'server.resources.limits.memory=256Mi' \
        --set 'server.resources.limits.cpu=250m' \
        . | tee /dev/stderr |
        yq -r '.spec.template.spec.containers[0].resources.limits.memory' | tee /dev/stderr)
    [ "${actual}" = "256Mi" ]
  }
  ```

  *Note:* If testing more than two conditions, it would be good to separate the `helm template` part of the command from the `yq` sections to reduce redundant work.
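A minimal sketch of the pattern that note suggests: render once, store the output, then run each check against the stored text instead of invoking `helm template` again. The yaml below is a hypothetical stand-in for real chart output, as indicated in the comments.

```shell
# Render once. In a real test this heredoc would instead be:
#   rendered=$(helm template -x templates/server-statefulset.yaml --set '...' .)
rendered=$(cat <<'EOF'
resources:
  requests:
    memory: 256Mi
    cpu: 250m
EOF
)

# Each assertion reuses $rendered rather than re-rendering the chart.
echo "$rendered" | grep -q 'memory: 256Mi' && echo "memory ok"
echo "$rendered" | grep -q 'cpu: 250m' && echo "cpu ok"
```

In a bats test you would keep the `[ ... ]` assertion style; the point is only that the expensive `helm template` call runs once per test, not once per condition.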
- Check that an entire template file is not rendered
  ```
  @test "syncCatalog/Deployment: disabled by default" {
    cd `chart_dir`
    local actual=$(helm template \
        -x templates/server-statefulset.yaml \
        --set 'global.enabled=false' \
        . | tee /dev/stderr |
        yq 'length > 0' | tee /dev/stderr)
    [ "${actual}" = "false" ]
  }
  ```
  Here we check the length of the command output to see if anything is rendered.
  This style can easily be switched to check that a file is rendered instead.
14  templates/NOTES.txt (new file)
@@ -0,0 +1,14 @@

Thank you for installing HashiCorp Vault!

Now that you have deployed Vault, you should look over the docs on using
Vault with Kubernetes available here:

https://www.vaultproject.io/docs/


Your release is named {{ .Release.Name }}. To learn more about the release, try:

  $ helm status {{ .Release.Name }}
  $ helm get {{ .Release.Name }}
@@ -37,11 +37,205 @@ This defaults to (n/2)-1 where n is the number of members of the server cluster.
 Add a special case for replicas=1, where it should default to 0 as well.
 */}}
 {{- define "vault.pdb.maxUnavailable" -}}
-{{- if eq (int .Values.serverHA.replicas) 1 -}}
+{{- if eq (int .Values.server.ha.replicas) 1 -}}
 {{ 0 }}
-{{- else if .Values.serverHA.disruptionBudget.maxUnavailable -}}
-{{ .Values.serverHA.disruptionBudget.maxUnavailable -}}
+{{- else if .Values.server.ha.disruptionBudget.maxUnavailable -}}
+{{ .Values.server.ha.disruptionBudget.maxUnavailable -}}
 {{- else -}}
-{{- ceil (sub (div (int .Values.serverHA.replicas) 2) 1) -}}
+{{- ceil (sub (div (int .Values.server.ha.replicas) 2) 1) -}}
 {{- end -}}
 {{- end -}}
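The arithmetic in this helper can be sanity-checked outside of Go templating. A shell sketch covering only the default branch (integer division, as Sprig's `div` does) and the replicas=1 special case, not the `maxUnavailable` override:

```shell
# (n/2)-1 with integer division; 0 when there is a single replica.
max_unavailable() {
  n=$1
  if [ "$n" -eq 1 ]; then
    echo 0
  else
    echo $(( n / 2 - 1 ))
  fi
}

max_unavailable 1   # prints 0 (special case)
max_unavailable 3   # prints 0 -- the default 3-replica HA cluster allows no voluntary evictions
max_unavailable 5   # prints 1
```

Note that for the chart's new default of 3 HA replicas this evaluates to 0, so the PodDisruptionBudget blocks all voluntary disruptions unless `disruptionBudget.maxUnavailable` is set explicitly.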
{{/*
Set the variable 'mode' to the server mode requested by the user to simplify
template logic.
*/}}
{{- define "vault.mode" -}}
{{- if eq (.Values.server.dev.enabled | toString) "true" -}}
{{- $_ := set . "mode" "dev" -}}
{{- else if eq (.Values.server.ha.enabled | toString) "true" -}}
{{- $_ := set . "mode" "ha" -}}
{{- else if or (eq (.Values.server.standalone.enabled | toString) "true") (eq (.Values.server.standalone.enabled | toString) "-") -}}
{{- $_ := set . "mode" "standalone" -}}
{{- else -}}
{{- $_ := set . "mode" "" -}}
{{- end -}}
{{- end -}}

{{/*
Sets the replica count based on the different modes configured by the user.
*/}}
{{- define "vault.replicas" -}}
{{ if eq .mode "standalone" }}
{{- default 1 -}}
{{ else if eq .mode "ha" }}
{{- .Values.server.ha.replicas | default 3 -}}
{{ else }}
{{- default 1 -}}
{{ end }}
{{- end -}}

{{/*
Sets fsGroup based on the different modes. Standalone is the only mode
that requires fsGroup at this time because it uses a PVC for the file
storage backend.
*/}}
{{- define "vault.fsgroup" -}}
{{ if eq .mode "standalone" }}
{{- .Values.server.storageFsGroup | default 1000 -}}
{{ end }}
{{- end -}}

{{/*
Sets up configmap mounts if this isn't a dev deployment and the user
defined a custom configuration. Additionally iterates over any
extra volumes the user may have specified (such as a secret with TLS).
*/}}
{{- define "vault.volumes" -}}
  {{- if and (ne .mode "dev") (or (ne .Values.server.standalone.config "") (ne .Values.server.ha.config "")) }}
        - name: config
          configMap:
            name: {{ template "vault.fullname" . }}-config
  {{ end }}
  {{- range .Values.server.extraVolumes }}
        - name: userconfig-{{ .name }}
          {{ .type }}:
            {{- if (eq .type "configMap") }}
            name: {{ .name }}
            {{- else if (eq .type "secret") }}
            secretName: {{ .name }}
            {{- end }}
  {{- end }}
{{- end -}}

{{/*
Sets a command to override the entrypoint defined in the image
so we can make the user experience nicer. This works together with
"vault.args" to specify what commands /bin/sh should run.
*/}}
{{- define "vault.command" -}}
  {{ if or (eq .mode "standalone") (eq .mode "ha") }}
          - "/bin/sh"
          - "-ec"
  {{ end }}
{{- end -}}

{{/*
Sets the args for the custom command to render the Vault configuration
file with IP addresses to make the out-of-the-box experience easier
for users looking to use this chart with Consul Helm.
*/}}
{{- define "vault.args" -}}
  {{ if or (eq .mode "standalone") (eq .mode "ha") }}
          - |
            sed -E "s/HOST_IP/${HOST_IP?}/g" /vault/config/extraconfig-from-values.hcl > /tmp/storageconfig.hcl;
            sed -Ei "s/POD_IP/${POD_IP?}/g" /tmp/storageconfig.hcl;
            chown vault:vault /tmp/storageconfig.hcl;
            /usr/local/bin/docker-entrypoint.sh vault server -config=/tmp/storageconfig.hcl
  {{ end }}
{{- end -}}
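The substitution performed by "vault.args" can be exercised outside the chart. This sketch uses a stub config file and made-up IP addresses (and assumes GNU `sed` for the `-Ei` in-place flag, as the container image does) to show how the `HOST_IP`/`POD_IP` placeholders are rewritten at container start:

```shell
# Stub of /vault/config/extraconfig-from-values.hcl with the placeholders
# the chart's args replace at startup (the storage block and IPs are made up).
cat > /tmp/extraconfig-from-values.hcl <<'EOF'
storage "consul" {
  address = "HOST_IP:8500"
}
api_addr = "http://POD_IP:8200"
EOF

# In the pod these come from the downward API (status.hostIP / status.podIP).
HOST_IP=10.0.0.5
POD_IP=10.1.2.3

# Same two-step rewrite as the chart: render to the runtime config file,
# then edit the second placeholder in place.
sed -E "s/HOST_IP/${HOST_IP?}/g" /tmp/extraconfig-from-values.hcl > /tmp/storageconfig.hcl
sed -Ei "s/POD_IP/${POD_IP?}/g" /tmp/storageconfig.hcl

cat /tmp/storageconfig.hcl
```

The `${HOST_IP?}` form makes the shell abort with an error if the variable is unset, so a misconfigured pod fails fast instead of writing a config with literal placeholders.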
|
{{/*
Sets additional environment variables based on the mode.
*/}}
{{- define "vault.envs" -}}
  {{ if eq .mode "dev" }}
          - name: VAULT_DEV_ROOT_TOKEN_ID
            value: "root"
  {{ end }}
{{- end -}}

{{/*
Sets which additional volumes should be mounted to the container
based on the mode configured.
*/}}
{{- define "vault.mounts" -}}
  {{ if eq .mode "standalone" }}
    {{ if eq (.Values.server.auditStorage.enabled | toString) "true" }}
          - name: audit
            mountPath: /vault/audit
    {{ end }}
    {{ if eq (.Values.server.dataStorage.enabled | toString) "true" }}
          - name: data
            mountPath: /vault/data
    {{ end }}
  {{ end }}
  {{ if and (ne .mode "dev") (or (ne .Values.server.standalone.config "") (ne .Values.server.ha.config "")) }}
          - name: config
            mountPath: /vault/config
  {{ end }}
  {{- range .Values.server.extraVolumes }}
          - name: userconfig-{{ .name }}
            readOnly: true
            mountPath: /vault/userconfig/{{ .name }}
  {{- end }}
{{- end -}}

{{/*
Sets up the volumeClaimTemplates when data or audit storage is required. HA
might not use data storage since Consul is likely its backend; however, audit
storage might be desired by the user.
*/}}
{{- define "vault.volumeclaims" -}}
  {{- if and (ne .mode "dev") (or .Values.server.dataStorage.enabled .Values.server.auditStorage.enabled) }}
  volumeClaimTemplates:
    {{- if and (eq (.Values.server.dataStorage.enabled | toString) "true") (eq .mode "standalone") }}
    - metadata:
        name: data
      spec:
        accessModes:
          - {{ .Values.server.dataStorage.accessMode | default "ReadWriteOnce" }}
        resources:
          requests:
            storage: {{ .Values.server.dataStorage.size }}
        {{- if .Values.server.dataStorage.storageClass }}
        storageClassName: {{ .Values.server.dataStorage.storageClass }}
        {{- end }}
    {{ end }}
    {{- if eq (.Values.server.auditStorage.enabled | toString) "true" }}
    - metadata:
        name: audit
      spec:
        accessModes:
          - {{ .Values.server.auditStorage.accessMode | default "ReadWriteOnce" }}
        resources:
          requests:
            storage: {{ .Values.server.auditStorage.size }}
        {{- if .Values.server.auditStorage.storageClass }}
        storageClassName: {{ .Values.server.auditStorage.storageClass }}
        {{- end }}
    {{ end }}
  {{ end }}
{{- end -}}

{{/*
Sets the affinity for pod placement when running in standalone and HA modes.
*/}}
{{- define "vault.affinity" -}}
  {{- if and (ne .mode "dev") (ne .Values.server.affinity "") }}
      affinity:
        {{ tpl .Values.server.affinity . | nindent 8 | trim }}
  {{ end }}
{{- end -}}

{{/*
Sets the container resources if the user has set any.
*/}}
{{- define "vault.resources" -}}
  {{- if .Values.server.resources -}}
          resources:
{{ toYaml .Values.server.resources | indent 12 }}
  {{ end }}
{{- end -}}

{{/*
Injects extra environment variables in the format key:value, if populated.
*/}}
{{- define "vault.extraEnvironmentVars" -}}
{{- if .extraEnvironmentVars -}}
{{- range $key, $value := .extraEnvironmentVars }}
- name: {{ $key }}
  value: {{ $value | quote }}
{{- end -}}
{{- end -}}
{{- end -}}
@@ -1,28 +0,0 @@
-# Headless service for Vault server DNS entries. This service should only
-# point to Vault servers. For access to an agent, one should assume that
-# the agent is installed locally on the node and the NODE_IP should be used.
-# If the node can't run a Vault agent, then this service can be used to
-# communicate directly to a server agent.
-{{- if (and (or (and (ne (.Values.serverHA.enabled | toString) "-") .Values.serverHA.enabled) (and (eq (.Values.serverHA.enabled | toString) "-") .Values.global.enabled)) (or (and (ne (.Values.ui.enabled | toString) "-") .Values.ui.enabled) (and (eq (.Values.ui.enabled | toString) "-") .Values.global.enabled)) (or (and (ne (.Values.ui.service.enabled | toString) "-") .Values.ui.service.enabled) (and (eq (.Values.ui.service.enabled | toString) "-") .Values.global.enabled))) }}
-apiVersion: v1
-kind: Service
-metadata:
-  name: {{ template "vault.fullname" . }}-ui
-  labels:
-    app: {{ template "vault.name" . }}
-    chart: {{ template "vault.chart" . }}
-    heritage: {{ .Release.Service }}
-    release: {{ .Release.Name }}
-spec:
-  selector:
-    app: {{ template "vault.name" . }}
-    release: "{{ .Release.Name }}"
-    component: server
-  ports:
-    - name: http
-      port: 80
-      targetPort: 8200
-  {{- if .Values.ui.service.type }}
-  type: {{ .Values.ui.service.type }}
-  {{- end }}
-{{- end }}
@@ -1,9 +1,10 @@
 # StatefulSet to run the actual vault server cluster.
-{{- if (or (and (ne (.Values.server.enabled | toString) "-") .Values.server.enabled) (and (eq (.Values.server.enabled | toString) "-") .Values.global.enabled)) }}
+{{ template "vault.mode" . }}
+{{- if and (eq (.Values.global.enabled | toString) "true") (ne .mode "dev") -}}
+{{ if or (ne .Values.server.standalone.config "") (ne .Values.server.ha.config "") -}}
 apiVersion: v1
 kind: ConfigMap
 metadata:
-  name: {{ template "vault.fullname" . }}-server-config
+  name: {{ template "vault.fullname" . }}-config
   labels:
     app: {{ template "vault.name" . }}
     chart: {{ template "vault.chart" . }}

@@ -11,5 +12,10 @@ metadata:
     release: {{ .Release.Name }}
 data:
   extraconfig-from-values.hcl: |-
-{{ tpl .Values.server.config . | indent 4 }}
+{{- if eq .mode "standalone" }}
+{{ tpl .Values.server.standalone.config . | nindent 4 | trim }}
+{{- else if eq .mode "ha" }}
+{{ tpl .Values.server.ha.config . | nindent 4 | trim }}
+{{ end }}
+{{- end }}
 {{- end }}
@@ -1,84 +0,0 @@
-# StatefulSet to run the actual vault server cluster.
-{{- if (or (and (ne (.Values.dev.enabled | toString) "-") .Values.dev.enabled) (and (eq (.Values.dev.enabled | toString) "-") .Values.global.enabled)) }}
-apiVersion: apps/v1
-kind: StatefulSet
-metadata:
-  name: {{ template "vault.fullname" . }}-dev-server
-  labels:
-    app: {{ template "vault.name" . }}
-    chart: {{ template "vault.chart" . }}
-    heritage: {{ .Release.Service }}
-    release: {{ .Release.Name }}
-spec:
-  serviceName: {{ template "vault.fullname" . }}-dev-server
-  podManagementPolicy: Parallel
-  replicas: 1
-  selector:
-    matchLabels:
-      app: {{ template "vault.name" . }}
-      chart: {{ template "vault.chart" . }}
-      release: {{ .Release.Name }}
-      component: server
-  template:
-    metadata:
-      labels:
-        app: {{ template "vault.name" . }}
-        chart: {{ template "vault.chart" . }}
-        release: {{ .Release.Name }}
-        component: server
-    spec:
-      terminationGracePeriodSeconds: 10
-      volumes:
-      containers:
-        - name: vault
-          securityContext:
-            fsGroup: 1000
-            privileged: true
-          image: "{{ default .Values.global.image .Values.dev.image }}"
-          env:
-            - name: POD_IP
-              valueFrom:
-                fieldRef:
-                  fieldPath: status.podIP
-            - name: NAMESPACE
-              valueFrom:
-                fieldRef:
-                  fieldPath: metadata.namespace
-            - name: VAULT_ADDR
-              value: "http://localhost:8200"
-          command:
-            - "vault"
-            - "server"
-            - "-dev"
-          volumeMounts:
-            {{- range .Values.dev.extraVolumes }}
-            - name: userconfig-{{ .name }}
-              readOnly: true
-              mountPath: /vault/userconfig/{{ .name }}
-            {{- end }}
-          lifecycle:
-            preStop:
-              exec:
-                command:
-                  - vault step-down
-          ports:
-            - containerPort: 8200
-              name: http
-          readinessProbe:
-            # Check status; unsealed vault servers return 0
-            # The exit code reflects the seal status:
-            #   0 - unsealed
-            #   1 - error
-            #   2 - sealed
-            exec:
-              command:
-                - "/bin/sh"
-                - "-ec"
-                - |
-                  vault status
-            failureThreshold: 2
-            initialDelaySeconds: 5
-            periodSeconds: 3
-            successThreshold: 1
-            timeoutSeconds: 5
-{{- end }}
@@ -1,10 +1,11 @@
 # PodDisruptionBudget to prevent degrading the server cluster through
 # voluntary cluster changes.
-{{- if (and .Values.serverHA.disruptionBudget.enabled (or (and (ne (.Values.serverHA.enabled | toString) "-") .Values.serverHA.enabled) (and (eq (.Values.serverHA.enabled | toString) "-") .Values.global.enabled))) }}
+{{ template "vault.mode" . }}
+{{- if and (and (eq (.Values.global.enabled | toString) "true") (eq .mode "ha")) (eq (.Values.server.ha.disruptionBudget.enabled | toString) "true") -}}
 apiVersion: policy/v1beta1
 kind: PodDisruptionBudget
 metadata:
-  name: {{ template "vault.fullname" . }}-ha-server
+  name: {{ template "vault.fullname" . }}
   labels:
     app: {{ template "vault.name" . }}
     chart: {{ template "vault.chart" . }}

@@ -17,4 +18,4 @@ spec:
     app: {{ template "vault.name" . }}
     release: "{{ .Release.Name }}"
     component: server
-{{- end }}
+{{- end -}}
@@ -1,15 +0,0 @@
-# StatefulSet to run the actual vault server cluster.
-{{- if (or (and (ne (.Values.serverHA.enabled | toString) "-") .Values.serverHA.enabled) (and (eq (.Values.serverHA.enabled | toString) "-") .Values.global.enabled)) }}
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: {{ template "vault.fullname" . }}-server-ha-config
-  labels:
-    app: {{ template "vault.name" . }}
-    chart: {{ template "vault.chart" . }}
-    heritage: {{ .Release.Service }}
-    release: {{ .Release.Name }}
-data:
-  extraconfig-from-values.hcl: |-
-{{ tpl .Values.serverHA.config . | indent 4 }}
-{{- end }}
@@ -1,35 +0,0 @@
-# Headless service for Vault server DNS entries. This service should only
-# point to Vault servers. For access to an agent, one should assume that
-# the agent is installed locally on the node and the NODE_IP should be used.
-# If the node can't run a Vault agent, then this service can be used to
-# communicate directly to a server agent.
-# TODO: verify for Vault
-{{- if (or (and (ne (.Values.serverHA.enabled | toString) "-") .Values.serverHA.enabled) (and (eq (.Values.serverHA.enabled | toString) "-") .Values.global.enabled)) }}
-apiVersion: v1
-kind: Service
-metadata:
-  name: {{ template "vault.fullname" . }}-ha-server
-  labels:
-    app: {{ template "vault.name" . }}
-    chart: {{ template "vault.chart" . }}
-    heritage: {{ .Release.Service }}
-    release: {{ .Release.Name }}
-  annotations:
-    # This must be set in addition to publishNotReadyAddresses due
-    # to an open issue where it may not work:
-    # https://github.com/kubernetes/kubernetes/issues/58662
-    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
-spec:
-  clusterIP: None
-  # We want the servers to become available even if they're not ready
-  # since this DNS is also used for join operations.
-  publishNotReadyAddresses: true
-  ports:
-    - name: http
-      port: 8200
-      targetPort: 8200
-  selector:
-    app: {{ template "vault.name" . }}
-    release: "{{ .Release.Name }}"
-    component: server
-{{- end }}
@ -1,122 +0,0 @@
|
|||
# StatefulSet to run the actual vault server cluster.
|
||||
{{- if (or (and (ne (.Values.serverHA.enabled | toString) "-") .Values.serverHA.enabled) (and (eq (.Values.serverHA.enabled | toString) "-") .Values.global.enabled)) }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ template "vault.fullname" . }}-ha-server
  labels:
    app: {{ template "vault.name" . }}
    chart: {{ template "vault.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
spec:
  serviceName: {{ template "vault.fullname" . }}-ha-server
  podManagementPolicy: Parallel
  replicas: {{ .Values.serverHA.replicas }}
  # TODO: add updatePartition option
  {{- if (gt (int .Values.serverHA.updatePartition) 0) }}
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: {{ .Values.serverHA.updatePartition }}
  {{- end }}
  selector:
    matchLabels:
      app: {{ template "vault.name" . }}
      chart: {{ template "vault.chart" . }}
      release: {{ .Release.Name }}
      component: server
  template:
    metadata:
      labels:
        app: {{ template "vault.name" . }}
        chart: {{ template "vault.chart" . }}
        release: {{ .Release.Name }}
        component: server
    spec:
      {{- if .Values.server.affinity }}
      affinity:
        {{ tpl .Values.server.affinity . | nindent 8 | trim }}
      {{- end }}
      terminationGracePeriodSeconds: 10
      volumes:
        - name: config
          configMap:
            name: {{ template "vault.fullname" . }}-server-ha-config
            defaultMode: 0755
        {{- range .Values.serverHA.extraVolumes }}
        - name: userconfig-{{ .name }}
          {{ .type }}:
            {{- if (eq .type "configMap") }}
            name: {{ .name }}
            {{- else if (eq .type "secret") }}
            secretName: {{ .name }}
            {{- end }}
        {{- end }}
      containers:
        - name: vault
          securityContext:
            fsGroup: 1000
            # TODO: confirm Vault needs this
            privileged: true
          image: "{{ default .Values.global.image .Values.serverHA.image }}"
          env:
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: VAULT_ADDR
              value: "http://localhost:8200"
          # TODO: review how swapping of POD_IP, HOST_IP values is done
          command:
            - "/bin/sh"
            - "-ec"
            - |
              export VAULT_CLUSTER_ADDR=http://${POD_IP}:8201

              sed -E "s/HOST_IP/${HOST_IP}/g" /vault/config/extraconfig-from-values.hcl > storageconfig.hcl
              sed -Ei "s/POD_IP/${POD_IP}/g" storageconfig.hcl

              vault server -config=storageconfig.hcl
          volumeMounts:
            - name: config
              mountPath: /vault/config
            {{- range .Values.serverHA.extraVolumes }}
            - name: userconfig-{{ .name }}
              readOnly: true
              mountPath: /vault/userconfig/{{ .name }}
            {{- end }}
          lifecycle:
            preStop:
              exec:
                command:
                  - vault operator step-down
          ports:
            - containerPort: 8200
              name: http
          readinessProbe:
            # Check status; unsealed vault servers return 0
            # The exit code reflects the seal status:
            #   0 - unsealed
            #   1 - error
            #   2 - sealed
            exec:
              command:
                - "/bin/sh"
                - "-ec"
                - |
                  vault status
            failureThreshold: 2
            initialDelaySeconds: 5
            periodSeconds: 3
            successThreshold: 1
            timeoutSeconds: 5
{{- end }}
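The entrypoint script in the template above renders the HOST_IP and POD_IP placeholders into the storage config with two sed passes before starting Vault. A minimal local sketch of that substitution, using /tmp paths and a sample HCL fragment as stand-ins for the real /vault/config mount:

```shell
# Sketch of the entrypoint's placeholder substitution; the /tmp paths and the
# sample HCL below are stand-ins for /vault/config/extraconfig-from-values.hcl.
HOST_IP=10.0.0.5
POD_IP=172.17.0.3

cat > /tmp/extraconfig-from-values.hcl <<'EOF'
storage "consul" {
  address = "HOST_IP:8500"
}
listener "tcp" {
  address = "POD_IP:8200"
}
EOF

# Same two-step rewrite as the container command: first HOST_IP into a new
# file, then POD_IP in place.
sed -E "s/HOST_IP/${HOST_IP}/g" /tmp/extraconfig-from-values.hcl > /tmp/storageconfig.hcl
sed -Ei "s/POD_IP/${POD_IP}/g" /tmp/storageconfig.hcl
cat /tmp/storageconfig.hcl
```

In the pod the rendered storageconfig.hcl is then passed straight to `vault server -config=storageconfig.hcl`.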
@@ -3,12 +3,11 @@
# the agent is installed locally on the node and the NODE_IP should be used.
# If the node can't run a Vault agent, then this service can be used to
# communicate directly to a server agent.
# TODO: verify for Vault
{{- if (or (and (ne (.Values.server.enabled | toString) "-") .Values.server.enabled) (and (eq (.Values.server.enabled | toString) "-") .Values.global.enabled)) }}
{{- if and (eq (.Values.server.service.enabled | toString) "true" ) (eq (.Values.global.enabled | toString) "true") }}
apiVersion: v1
kind: Service
metadata:
  name: {{ template "vault.fullname" . }}-server
  name: {{ template "vault.fullname" . }}
  labels:
    app: {{ template "vault.name" . }}
    chart: {{ template "vault.chart" . }}
@@ -28,6 +27,9 @@ spec:
    - name: http
      port: 8200
      targetPort: 8200
    - name: internal
      port: 8201
      targetPort: 8201
  selector:
    app: {{ template "vault.name" . }}
    release: "{{ .Release.Name }}"
13 templates/server-serviceaccount.yaml Normal file
@@ -0,0 +1,13 @@
{{ template "vault.mode" . }}
{{- if and (ne .mode "") (eq (.Values.global.enabled | toString) "true") }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ template "vault.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    app: {{ template "vault.name" . }}
    chart: {{ template "vault.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
{{ end }}
@@ -1,18 +1,21 @@
# StatefulSet to run the actual vault server cluster.
{{- if (or (and (ne (.Values.server.enabled | toString) "-") .Values.server.enabled) (and (eq (.Values.server.enabled | toString) "-") .Values.global.enabled)) }}
{{ template "vault.mode" . }}
{{- if and (ne .mode "") (eq (.Values.global.enabled | toString) "true") }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ template "vault.fullname" . }}-server
  name: {{ template "vault.fullname" . }}
  labels:
    app: {{ template "vault.name" . }}
    chart: {{ template "vault.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
spec:
  serviceName: {{ template "vault.fullname" . }}-server
  serviceName: {{ template "vault.fullname" . }}
  podManagementPolicy: Parallel
  replicas: {{ .Values.server.replicas }}
  replicas: {{ template "vault.replicas" . }}
  updateStrategy:
    type: OnDelete
  selector:
    matchLabels:
      app: {{ template "vault.name" . }}
@@ -27,78 +30,50 @@ spec:
        release: {{ .Release.Name }}
        component: server
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: {{ template "vault.name" . }}
                  release: "{{ .Release.Name }}"
                  component: server
              topologyKey: kubernetes.io/hostname
      {{ template "vault.affinity" . }}
      terminationGracePeriodSeconds: 10
      serviceAccountName: {{ template "vault.fullname" . }}
      securityContext:
        fsGroup: 1000
        fsGroup: {{ template "vault.fsgroup" . }}
      volumes:
        - name: config
          configMap:
            name: {{ template "vault.fullname" . }}-server-config
        {{- range .Values.server.extraVolumes }}
        - name: userconfig-{{ .name }}
          {{ .type }}:
            {{- if (eq .type "configMap") }}
            name: {{ .name }}
            {{- else if (eq .type "secret") }}
            secretName: {{ .name }}
            {{- end }}
        {{- end }}
      {{ template "vault.volumes" . }}
      containers:
        - name: vault
          {{ template "vault.resources" . }}
          securityContext:
            fsGroup: 1000
            # TODO: confirm Vault needs this
            fsGroup: {{ template "vault.fsgroup" . }}
            privileged: true
          image: "{{ default .Values.global.image .Values.server.image }}"
          image: "{{ .Values.global.image }}"
          command: {{ template "vault.command" . }}
          args: {{ template "vault.args" . }}
          env:
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: VAULT_ADDR
            - name: VAULT_ADDR
              value: "http://localhost:8200"
          command:
            - "/bin/sh"
            - "-ec"
            - |
              vault server -config=/vault/config/ \
              {{- range .Values.server.extraVolumes }}
              {{- if .load }}
                -config-dir=/vault/userconfig/{{ .name }} \
              {{- end }}
              {{- end }}

            - name: SKIP_CHOWN
              value: "true"
            {{ template "vault.envs" . }}
            {{- include "vault.extraEnvironmentVars" .Values.server | nindent 12 }}
          volumeMounts:
            - name: data
              mountPath: /vault/data
            - name: config
              mountPath: /vault/config
            {{- range .Values.server.extraVolumes }}
            - name: userconfig-{{ .name }}
              readOnly: true
              mountPath: /vault/userconfig/{{ .name }}
            {{- end }}
          {{ template "vault.mounts" . }}
          lifecycle:
            preStop:
              exec:
                command:
                  - vault step-down
                command: ["vault", "step-down"]
          ports:
            - containerPort: 8200
              name: http
            - containerPort: 8201
              name: internal
            - containerPort: 8202
              name: replication
          readinessProbe:
            # Check status; unsealed vault servers return 0
            # The exit code reflects the seal status:
@@ -106,26 +81,11 @@ spec:
            #   1 - error
            #   2 - sealed
            exec:
              command:
                - "/bin/sh"
                - "-ec"
                - |
                  vault status
              command: ["/bin/sh", "-ec", "vault status"]
            failureThreshold: 2
            initialDelaySeconds: 5
            periodSeconds: 3
            successThreshold: 1
            timeoutSeconds: 5
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: {{ .Values.server.storage }}
        {{- if .Values.server.storageClass }}
        storageClassName: {{ .Values.server.storageClass }}
        {{- end }}
{{- end }}
  {{ template "vault.volumeclaims" . }}
{{ end }}
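The readiness probe above keys off the documented `vault status` exit codes (0 unsealed, 1 error, 2 sealed). A sketch of how a probe-style wrapper maps those codes to a state, with `sh -c "exit N"` standing in for the real `vault status` invocation:

```shell
# Map `vault status`-style exit codes to a seal state. The inner `sh -c`
# is a hypothetical stand-in for invoking the real vault binary.
seal_state() {
  sh -c "exit $1"
  case $? in
    0) echo unsealed ;;
    2) echo sealed ;;
    *) echo error ;;
  esac
}

seal_state 0
seal_state 2
seal_state 1
```

Kubernetes only cares whether the probe command exits zero, which is why the chart can use `vault status` directly as the readiness check: sealed servers (exit 2) are marked not ready and removed from service endpoints.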
@@ -3,7 +3,7 @@
# the agent is installed locally on the node and the NODE_IP should be used.
# If the node can't run a Vault agent, then this service can be used to
# communicate directly to a server agent.
{{- if (and (or (and (ne (.Values.server.enabled | toString) "-") .Values.server.enabled) (and (eq (.Values.server.enabled | toString) "-") .Values.global.enabled)) (or (and (ne (.Values.ui.enabled | toString) "-") .Values.ui.enabled) (and (eq (.Values.ui.enabled | toString) "-") .Values.global.enabled)) (or (and (ne (.Values.ui.service.enabled | toString) "-") .Values.ui.service.enabled) (and (eq (.Values.ui.service.enabled | toString) "-") .Values.global.enabled))) }}
{{- if eq (.Values.ui.enabled | toString) "true" }}
apiVersion: v1
kind: Service
metadata:
@@ -22,7 +22,5 @@ spec:
    - name: http
      port: 80
      targetPort: 8200
  {{- if .Values.ui.service.type }}
  type: {{ .Values.ui.service.type }}
  {{- end }}
{{- end }}
  type: {{ .Values.ui.serviceType | default "ClusterIP" }}
{{- end -}}
@@ -3,6 +3,11 @@ name_prefix() {
  printf "vault"
}

# chart_dir returns the directory for the chart
chart_dir() {
  echo ${BATS_TEST_DIRNAME}/../..
}

# helm_install installs the vault chart. This will source overridable
# values from the "values.yaml" file in this directory. This can be set
# by CI or other environments to do test-specific overrides. Note that its
@@ -49,7 +54,7 @@ wait_for_running_consul() {
    ) | .metadata.name'
  }

  for i in $(seq 30); do
  for i in $(seq 60); do
    if [ -n "$(check ${POD_NAME})" ]; then
      echo "consul clients are ready."
      return
@@ -79,7 +84,37 @@ wait_for_running() {
    ) | .metadata.namespace + "/" + .metadata.name'
  }

  for i in $(seq 30); do
  for i in $(seq 60); do
    if [ -n "$(check ${POD_NAME})" ]; then
      echo "${POD_NAME} is ready."
      sleep 2
      return
    fi

    echo "Waiting for ${POD_NAME} to be ready..."
    sleep 2
  done

  echo "${POD_NAME} never became ready."
  exit 1
}

wait_for_ready() {
  POD_NAME=$1

  check() {
    # This requests the pod and checks whether the status is running
    # and the ready state is true. If so, it outputs the name. Otherwise
    # it outputs empty. Therefore, to check for success, check for nonzero
    # string length.
    kubectl get pods $1 -o json | \
      jq -r 'select(
        .status.phase == "Running" and
        ([ .status.conditions[] | select(.type == "Ready" and .status == "True") ] | length) == 1
      ) | .metadata.namespace + "/" + .metadata.name'
  }

  for i in $(seq 60); do
    if [ -n "$(check ${POD_NAME})" ]; then
      echo "${POD_NAME} is ready."
      sleep 2
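The helpers above poll a `check` function up to 60 times before giving up. The same retry pattern in isolation, with a hypothetical `check` that starts succeeding on the third attempt in place of the kubectl/jq pipeline:

```shell
# Generic poll-until-ready loop as used by wait_for_ready. `check` here is a
# hypothetical stand-in that outputs a value once the attempt count reaches 3;
# the loop treats any non-empty output as "ready", mirroring the helpers.
check() {
  [ "$1" -ge 3 ] && echo ready
}

wait_until_ready() {
  for i in $(seq 60); do
    if [ -n "$(check $i)" ]; then
      echo "became ready after $i polls"
      return 0
    fi
  done
  echo "never became ready" >&2
  return 1
}

wait_until_ready
```

Keying on "non-empty output means ready" lets the real helpers push all of the pod-state logic into a single jq filter and keep the loop itself trivial.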
56 test/acceptance/server-dev.bats Normal file
@ -0,0 +1,56 @@
|
|||
#!/usr/bin/env bats
|
||||
|
||||
load _helpers
|
||||
|
||||
@test "server/dev: testing deployment" {
|
||||
cd `chart_dir`
|
||||
helm install --name="$(name_prefix)" --set='server.dev.enabled=true' .
|
||||
wait_for_running $(name_prefix)-0
|
||||
|
||||
# Replicas
|
||||
local replicas=$(kubectl get statefulset "$(name_prefix)" --output json |
|
||||
jq -r '.spec.replicas')
|
||||
[ "${replicas}" == "1" ]
|
||||
|
||||
# Volume Mounts
|
||||
local volumeCount=$(kubectl get statefulset "$(name_prefix)" --output json |
|
||||
jq -r '.spec.template.spec.containers[0].volumeMounts | length')
|
||||
[ "${volumeCount}" == "0" ]
|
||||
|
||||
# Service
|
||||
local service=$(kubectl get service "$(name_prefix)" --output json |
|
||||
jq -r '.spec.clusterIP')
|
||||
[ "${service}" == "None" ]
|
||||
|
||||
local service=$(kubectl get service "$(name_prefix)" --output json |
|
||||
jq -r '.spec.type')
|
||||
[ "${service}" == "ClusterIP" ]
|
||||
|
||||
local ports=$(kubectl get service "$(name_prefix)" --output json |
|
||||
jq -r '.spec.ports | length')
|
||||
[ "${ports}" == "2" ]
|
||||
|
||||
local ports=$(kubectl get service "$(name_prefix)" --output json |
|
||||
jq -r '.spec.ports[0].port')
|
||||
[ "${ports}" == "8200" ]
|
||||
|
||||
local ports=$(kubectl get service "$(name_prefix)" --output json |
|
||||
jq -r '.spec.ports[1].port')
|
||||
[ "${ports}" == "8201" ]
|
||||
|
||||
# Sealed, not initialized
|
||||
local sealed_status=$(kubectl exec "$(name_prefix)-0" -- vault status -format=json |
|
||||
jq -r '.sealed' )
|
||||
[ "${sealed_status}" == "false" ]
|
||||
|
||||
local init_status=$(kubectl exec "$(name_prefix)-0" -- vault status -format=json |
|
||||
jq -r '.initialized')
|
||||
[ "${init_status}" == "true" ]
|
||||
}
|
||||
|
||||
# Clean up
|
||||
teardown() {
|
||||
echo "helm/pvc teardown"
|
||||
helm delete --purge vault
|
||||
kubectl delete --all pvc
|
||||
}
|
|
@@ -2,23 +2,96 @@

load _helpers

@test "server-ha: default, comes up sealed, 1 replica" {
  helm_install_ha
  wait_for_running $(name_prefix)-ha-server-0
@test "server/ha: testing deployment" {
  cd `chart_dir`

  helm install --name="$(name_prefix)" \
    --set='server.ha.enabled=true' .
  wait_for_running $(name_prefix)-0

  # Verify installed, sealed, and 1 replica
  local sealed_status=$(kubectl exec "$(name_prefix)-ha-server-0" -- vault status -format=json |
    jq .sealed )
  # Sealed, not initialized
  local sealed_status=$(kubectl exec "$(name_prefix)-0" -- vault status -format=json |
    jq -r '.sealed' )
  [ "${sealed_status}" == "true" ]

  local init_status=$(kubectl exec "$(name_prefix)-ha-server-0" -- vault status -format=json |
    jq .initialized)
  local init_status=$(kubectl exec "$(name_prefix)-0" -- vault status -format=json |
    jq -r '.initialized')
  [ "${init_status}" == "false" ]

  # Replicas
  local replicas=$(kubectl get statefulset "$(name_prefix)" --output json |
    jq -r '.spec.replicas')
  [ "${replicas}" == "3" ]

  # Volume Mounts
  local volumeCount=$(kubectl get statefulset "$(name_prefix)" --output json |
    jq -r '.spec.template.spec.containers[0].volumeMounts | length')
  [ "${volumeCount}" == "1" ]

  # Volumes
  local volumeCount=$(kubectl get statefulset "$(name_prefix)" --output json |
    jq -r '.spec.template.spec.volumes | length')
  [ "${volumeCount}" == "1" ]

  local volume=$(kubectl get statefulset "$(name_prefix)" --output json |
    jq -r '.spec.template.spec.volumes[0].configMap.name')
  [ "${volume}" == "$(name_prefix)-config" ]

  local privileged=$(kubectl get statefulset "$(name_prefix)" --output json |
    jq -r '.spec.template.spec.containers[0].securityContext.privileged')
  [ "${privileged}" == "true" ]

  # Service
  local service=$(kubectl get service "$(name_prefix)" --output json |
    jq -r '.spec.clusterIP')
  [ "${service}" == "None" ]

  local service=$(kubectl get service "$(name_prefix)" --output json |
    jq -r '.spec.type')
  [ "${service}" == "ClusterIP" ]

  local ports=$(kubectl get service "$(name_prefix)" --output json |
    jq -r '.spec.ports | length')
  [ "${ports}" == "2" ]

  local ports=$(kubectl get service "$(name_prefix)" --output json |
    jq -r '.spec.ports[0].port')
  [ "${ports}" == "8200" ]

  local ports=$(kubectl get service "$(name_prefix)" --output json |
    jq -r '.spec.ports[1].port')
  [ "${ports}" == "8201" ]

  # Vault Init
  local token=$(kubectl exec -ti "$(name_prefix)-0" -- \
    vault operator init -format=json -n 1 -t 1 | \
    jq -r '.unseal_keys_b64[0]')
  [ "${token}" != "" ]

  # Vault Unseal
  local pods=($(kubectl get pods --selector='app=vault' -o json | jq -r '.items[].metadata.name'))
  for pod in "${pods[@]}"
  do
    kubectl exec -ti ${pod} -- vault operator unseal ${token}
  done

  wait_for_ready "$(name_prefix)-0"

  # Unsealed and initialized
  local sealed_status=$(kubectl exec "$(name_prefix)-0" -- vault status -format=json |
    jq -r '.sealed' )
  [ "${sealed_status}" == "false" ]

  local init_status=$(kubectl exec "$(name_prefix)-0" -- vault status -format=json |
    jq -r '.initialized')
  [ "${init_status}" == "true" ]
}

# TODO: Auto unseal test

# setup a consul env
setup() {
  helm install https://github.com/hashicorp/consul-helm/archive/v0.3.0.tar.gz \
  helm install https://github.com/hashicorp/consul-helm/archive/v0.8.1.tar.gz \
    --name consul \
    --set 'ui.enabled=false' \
@@ -2,20 +2,105 @@

load _helpers

@test "server: default, comes up sealed" {
  helm_install
  wait_for_running $(name_prefix)-server-0
@test "server/standalone: testing deployment" {
  cd `chart_dir`
  helm install --name="$(name_prefix)" .
  wait_for_running $(name_prefix)-0

  # Verify installed, sealed, and 1 replica
  local sealed_status=$(kubectl exec "$(name_prefix)-server-0" -- vault status -format=json |
    jq .sealed )
  # Sealed, not initialized
  local sealed_status=$(kubectl exec "$(name_prefix)-0" -- vault status -format=json |
    jq -r '.sealed' )
  [ "${sealed_status}" == "true" ]

  local init_status=$(kubectl exec "$(name_prefix)-server-0" -- vault status -format=json |
    jq .initialized)

  local init_status=$(kubectl exec "$(name_prefix)-0" -- vault status -format=json |
    jq -r '.initialized')
  [ "${init_status}" == "false" ]

  # TODO check pv, pvc
  # Replicas
  local replicas=$(kubectl get statefulset "$(name_prefix)" --output json |
    jq -r '.spec.replicas')
  [ "${replicas}" == "1" ]

  # Affinity
  local affinity=$(kubectl get statefulset "$(name_prefix)" --output json |
    jq -r '.spec.template.spec.affinity')
  [ "${affinity}" != "null" ]

  # Volume Mounts
  local volumeCount=$(kubectl get statefulset "$(name_prefix)" --output json |
    jq -r '.spec.template.spec.containers[0].volumeMounts | length')
  [ "${volumeCount}" == "2" ]

  local mountName=$(kubectl get statefulset "$(name_prefix)" --output json |
    jq -r '.spec.template.spec.containers[0].volumeMounts[0].name')
  [ "${mountName}" == "data" ]

  local mountPath=$(kubectl get statefulset "$(name_prefix)" --output json |
    jq -r '.spec.template.spec.containers[0].volumeMounts[0].mountPath')
  [ "${mountPath}" == "/vault/data" ]

  # Volumes
  local volumeCount=$(kubectl get statefulset "$(name_prefix)" --output json |
    jq -r '.spec.template.spec.volumes | length')
  [ "${volumeCount}" == "1" ]

  local volume=$(kubectl get statefulset "$(name_prefix)" --output json |
    jq -r '.spec.template.spec.volumes[0].configMap.name')
  [ "${volume}" == "$(name_prefix)-config" ]

  # Security Context
  local fsGroup=$(kubectl get statefulset "$(name_prefix)" --output json |
    jq -r '.spec.template.spec.securityContext.fsGroup')
  [ "${fsGroup}" == "1000" ]

  local privileged=$(kubectl get statefulset "$(name_prefix)" --output json |
    jq -r '.spec.template.spec.containers[0].securityContext.privileged')
  [ "${privileged}" == "true" ]

  # Service
  local service=$(kubectl get service "$(name_prefix)" --output json |
    jq -r '.spec.clusterIP')
  [ "${service}" == "None" ]

  local service=$(kubectl get service "$(name_prefix)" --output json |
    jq -r '.spec.type')
  [ "${service}" == "ClusterIP" ]

  local ports=$(kubectl get service "$(name_prefix)" --output json |
    jq -r '.spec.ports | length')
  [ "${ports}" == "2" ]

  local ports=$(kubectl get service "$(name_prefix)" --output json |
    jq -r '.spec.ports[0].port')
  [ "${ports}" == "8200" ]

  local ports=$(kubectl get service "$(name_prefix)" --output json |
    jq -r '.spec.ports[1].port')
  [ "${ports}" == "8201" ]

  # Vault Init
  local token=$(kubectl exec -ti "$(name_prefix)-0" -- \
    vault operator init -format=json -n 1 -t 1 | \
    jq -r '.unseal_keys_b64[0]')
  [ "${token}" != "" ]

  # Vault Unseal
  local pods=($(kubectl get pods -o json | jq -r '.items[].metadata.name'))
  for pod in "${pods[@]}"
  do
    kubectl exec -ti ${pod} -- vault operator unseal ${token}
  done

  wait_for_ready "$(name_prefix)-0"

  # Unsealed and initialized
  local sealed_status=$(kubectl exec "$(name_prefix)-0" -- vault status -format=json |
    jq -r '.sealed' )
  [ "${sealed_status}" == "false" ]

  local init_status=$(kubectl exec "$(name_prefix)-0" -- vault status -format=json |
    jq -r '.initialized')
  [ "${init_status}" == "true" ]
}

# Clean up
@@ -14,13 +14,28 @@ resource "random_id" "suffix" {
}

data "google_container_engine_versions" "main" {
  zone           = "${var.zone}"
  location       = "${var.zone}"
  version_prefix = "1.12."
}

data "google_service_account" "gcpapi" {
  account_id = "${var.gcp_service_account}"
}

resource "google_kms_key_ring" "keyring" {
  name     = "vault-helm-unseal-kr"
  location = "global"
}

resource "google_kms_crypto_key" "vault-helm-unseal-key" {
  name     = "vault-helm-unseal-key"
  key_ring = "${google_kms_key_ring.keyring.self_link}"

  lifecycle {
    prevent_destroy = true
  }
}

resource "google_container_cluster" "cluster" {
  name    = "vault-helm-dev-${random_id.suffix.dec}"
  project = "${var.project}"

@@ -50,7 +65,7 @@ resource "google_container_cluster" "cluster" {
resource "null_resource" "kubectl" {
  count = "${var.init_cli ? 1 : 0 }"

  triggers {
  triggers = {
    cluster = "${google_container_cluster.cluster.id}"
  }

@@ -81,7 +96,7 @@ resource "null_resource" "helm" {
  count      = "${var.init_cli ? 1 : 0 }"
  depends_on = ["null_resource.kubectl"]

  triggers {
  triggers = {
    cluster = "${google_container_cluster.cluster.id}"
  }
@@ -1,5 +1,5 @@
variable "project" {
  default = "vault-helm-dev"
  default = "vault-helm-dev-246514"

  description = <<EOF
Google Cloud Project to launch resources in. This project must have GKE

@@ -19,7 +19,7 @@ variable "init_cli" {
}

variable "gcp_service_account" {
  default = "vault-helm-dev"
  default = "vault-terraform-helm-test"

  description = <<EOF
Service account used on the nodes to manage/use the API, specifically needed
@@ -5,28 +5,31 @@ load _helpers
@test "server/ConfigMap: enabled by default" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-config-configmap.yaml \
      -x templates/server-config-configmap.yaml \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "true" ]

  local actual=$(helm template \
      -x templates/server-config-configmap.yaml \
      --set 'server.ha.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "true" ]

  local actual=$(helm template \
      -x templates/server-config-configmap.yaml \
      --set 'server.standalone.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "true" ]
}

@test "server/ConfigMap: enable with global.enabled false" {
@test "server/ConfigMap: disabled by server.dev.enabled true" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-config-configmap.yaml \
      --set 'global.enabled=false' \
      --set 'server.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "true" ]
}

@test "server/ConfigMap: disable with server.enabled" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-config-configmap.yaml \
      --set 'server.enabled=false' \
      -x templates/server-config-configmap.yaml \
      --set 'server.dev.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]

@@ -42,12 +45,40 @@ load _helpers
  [ "${actual}" = "false" ]
}

@test "server/ConfigMap: extraConfig is set" {
@test "server/ConfigMap: standalone extraConfig is set" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-config-configmap.yaml \
      --set 'server.config="{\"hello\": \"world\"}"' \
      --set 'server.standalone.enabled=true' \
      --set 'server.standalone.config="{\"hello\": \"world\"}"' \
      . | tee /dev/stderr |
      yq '.data["extraconfig-from-values.hcl"] | match("world") | length' | tee /dev/stderr)
  [ ! -z "${actual}" ]

  local actual=$(helm template \
      -x templates/server-config-configmap.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.standalone.config="{\"foo\": \"bar\"}"' \
      . | tee /dev/stderr |
      yq '.data["extraconfig-from-values.hcl"] | match("bar") | length' | tee /dev/stderr)
  [ ! -z "${actual}" ]
}

@test "server/ConfigMap: ha extraConfig is set" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-config-configmap.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.ha.config="{\"hello\": \"world\"}"' \
      . | tee /dev/stderr |
      yq '.data["extraconfig-from-values.hcl"] | match("world") | length' | tee /dev/stderr)
  [ ! -z "${actual}" ]

  local actual=$(helm template \
      -x templates/server-config-configmap.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.ha.config="{\"foo\": \"bar\"}"' \
      . | tee /dev/stderr |
      yq '.data["extraconfig-from-values.hcl"] | match("bar") | length' | tee /dev/stderr)
  [ ! -z "${actual}" ]
}
276 test/unit/server-dev-statefulset.bats Executable file
@@ -0,0 +1,276 @@
#!/usr/bin/env bats

load _helpers

@test "server/dev-StatefulSet: enable with server.dev.enabled true" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.dev.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "true" ]
}

@test "server/dev-StatefulSet: disable with global.enabled" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'global.enabled=false' \
      --set 'server.dev.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]
}

@test "server/dev-StatefulSet: image defaults to global.image" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'global.image=foo' \
      --set 'server.dev.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].image' | tee /dev/stderr)
  [ "${actual}" = "foo" ]
}

#--------------------------------------------------------------------
# replicas

@test "server/dev-StatefulSet: default replicas" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.dev.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.replicas' | tee /dev/stderr)
  [ "${actual}" = "1" ]
}

@test "server/dev-StatefulSet: cant set replicas" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.dev.enabled=true' \
      --set 'server.dev.replicas=100' \
      . | tee /dev/stderr |
      yq -r '.spec.replicas' | tee /dev/stderr)
  [ "${actual}" = "1" ]
}

#--------------------------------------------------------------------
# updateStrategy

@test "server/dev-StatefulSet: updateStrategy" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.dev.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.updateStrategy.type' | tee /dev/stderr)
  [ "${actual}" = "OnDelete" ]
}

#--------------------------------------------------------------------
# resources

@test "server/dev-StatefulSet: default resources" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.dev.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].resources' | tee /dev/stderr)
  [ "${actual}" = "null" ]
}

@test "server/dev-StatefulSet: custom resources" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.dev.enabled=true' \
      --set 'server.resources.requests.memory=256Mi' \
      --set 'server.resources.requests.cpu=250m' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].resources.requests.memory' | tee /dev/stderr)
  [ "${actual}" = "256Mi" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.dev.enabled=true' \
      --set 'server.resources.limits.memory=256Mi' \
      --set 'server.resources.limits.cpu=250m' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].resources.limits.memory' | tee /dev/stderr)
  [ "${actual}" = "256Mi" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.dev.enabled=true' \
      --set 'server.resources.requests.cpu=250m' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].resources.requests.cpu' | tee /dev/stderr)
  [ "${actual}" = "250m" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.dev.enabled=true' \
      --set 'server.resources.limits.cpu=250m' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].resources.limits.cpu' | tee /dev/stderr)
  [ "${actual}" = "250m" ]
}

#--------------------------------------------------------------------
# extraVolumes

@test "server/dev-StatefulSet: adds extra volume" {
  cd `chart_dir`

  # Test that it defines it
  local object=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.dev.enabled=true' \
      --set 'server.extraVolumes[0].type=configMap' \
      --set 'server.extraVolumes[0].name=foo' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.volumes[] | select(.name == "userconfig-foo")' | tee /dev/stderr)

  local actual=$(echo $object |
      yq -r '.configMap.name' | tee /dev/stderr)
  [ "${actual}" = "foo" ]

  local actual=$(echo $object |
      yq -r '.configMap.secretName' | tee /dev/stderr)
  [ "${actual}" = "null" ]

  # Test that it mounts it
  local object=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.dev.enabled=true' \
      --set 'server.extraVolumes[0].type=configMap' \
      --set 'server.extraVolumes[0].name=foo' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].volumeMounts[] | select(.name == "userconfig-foo")' | tee /dev/stderr)

  local actual=$(echo $object |
      yq -r '.readOnly' | tee /dev/stderr)
  [ "${actual}" = "true" ]

  local actual=$(echo $object |
      yq -r '.mountPath' | tee /dev/stderr)
  [ "${actual}" = "/vault/userconfig/foo" ]
}

@test "server/dev-StatefulSet: adds extra secret volume" {
  cd `chart_dir`

  # Test that it defines it
  local object=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.dev.enabled=true' \
      --set 'server.extraVolumes[0].type=secret' \
      --set 'server.extraVolumes[0].name=foo' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.volumes[] | select(.name == "userconfig-foo")' | tee /dev/stderr)

  local actual=$(echo $object |
      yq -r '.secret.name' | tee /dev/stderr)
  [ "${actual}" = "null" ]

  local actual=$(echo $object |
      yq -r '.secret.secretName' | tee /dev/stderr)
  [ "${actual}" = "foo" ]
|
||||
|
||||
# Test that it mounts it
|
||||
local object=$(helm template \
|
||||
-x templates/server-statefulset.yaml \
|
||||
--set 'server.dev.enabled=true' \
|
||||
--set 'server.extraVolumes[0].type=configMap' \
|
||||
--set 'server.extraVolumes[0].name=foo' \
|
||||
. | tee /dev/stderr |
|
||||
yq -r '.spec.template.spec.containers[0].volumeMounts[] | select(.name == "userconfig-foo")' | tee /dev/stderr)
|
||||
|
||||
local actual=$(echo $object |
|
||||
yq -r '.readOnly' | tee /dev/stderr)
|
||||
[ "${actual}" = "true" ]
|
||||
|
||||
local actual=$(echo $object |
|
||||
yq -r '.mountPath' | tee /dev/stderr)
|
||||
[ "${actual}" = "/vault/userconfig/foo" ]
|
||||
}
|
||||
|
||||
@test "server/dev-StatefulSet: no storageClass on claim by default" {
|
||||
cd `chart_dir`
|
||||
local actual=$(helm template \
|
||||
-x templates/server-statefulset.yaml \
|
||||
--set 'server.dev.enabled=true' \
|
||||
. | tee /dev/stderr |
|
||||
yq -r '.spec.volumeClaimTemplates[0].spec.storageClassName' | tee /dev/stderr)
|
||||
[ "${actual}" = "null" ]
|
||||
}
|
||||
|
||||
#--------------------------------------------------------------------
|
||||
# extraEnvironmentVars
|
||||
|
||||
@test "server/dev-StatefulSet: set extraEnvironmentVars" {
|
||||
cd `chart_dir`
|
||||
local object=$(helm template \
|
||||
-x templates/server-statefulset.yaml \
|
||||
--set 'server.dev.enabled=true' \
|
||||
--set 'server.extraEnvironmentVars.FOO=bar' \
|
||||
--set 'server.extraEnvironmentVars.FOOBAR=foobar' \
|
||||
. | tee /dev/stderr |
|
||||
yq -r '.spec.template.spec.containers[0].env' | tee /dev/stderr)
|
||||
|
||||
local actual=$(echo $object |
|
||||
yq -r '.[5].name' | tee /dev/stderr)
|
||||
[ "${actual}" = "FOO" ]
|
||||
|
||||
local actual=$(echo $object |
|
||||
yq -r '.[5].value' | tee /dev/stderr)
|
||||
[ "${actual}" = "bar" ]
|
||||
|
||||
local actual=$(echo $object |
|
||||
yq -r '.[6].name' | tee /dev/stderr)
|
||||
[ "${actual}" = "FOOBAR" ]
|
||||
|
||||
local actual=$(echo $object |
|
||||
yq -r '.[6].value' | tee /dev/stderr)
|
||||
[ "${actual}" = "foobar" ]
|
||||
}
|
||||
|
||||
#--------------------------------------------------------------------
|
||||
# storage class
|
||||
|
||||
@test "server/dev-StatefulSet: can't set storageClass" {
|
||||
cd `chart_dir`
|
||||
local actual=$(helm template \
|
||||
-x templates/server-statefulset.yaml \
|
||||
--set 'server.dev.enabled=true' \
|
||||
--set 'server.dataStorage.enabled=true' \
|
||||
--set 'server.dataStorage.storageClass=foo' \
|
||||
. | tee /dev/stderr |
|
||||
yq -r '.spec.volumeClaimTemplates' | tee /dev/stderr)
|
||||
[ "${actual}" = "null" ]
|
||||
|
||||
local actual=$(helm template \
|
||||
-x templates/server-statefulset.yaml \
|
||||
--set 'server.dev.enabled=true' \
|
||||
--set 'server.auditStorage.enabled=true' \
|
||||
--set 'server.auditStorage.storageClass=foo' \
|
||||
. | tee /dev/stderr |
|
||||
yq -r '.spec.volumeClaimTemplates' | tee /dev/stderr)
|
||||
[ "${actual}" = "null" ]
|
||||
|
||||
local actual=$(helm template \
|
||||
-x templates/server-statefulset.yaml \
|
||||
--set 'server.dev.enabled=true' \
|
||||
--set 'server.auditStorage.enabled=true' \
|
||||
--set 'server.auditStorage.storageClass=foo' \
|
||||
--set 'server.auditStorage.enabled=true' \
|
||||
--set 'server.auditStorage.storageClass=foo' \
|
||||
. | tee /dev/stderr |
|
||||
yq -r '.spec.volumeClaimTemplates' | tee /dev/stderr)
|
||||
[ "${actual}" = "null" ]
|
||||
}
|
|
@@ -1,55 +0,0 @@
#!/usr/bin/env bats

load _helpers

@test "server/ConfigMap: enabled by default" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-ha-config-configmap.yaml \
      --set 'serverHA.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "true" ]
}

@test "server/ConfigMap: enable with global.enabled false" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-ha-config-configmap.yaml \
      --set 'global.enabled=false' \
      --set 'serverHA.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "true" ]
}

@test "server/ConfigMap: disable with serverHA.enabled" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-ha-config-configmap.yaml \
      --set 'serverHA.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]
}

@test "server/ConfigMap: disable with global.enabled" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-ha-config-configmap.yaml \
      --set 'global.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]
}

@test "server/ConfigMap: extraConfig is set" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-ha-config-configmap.yaml \
      --set 'serverHA.enabled=true' \
      --set 'serverHA.config="{\"hello\": \"world\"}"' \
      . | tee /dev/stderr |
      yq '.data["extraconfig-from-values.hcl"] | match("world") | length' | tee /dev/stderr)
  [ ! -z "${actual}" ]
}
@@ -6,18 +6,7 @@ load _helpers
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-disruptionbudget.yaml \
      --set 'serverHA.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "true" ]
}

@test "server/DisruptionBudget: enable with global.enabled false" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-disruptionbudget.yaml \
      --set 'global.enabled=false' \
      --set 'serverHA.enabled=true' \
      --set 'server.ha.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "true" ]

@@ -27,7 +16,8 @@ load _helpers
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-disruptionbudget.yaml \
      --set 'serverHA.enabled=false' \
      --set 'globa.enabled=false' \
      --set 'server.ha.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]

@@ -37,7 +27,7 @@ load _helpers
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-disruptionbudget.yaml \
      --set 'server.disruptionBudget.enabled=false' \
      --set 'server.ha.disruptionBudget.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]

@@ -57,8 +47,8 @@ load _helpers
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-disruptionbudget.yaml \
      --set 'serverHA.enabled=true' \
      --set 'serverHA.replicas=3' \
      --set 'server.ha.enabled=true' \
      --set 'server.ha.replicas=3' \
      . | tee /dev/stderr |
      yq '.spec.maxUnavailable' | tee /dev/stderr)
  [ "${actual}" = "0" ]
@@ -2,123 +2,164 @@

load _helpers

@test "server/StatefulSet: disabled by default" {
@test "server/ha-StatefulSet: enable with server.ha.enabled true" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-ha-statefulset.yaml \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]
}

@test "server/StatefulSet: enable with --set" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-ha-statefulset.yaml \
      --set 'serverHA.enabled=true' \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "true" ]
}

@test "server/StatefulSet: enable with global.enabled false" {
@test "server/ha-StatefulSet: disable with global.enabled" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-ha-statefulset.yaml \
      -x templates/server-statefulset.yaml \
      --set 'global.enabled=false' \
      --set 'serverHA.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "true" ]
}

@test "server/StatefulSet: disable with serverHA.enabled" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-ha-statefulset.yaml \
      --set 'serverHA.enabled=false' \
      --set 'server.ha.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]
}

@test "server/StatefulSet: disable with global.enabled" {
@test "server/ha-StatefulSet: image defaults to global.image" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-ha-statefulset.yaml \
      --set 'global.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]
}

@test "server/StatefulSet: image defaults to global.image" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-ha-statefulset.yaml \
      -x templates/server-statefulset.yaml \
      --set 'global.image=foo' \
      --set 'serverHA.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].image' | tee /dev/stderr)
  [ "${actual}" = "foo" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'global.image=foo' \
      --set 'server.ha.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].image' | tee /dev/stderr)
  [ "${actual}" = "foo" ]
}

@test "server/StatefulSet: image can be overridden with serverHA.image" {
#--------------------------------------------------------------------
# updateStrategy

@test "server/ha-StatefulSet: OnDelete updateStrategy" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-ha-statefulset.yaml \
      --set 'global.image=foo' \
      --set 'serverHA.enabled=true' \
      --set 'serverHA.image=bar' \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].image' | tee /dev/stderr)
  [ "${actual}" = "bar" ]
      yq -r '.spec.updateStrategy.type' | tee /dev/stderr)
  [ "${actual}" = "OnDelete" ]
}

##--------------------------------------------------------------------
## updateStrategy
#--------------------------------------------------------------------
# affinity

@test "server/StatefulSet: no updateStrategy when not updating" {
@test "server/ha-StatefulSet: default affinity" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-ha-statefulset.yaml \
      --set 'serverHA.enabled=true' \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.updateStrategy' | tee /dev/stderr)
      yq -r '.spec.template.spec.affinity' | tee /dev/stderr)
  [ "${actual}" != "null" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.affinity=' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.affinity' | tee /dev/stderr)
  [ "${actual}" = "null" ]
}

@test "server/StatefulSet: updateStrategy during update" {
#--------------------------------------------------------------------
# replicas

@test "server/ha-StatefulSet: default replicas" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-ha-statefulset.yaml \
      --set 'serverHA.enabled=true' \
      --set 'serverHA.updatePartition=2' \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.updateStrategy.type' | tee /dev/stderr)
  [ "${actual}" = "RollingUpdate" ]

  local actual=$(helm template \
      -x templates/server-ha-statefulset.yaml \
      --set 'serverHA.enabled=true' \
      --set 'serverHA.updatePartition=2' \
      . | tee /dev/stderr |
      yq -r '.spec.updateStrategy.rollingUpdate.partition' | tee /dev/stderr)
  [ "${actual}" = "2" ]
      yq -r '.spec.replicas' | tee /dev/stderr)
  [ "${actual}" = "3" ]
}

##--------------------------------------------------------------------
## extraVolumes

@test "server/StatefulSet: adds extra volume" {
@test "server/ha-StatefulSet: custom replicas" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.ha.replicas=10' \
      . | tee /dev/stderr |
      yq -r '.spec.replicas' | tee /dev/stderr)
  [ "${actual}" = "10" ]
}

#--------------------------------------------------------------------
# resources

@test "server/ha-StatefulSet: default resources" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].resources' | tee /dev/stderr)
  [ "${actual}" = "null" ]
}

@test "server/ha-StatefulSet: custom resources" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.resources.requests.memory=256Mi' \
      --set 'server.resources.requests.cpu=250m' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].resources.requests.memory' | tee /dev/stderr)
  [ "${actual}" = "256Mi" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.resources.limits.memory=256Mi' \
      --set 'server.resources.limits.cpu=250m' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].resources.limits.memory' | tee /dev/stderr)
  [ "${actual}" = "256Mi" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.resources.requests.cpu=250m' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].resources.requests.cpu' | tee /dev/stderr)
  [ "${actual}" = "250m" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.resources.limits.cpu=250m' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].resources.limits.cpu' | tee /dev/stderr)
  [ "${actual}" = "250m" ]
}

#--------------------------------------------------------------------
# extraVolumes

@test "server/ha-StatefulSet: adds extra volume" {
  cd `chart_dir`
  # Test that it defines it
  local object=$(helm template \
      -x templates/server-ha-statefulset.yaml \
      --set 'serverHA.enabled=true' \
      --set 'serverHA.extraVolumes[0].type=configMap' \
      --set 'serverHA.extraVolumes[0].name=foo' \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.extraVolumes[0].type=configMap' \
      --set 'server.extraVolumes[0].name=foo' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.volumes[] | select(.name == "userconfig-foo")' | tee /dev/stderr)

@@ -132,10 +173,10 @@ load _helpers

  # Test that it mounts it
  local object=$(helm template \
      -x templates/server-ha-statefulset.yaml \
      --set 'serverHA.enabled=true' \
      --set 'serverHA.extraVolumes[0].type=configMap' \
      --set 'serverHA.extraVolumes[0].name=foo' \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.extraVolumes[0].type=configMap' \
      --set 'server.extraVolumes[0].name=foo' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].volumeMounts[] | select(.name == "userconfig-foo")' | tee /dev/stderr)

@@ -146,27 +187,17 @@ load _helpers
  local actual=$(echo $object |
      yq -r '.mountPath' | tee /dev/stderr)
  [ "${actual}" = "/vault/userconfig/foo" ]

  # Doesn't load it
  local actual=$(helm template \
      -x templates/server-ha-statefulset.yaml \
      --set 'serverHA.enabled=true' \
      --set 'serverHA.extraVolumes[0].type=configMap' \
      --set 'serverHA.extraVolumes[0].name=foo' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].command | map(select(test("userconfig"))) | length' | tee /dev/stderr)
  [ "${actual}" = "0" ]
}

@test "server/StatefulSet: adds extra secret volume" {
@test "server/ha-StatefulSet: adds extra secret volume" {
  cd `chart_dir`

  # Test that it defines it
  local object=$(helm template \
      -x templates/server-ha-statefulset.yaml \
      --set 'serverHA.enabled=true' \
      --set 'serverHA.extraVolumes[0].type=secret' \
      --set 'serverHA.extraVolumes[0].name=foo' \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.extraVolumes[0].type=secret' \
      --set 'server.extraVolumes[0].name=foo' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.volumes[] | select(.name == "userconfig-foo")' | tee /dev/stderr)

@@ -180,10 +211,10 @@ load _helpers

  # Test that it mounts it
  local object=$(helm template \
      -x templates/server-ha-statefulset.yaml \
      --set 'serverHA.enabled=true' \
      --set 'serverHA.extraVolumes[0].type=configMap' \
      --set 'serverHA.extraVolumes[0].name=foo' \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.extraVolumes[0].type=configMap' \
      --set 'server.extraVolumes[0].name=foo' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].volumeMounts[] | select(.name == "userconfig-foo")' | tee /dev/stderr)

@@ -194,28 +225,115 @@ load _helpers
  local actual=$(echo $object |
      yq -r '.mountPath' | tee /dev/stderr)
  [ "${actual}" = "/vault/userconfig/foo" ]
}

  # Doesn't load it
  local actual=$(helm template \
      -x templates/server-ha-statefulset.yaml \
      --set 'serverHA.enabled=true' \
      --set 'serverHA.extraVolumes[0].type=configMap' \
      --set 'serverHA.extraVolumes[0].name=foo' \
#--------------------------------------------------------------------
# extraEnvironmentVars

@test "server/ha-StatefulSet: set extraEnvironmentVars" {
  cd `chart_dir`
  local object=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.extraEnvironmentVars.FOO=bar' \
      --set 'server.extraEnvironmentVars.FOOBAR=foobar' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].command | map(select(test("userconfig"))) | length' | tee /dev/stderr)
      yq -r '.spec.template.spec.containers[0].env' | tee /dev/stderr)

  local actual=$(echo $object |
      yq -r '.[4].name' | tee /dev/stderr)
  [ "${actual}" = "FOO" ]

  local actual=$(echo $object |
      yq -r '.[4].value' | tee /dev/stderr)
  [ "${actual}" = "bar" ]

  local actual=$(echo $object |
      yq -r '.[5].name' | tee /dev/stderr)
  [ "${actual}" = "FOOBAR" ]

  local actual=$(echo $object |
      yq -r '.[5].value' | tee /dev/stderr)
  [ "${actual}" = "foobar" ]
}

#--------------------------------------------------------------------
# storage class

@test "server/ha-StatefulSet: no storage by default" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates | length' | tee /dev/stderr)
  [ "${actual}" = "0" ]
}

# Extra volumes are not used for loading Vault configuration at this time
#@test "server/StatefulSet: adds loadable volume" {
#  cd `chart_dir`
#  local actual=$(helm template \
#      -x templates/server-ha-statefulset.yaml \
#      --set 'serverHA.enabled=true' \
#      --set 'serverHA.extraVolumes[0].type=configMap' \
#      --set 'serverHA.extraVolumes[0].name=foo' \
#      --set 'serverHA.extraVolumes[0].load=true' \
#      . | tee /dev/stderr |
#      yq -r '.spec.template.spec.containers[0].command | map(select(test("/vault/userconfig/foo"))) | length' | tee /dev/stderr)
#  [ "${actual}" = "1" ]
#}

@test "server/ha-StatefulSet: cant set data storage" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.dataStorage.enabled=true' \
      --set 'server.dataStorage.storageClass=foo' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates' | tee /dev/stderr)
  [ "${actual}" = "null" ]
}

@test "server/ha-StatefulSet: can set storageClass" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.dataStorage.enabled=false' \
      --set 'server.auditStorage.enabled=true' \
      --set 'server.auditStorage.storageClass=foo' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates[0].spec.storageClassName' | tee /dev/stderr)
  [ "${actual}" = "foo" ]
}

@test "server/ha-StatefulSet: can disable storage" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.auditStorage.enabled=false' \
      --set 'server.dataStorage.enabled=false' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates | length' | tee /dev/stderr)
  [ "${actual}" = "0" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.auditStorage.enabled=true' \
      --set 'server.dataStorage.enabled=false' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates | length' | tee /dev/stderr)
  [ "${actual}" = "1" ]
}

@test "server/ha-StatefulSet: no data storage" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.auditStorage.enabled=false' \
      --set 'server.dataStorage.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates | length' | tee /dev/stderr)
  [ "${actual}" = "0" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.auditStorage.enabled=true' \
      --set 'server.dataStorage.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates | length' | tee /dev/stderr)
  [ "${actual}" = "1" ]
}
@@ -2,41 +2,113 @@

load _helpers

@test "server/Service: enabled by default" {
@test "server/Service: service enabled by default" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-service.yaml \
      -x templates/server-service.yaml \
      --set 'server.dev.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "true" ]

  local actual=$(helm template \
      -x templates/server-service.yaml \
      --set 'server.ha.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "true" ]

  local actual=$(helm template \
      -x templates/server-service.yaml \
      --set 'server.standalone.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "true" ]
}

@test "server/Service: enable with global.enabled false" {

@test "server/Service: disable with global.enabled false" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-service.yaml \
      --set 'server.dev.enabled=true' \
      --set 'global.enabled=false' \
      --set 'server.enabled=true' \
      --set 'server.service.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "true" ]
}
  [ "${actual}" = "false" ]

@test "server/Service: disable with server.enabled" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-service.yaml \
      --set 'server.enabled=false' \
      --set 'server.ha.enabled=true' \
      --set 'global.enabled=false' \
      --set 'server.service.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]

  local actual=$(helm template \
      -x templates/server-service.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'global.enabled=false' \
      --set 'server.service.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]
}

@test "server/Service: disable with global.enabled" {
@test "server/Service: disable with server.service.enabled false" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-service.yaml \
      --set 'server.dev.enabled=true' \
      --set 'server.service.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]

  local actual=$(helm template \
      -x templates/server-service.yaml \
      --set 'server.ha.enabled=true' \
      --set 'server.service.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]

  local actual=$(helm template \
      -x templates/server-service.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.service.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]
}

@test "server/Service: disable with global.enabled false server.service.enabled false" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-service.yaml \
      --set 'server.dev.enabled=true' \
      --set 'global.enabled=false' \
      --set 'server.service.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]

  local actual=$(helm template \
      -x templates/server-service.yaml \
      --set 'server.ha.enabled=true' \
      --set 'global.enabled=false' \
      --set 'server.service.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]

  local actual=$(helm template \
      -x templates/server-service.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'global.enabled=false' \
      --set 'server.service.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]

@@ -47,13 +119,46 @@ load _helpers
@test "server/Service: tolerates unready endpoints" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-service.yaml \
      -x templates/server-service.yaml \
      --set 'server.dev.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.metadata.annotations["service.alpha.kubernetes.io/tolerate-unready-endpoints"]' | tee /dev/stderr)
  [ "${actual}" = "true" ]

  local actual=$(helm template \
      -x templates/server-service.yaml \
      --set 'server.ha.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.metadata.annotations["service.alpha.kubernetes.io/tolerate-unready-endpoints"]' | tee /dev/stderr)
  [ "${actual}" = "true" ]

  local actual=$(helm template \
      -x templates/server-service.yaml \
      -x templates/server-service.yaml \
      --set 'server.standalone.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.metadata.annotations["service.alpha.kubernetes.io/tolerate-unready-endpoints"]' | tee /dev/stderr)
  [ "${actual}" = "true" ]
}

@test "server/Service: publish not ready" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-service.yaml \
      --set 'server.dev.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.publishNotReadyAddresses' | tee /dev/stderr)
  [ "${actual}" = "true" ]

  local actual=$(helm template \
      -x templates/server-service.yaml \
      --set 'server.ha.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.publishNotReadyAddresses' | tee /dev/stderr)
  [ "${actual}" = "true" ]

  local actual=$(helm template \
      -x templates/server-service.yaml \
      --set 'server.standalone.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.publishNotReadyAddresses' | tee /dev/stderr)
  [ "${actual}" = "true" ]
@@ -2,7 +2,7 @@

load _helpers

-@test "server/StatefulSet: enabled by default" {
+@test "server/standalone-StatefulSet: default server.standalone.enabled" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
@@ -11,38 +11,28 @@ load _helpers
  [ "${actual}" = "true" ]
}

-@test "server/StatefulSet: enable with global.enabled false" {
+@test "server/standalone-StatefulSet: enable with server.standalone.enabled true" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'global.enabled=false' \
      --set 'server.enabled=true' \
      --set 'server.standalone.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "true" ]
}

@test "server/StatefulSet: disable with server.enabled" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]
}

-@test "server/StatefulSet: disable with global.enabled" {
+@test "server/standalone-StatefulSet: disable with global.enabled" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'global.enabled=false' \
      --set 'server.standalone.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]
}

-@test "server/StatefulSet: image defaults to global.image" {
+@test "server/standalone-StatefulSet: image defaults to global.image" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
@@ -50,36 +40,113 @@ load _helpers
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].image' | tee /dev/stderr)
  [ "${actual}" = "foo" ]
}

@test "server/StatefulSet: image can be overridden with server.image" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'global.image=foo' \
      --set 'server.image=bar' \
      --set 'server.standalone.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].image' | tee /dev/stderr)
-  [ "${actual}" = "bar" ]
+  [ "${actual}" = "foo" ]
}

#--------------------------------------------------------------------
# updateStrategy
-# Single-Server does not include an update strategy

-@test "server/StatefulSet: no updateStrategy" {
+@test "server/standalone-StatefulSet: OnDelete updateStrategy" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      . | tee /dev/stderr |
-      yq -r '.spec.updateStrategy' | tee /dev/stderr)
+      yq -r '.spec.updateStrategy.type' | tee /dev/stderr)
  [ "${actual}" = "OnDelete" ]
}

#--------------------------------------------------------------------
# replicas

@test "server/standalone-StatefulSet: default replicas" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.replicas' | tee /dev/stderr)
  [ "${actual}" = "1" ]
}

@test "server/standalone-StatefulSet: custom replicas" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.replicas=100' \
      . | tee /dev/stderr |
      yq -r '.spec.replicas' | tee /dev/stderr)
  [ "${actual}" = "1" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.standalone.replicas=100' \
      . | tee /dev/stderr |
      yq -r '.spec.replicas' | tee /dev/stderr)
  [ "${actual}" = "1" ]
}

#--------------------------------------------------------------------
# resources

@test "server/standalone-StatefulSet: default resources" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].resources' | tee /dev/stderr)
  [ "${actual}" = "null" ]
}

@test "server/standalone-StatefulSet: custom resources" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.resources.requests.memory=256Mi' \
      --set 'server.resources.requests.cpu=250m' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].resources.requests.memory' | tee /dev/stderr)
  [ "${actual}" = "256Mi" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.resources.limits.memory=256Mi' \
      --set 'server.resources.limits.cpu=250m' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].resources.limits.memory' | tee /dev/stderr)
  [ "${actual}" = "256Mi" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.resources.requests.cpu=250m' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].resources.requests.cpu' | tee /dev/stderr)
  [ "${actual}" = "250m" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.resources.limits.cpu=250m' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].resources.limits.cpu' | tee /dev/stderr)
  [ "${actual}" = "250m" ]
}

#--------------------------------------------------------------------
# extraVolumes
-@test "server/StatefulSet: adds extra volume" {
+@test "server/standalone-StatefulSet: adds extra volume" {
  cd `chart_dir`

  # Test that it defines it

@@ -98,6 +165,22 @@ load _helpers
      yq -r '.configMap.secretName' | tee /dev/stderr)
  [ "${actual}" = "null" ]

  local object=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.extraVolumes[0].type=configMap' \
      --set 'server.extraVolumes[0].name=foo' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.volumes[] | select(.name == "userconfig-foo")' | tee /dev/stderr)

  local actual=$(echo $object |
      yq -r '.configMap.name' | tee /dev/stderr)
  [ "${actual}" = "foo" ]

  local actual=$(echo $object |
      yq -r '.configMap.secretName' | tee /dev/stderr)
  [ "${actual}" = "null" ]

  # Test that it mounts it
  local object=$(helm template \
      -x templates/server-statefulset.yaml \
@@ -114,17 +197,24 @@ load _helpers
      yq -r '.mountPath' | tee /dev/stderr)
  [ "${actual}" = "/vault/userconfig/foo" ]

-  # Doesn't load it
-  local actual=$(helm template \
+  local object=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.extraVolumes[0].type=configMap' \
      --set 'server.extraVolumes[0].name=foo' \
      . | tee /dev/stderr |
-      yq -r '.spec.template.spec.containers[0].command | map(select(test("userconfig"))) | length' | tee /dev/stderr)
-  [ "${actual}" = "0" ]
+      yq -r '.spec.template.spec.containers[0].volumeMounts[] | select(.name == "userconfig-foo")' | tee /dev/stderr)

+  local actual=$(echo $object |
+      yq -r '.readOnly' | tee /dev/stderr)
+  [ "${actual}" = "true" ]
+
+  local actual=$(echo $object |
+      yq -r '.mountPath' | tee /dev/stderr)
+  [ "${actual}" = "/vault/userconfig/foo" ]
}

-@test "server/StatefulSet: adds extra secret volume" {
+@test "server/standalone-StatefulSet: adds extra secret volume" {
  cd `chart_dir`

  # Test that it defines it

@@ -143,6 +233,22 @@ load _helpers
      yq -r '.secret.secretName' | tee /dev/stderr)
  [ "${actual}" = "foo" ]

  local object=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.extraVolumes[0].type=secret' \
      --set 'server.extraVolumes[0].name=foo' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.volumes[] | select(.name == "userconfig-foo")' | tee /dev/stderr)

  local actual=$(echo $object |
      yq -r '.secret.name' | tee /dev/stderr)
  [ "${actual}" = "null" ]

  local actual=$(echo $object |
      yq -r '.secret.secretName' | tee /dev/stderr)
  [ "${actual}" = "foo" ]

  # Test that it mounts it
  local object=$(helm template \
      -x templates/server-statefulset.yaml \
@@ -159,47 +265,210 @@ load _helpers
      yq -r '.mountPath' | tee /dev/stderr)
  [ "${actual}" = "/vault/userconfig/foo" ]

-  # Doesn't load it
-  local actual=$(helm template \
+  local object=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.extraVolumes[0].type=configMap' \
      --set 'server.extraVolumes[0].name=foo' \
      . | tee /dev/stderr |
-      yq -r '.spec.template.spec.containers[0].command | map(select(test("userconfig"))) | length' | tee /dev/stderr)
-  [ "${actual}" = "0" ]
-}
+      yq -r '.spec.template.spec.containers[0].volumeMounts[] | select(.name == "userconfig-foo")' | tee /dev/stderr)

-@test "server/StatefulSet: adds loadable volume" {
-  cd `chart_dir`
-  local actual=$(helm template \
-      -x templates/server-statefulset.yaml \
-      --set 'server.extraVolumes[0].type=configMap' \
-      --set 'server.extraVolumes[0].name=foo' \
-      --set 'server.extraVolumes[0].load=true' \
-      . | tee /dev/stderr |
-      yq -r '.spec.template.spec.containers[0].command | map(select(test("/vault/userconfig/foo"))) | length' | tee /dev/stderr)
-  [ "${actual}" = "1" ]
+  local actual=$(echo $object |
+      yq -r '.readOnly' | tee /dev/stderr)
+  [ "${actual}" = "true" ]
+
+  local actual=$(echo $object |
+      yq -r '.mountPath' | tee /dev/stderr)
+  [ "${actual}" = "/vault/userconfig/foo" ]
}

#--------------------------------------------------------------------
-# updateStrategy
+# extraEnvironmentVars

-@test "server/StatefulSet: no storageClass on claim by default" {
+@test "server/standalone-StatefulSet: set extraEnvironmentVars" {
  cd `chart_dir`
  local object=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.extraEnvironmentVars.FOO=bar' \
      --set 'server.extraEnvironmentVars.FOOBAR=foobar' \
      . | tee /dev/stderr |
      yq -r '.spec.template.spec.containers[0].env' | tee /dev/stderr)

  local actual=$(echo $object |
      yq -r '.[4].name' | tee /dev/stderr)
  [ "${actual}" = "FOO" ]

  local actual=$(echo $object |
      yq -r '.[4].value' | tee /dev/stderr)
  [ "${actual}" = "bar" ]

  local actual=$(echo $object |
      yq -r '.[5].name' | tee /dev/stderr)
  [ "${actual}" = "FOOBAR" ]

  local actual=$(echo $object |
      yq -r '.[5].value' | tee /dev/stderr)
  [ "${actual}" = "foobar" ]
}

#--------------------------------------------------------------------
# storage class

@test "server/standalone-StatefulSet: storageClass on claim by default" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates[0].spec.storageClassName' | tee /dev/stderr)
  [ "${actual}" = "null" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates[0].spec.storageClassName' | tee /dev/stderr)
  [ "${actual}" = "null" ]
}

-@test "server/StatefulSet: can set storageClass" {
+@test "server/standalone-StatefulSet: can set storageClass" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
-      --set 'server.storageClass=foo' \
+      --set 'server.dataStorage.enabled=true' \
+      --set 'server.dataStorage.storageClass=foo' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates[0].spec.storageClassName' | tee /dev/stderr)
  [ "${actual}" = "foo" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.dataStorage.enabled=false' \
      --set 'server.auditStorage.enabled=true' \
      --set 'server.auditStorage.storageClass=foo' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates[0].spec.storageClassName' | tee /dev/stderr)
  [ "${actual}" = "foo" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.auditStorage.enabled=true' \
      --set 'server.auditStorage.storageClass=foo' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates[1].spec.storageClassName' | tee /dev/stderr)
  [ "${actual}" = "foo" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.auditStorage.enabled=true' \
      --set 'server.dataStorage.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates | length' | tee /dev/stderr)
  [ "${actual}" = "2" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.auditStorage.enabled=true' \
      --set 'server.dataStorage.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates | length' | tee /dev/stderr)
  [ "${actual}" = "2" ]
}

@test "server/standalone-StatefulSet: can disable storage" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.auditStorage.enabled=false' \
      --set 'server.dataStorage.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates | length' | tee /dev/stderr)
  [ "${actual}" = "1" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.auditStorage.enabled=true' \
      --set 'server.dataStorage.enabled=false' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates | length' | tee /dev/stderr)
  [ "${actual}" = "1" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.auditStorage.enabled=false' \
      --set 'server.dataStorage.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates | length' | tee /dev/stderr)
  [ "${actual}" = "1" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.auditStorage.enabled=true' \
      --set 'server.dataStorage.enabled=false' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates | length' | tee /dev/stderr)
  [ "${actual}" = "1" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.auditStorage.enabled=true' \
      --set 'server.dataStorage.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates | length' | tee /dev/stderr)
  [ "${actual}" = "2" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.auditStorage.enabled=true' \
      --set 'server.dataStorage.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates | length' | tee /dev/stderr)
  [ "${actual}" = "2" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.auditStorage.enabled=false' \
      --set 'server.dataStorage.enabled=false' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates | length' | tee /dev/stderr)
  [ "${actual}" = "0" ]

  local actual=$(helm template \
      -x templates/server-statefulset.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'server.auditStorage.enabled=false' \
      --set 'server.dataStorage.enabled=false' \
      . | tee /dev/stderr |
      yq -r '.spec.volumeClaimTemplates | length' | tee /dev/stderr)
  [ "${actual}" = "0" ]
}
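Every test above follows the same render-and-assert pattern: `helm template` renders a single template with `--set` overrides, `yq -r` extracts one field, and a bare `[ ... ]` test asserts on it. Here is a minimal sketch of that pattern; the canned `manifest` string and the `sed` extraction are illustrative stand-ins for `helm template` output and `yq`, so it runs without helm, yq, or a chart checkout:

```shell
#!/bin/sh
# Render-and-assert sketch. The canned manifest below stands in for
# `helm template -x templates/server-statefulset.yaml ...` output, and
# sed stands in for `yq -r`; both substitutions are illustrative only.
manifest='spec:
  updateStrategy:
    type: OnDelete
  replicas: 1'

# Extract a scalar field, the way the tests use yq -r '.spec....'.
actual=$(printf '%s\n' "$manifest" | sed -n 's/^ *type: *//p')
[ "$actual" = "OnDelete" ] || exit 1

actual=$(printf '%s\n' "$manifest" | sed -n 's/^ *replicas: *//p')
[ "$actual" = "1" ] || exit 1

echo "ok"
```

In the real tests, `chart_dir` (from `_helpers`) resolves the chart root and `tee /dev/stderr` keeps the rendered manifest visible in bats output when an assertion fails.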
@@ -5,29 +5,22 @@ load _helpers

@test "ui/Service: disabled by default" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/ui-service.yaml \
      --set 'server.dev.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]
}

@test "ui/Service: enable with global.enabled false" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/ui-service.yaml \
      --set 'global.enabled=false' \
      --set 'server.enabled=true' \
      --set 'ui.enabled=true' \
      --set 'server.ha.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
-  [ "${actual}" = "true" ]
+  [ "${actual}" = "false" ]
}

@test "ui/Service: disable with server.enabled" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/ui-service.yaml \
-      --set 'server.enabled=false' \
+      --set 'server.standalone.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]

@@ -37,6 +30,23 @@ load _helpers
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/ui-service.yaml \
      --set 'server.dev.enabled=true' \
      --set 'ui.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]

  local actual=$(helm template \
      -x templates/ui-service.yaml \
      --set 'server.ha.enabled=true' \
      --set 'ui.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]

  local actual=$(helm template \
      -x templates/ui-service.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'ui.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)

@@ -47,47 +57,80 @@ load _helpers
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/ui-service.yaml \
      --set 'server.dev.enabled=true' \
      --set 'ui.service.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]

  local actual=$(helm template \
      -x templates/ui-service.yaml \
      --set 'server.ha.enabled=true' \
      --set 'ui.service.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]

  local actual=$(helm template \
      -x templates/ui-service.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'ui.service.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]
}

@test "ui/Service: disable with global.enabled" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/ui-service.yaml \
      --set 'global.enabled=false' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]
}

@test "ui/Service: disable with global.enabled and server.enabled on" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/ui-service.yaml \
      --set 'global.enabled=false' \
      --set 'server.enabled=true' \
      . | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]
}

-@test "ui/Service: no type by default" {
+@test "ui/Service: ClusterIP type by default" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/ui-service.yaml \
      --set 'server.dev.enabled=true' \
      --set 'ui.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.type' | tee /dev/stderr)
-  [ "${actual}" = "null" ]
+  [ "${actual}" = "ClusterIP" ]

  local actual=$(helm template \
      -x templates/ui-service.yaml \
      --set 'server.ha.enabled=true' \
      --set 'ui.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.type' | tee /dev/stderr)
  [ "${actual}" = "ClusterIP" ]

  local actual=$(helm template \
      -x templates/ui-service.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'ui.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.type' | tee /dev/stderr)
  [ "${actual}" = "ClusterIP" ]
}

@test "ui/Service: specified type" {
  cd `chart_dir`
  local actual=$(helm template \
      -x templates/ui-service.yaml \
-      --set 'ui.service.type=LoadBalancer' \
+      --set 'server.dev.enabled=true' \
+      --set 'ui.serviceType=LoadBalancer' \
      --set 'ui.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.type' | tee /dev/stderr)
  [ "${actual}" = "LoadBalancer" ]

  local actual=$(helm template \
      -x templates/ui-service.yaml \
      --set 'server.ha.enabled=true' \
      --set 'ui.serviceType=LoadBalancer' \
      --set 'ui.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.type' | tee /dev/stderr)
  [ "${actual}" = "LoadBalancer" ]

  local actual=$(helm template \
      -x templates/ui-service.yaml \
      --set 'server.standalone.enabled=true' \
      --set 'ui.serviceType=LoadBalancer' \
      --set 'ui.enabled=true' \
      . | tee /dev/stderr |
      yq -r '.spec.type' | tee /dev/stderr)
268 values.yaml

@@ -1,65 +1,32 @@
# Available parameters and their default values for the Vault chart.

# Server, when enabled, configures a server cluster to run. This should
# be disabled if you plan on connecting to a Vault cluster external to
# the Kube cluster.

global:
  # enabled is the master enabled switch. Setting this to true or false
  # will enable or disable all the components within this chart by default.
  # Each component can be overridden using the component-specific "enabled"
  # value.
  enabled: true

-  # Image is the name (and tag) of the Vault Docker image for clients and
-  # servers below. This can be overridden per component.
-  #image: "vault:0.11.1"
-  image: "vault:1.0.0-beta2"
+  # Image is the name (and tag) of the Vault Docker image.
+  image: "vault:1.2.0-beta2"

server:
  enabled: "-"
  image: null
  replicas: 1

  # storage and storageClass are the settings for configuring stateful
  # storage for the server pods. storage should be set to the disk size of
  # the attached volume. storageClass is the class of storage which defaults
  # to null (the Kube cluster will pick the default).
  storage: 10Gi
  storageClass: null

  # Resource requests, limits, etc. for the server cluster placement. This
  # should map directly to the value of the resources field for a PodSpec.
  # By default no direct resource request is made.
-  resources: {}
+  resources:
  # resources:
  #   requests:
  #     memory: 256Mi
  #     cpu: 250m
  #   limits:
  #     memory: 256Mi
  #     cpu: 250m

  # config is a raw string of default configuration when using a Stateful
  # deployment. Default is to use a PersistentVolumeClaim mounted at /vault/data
  # and store data there. This is only used when using a Replica count of 1, and
  # using a stateful set. This should be HCL.
  config: |
    ui = true
    listener "tcp" {
      tls_disable = 1
      address = "0.0.0.0:8200"
    }

    #api_addr = "POD_IP:8201"

    storage "file" {
      path = "/vault/data"
    }

  # Example configuration for using auto-unseal, using Google Cloud KMS. The
  # GKMS keys must already exist, and the cluster must have a service account
  # that is authorized to access GCP KMS.
  # seal "gcpckms" {
  #   project    = "vault-helm-dev"
  #   region     = "global"
  #   key_ring   = "vault-helm"
  #   crypto_key = "vault-init"
  # }

  # extraEnvironmentVars is a list of extra environment variables to set with
  # the stateful set. These could be used to include variables required for
  # auto-unseal.
  extraEnvironmentVars: {}
  # GOOGLE_REGION: global,
  # GOOGLE_PROJECT: myproject,
  # GOOGLE_CREDENTIALS: /vault/userconfig/myproject/myproject-creds.json

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path `/vault/userconfig/<name>/`. The value below is
@@ -69,74 +36,9 @@ server:
  #   name: my-secret
  #   load: false # if true, will add to `-config` to load by Vault

serverHA:
  enabled: false
  image: null
  replicas: 1

  # storage and storageClass are the settings for configuring stateful
  # storage for the server pods. storage should be set to the disk size of
  # the attached volume. storageClass is the class of storage which defaults
  # to null (the Kube cluster will pick the default).
  storage: 2Gi
  storageClass: null

  # Resource requests, limits, etc. for the server cluster placement. This
  # should map directly to the value of the resources field for a PodSpec.
  # By default no direct resource request is made.
  resources: {}

  # updatePartition is used to control a careful rolling update of Vault
  # servers. This should be done particularly when changing the version
  # of Vault. Please refer to the documentation for more information.
  updatePartition: 0

  # config is a raw string of default configuration when using a Stateful
  # deployment. Default is to use a PersistentVolumeClaim mounted at /vault/data
  # and store data there. This is only used when using a Replica count of 1, and
  # using a stateful set. This should be HCL.
  config: |
    ui = true
    listener "tcp" {
      tls_disable = 1
      address = "0.0.0.0:8200"
      cluster_address = "POD_IP:8201"
    }

    storage "consul" {
      path = "vault"
      address = "HOST_IP:8500"
    }

  # Example configuration for using auto-unseal, using Google Cloud KMS. The
  # GKMS keys must already exist, and the cluster must have a service account
  # that is authorized to access GCP KMS.
  # seal "gcpckms" {
  #   project    = "vault-helm-dev"
  #   region     = "global"
  #   key_ring   = "vault-helm"
  #   crypto_key = "vault-init"
  # }

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path `/vault/userconfig/<name>/`. The value below is
  # an array of objects, examples are shown below.
  extraVolumes: []
  # - type: secret (or "configMap")
  #   name: my-secret
  #   load: false # if true, will add to `-config` to load by Vault

  disruptionBudget:
    enabled: true

    # maxUnavailable will default to (n/2)-1 where n is the number of
    # replicas. If you'd like a custom value, you can specify an override here.
    maxUnavailable: null

  # Affinity Settings
  # Commenting out or setting the affinity variable as empty will allow
  # deployment to single node services such as Minikube
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
@ -146,30 +48,128 @@ serverHA:
|
|||
release: "{{ .Release.Name }}"
|
||||
component: server
|
||||
topologyKey: kubernetes.io/hostname
|
||||
|
||||
# Enables a headless service to be used by the Vault Statefulset
|
||||
service:
|
||||
enabled: true
|
||||
|
||||
# This configures the Vault Statefulset to create a PVC for data
|
||||
# storage when using the file backend.
|
||||
# See https://www.vaultproject.io/docs/audit/index.html to know more
|
||||
dataStorage:
|
||||
enabled: true
|
||||
# Size of the PVC created
|
||||
size: 10Gi
|
||||
# Name of the storage class to use. If null it will use the
|
||||
# configured default Storage Class.
|
||||
storageClass: null
|
||||
# Access Mode of the storage device being used for the PVC
|
||||
accessMode: ReadWriteOnce
|
||||
|
||||
# This configures the Vault StatefulSet to create a PVC for audit
# logs. Once Vault is deployed, initialized, and unsealed, Vault must
# be configured to use this for audit logs. The volume will be mounted
# at /vault/audit.
# See https://www.vaultproject.io/docs/audit/index.html for more information.
auditStorage:
  enabled: false
  # Size of the PVC created
  size: 10Gi
  # Name of the storage class to use. If null, the cluster's
  # default StorageClass is used.
  storageClass: null
  # Access mode of the storage device backing the PVC
  accessMode: ReadWriteOnce

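As a sketch, persisting audit logs means enabling the volume here and then, after install, enabling Vault's file audit device against the mount (the log file name below is illustrative):

```yaml
auditStorage:
  enabled: true
  size: 10Gi
# Once Vault is initialized and unsealed, enable the file audit device
# against the mounted volume, e.g.:
#   vault audit enable file file_path=/vault/audit/vault_audit.log
```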
# Run Vault in "dev" mode. This requires no further setup, no state management,
# and no initialization. This is useful for experimenting with Vault without
# needing to unseal, store keys, etc. All data is lost on restart - do not
# use dev mode for anything other than experimenting.
# See https://www.vaultproject.io/docs/concepts/dev-server.html for more information.
dev:
  enabled: false

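A minimal sketch of an override file turning dev mode on, assuming the flat key layout shown above:

```yaml
# Dev mode: a single in-memory, automatically unsealed server.
# All data is lost on restart; never use this in production.
dev:
  enabled: true
```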
# Run Vault in "standalone" mode. This is the default mode that will deploy if
# no arguments are given to helm. This requires a PVC for data storage, using
# the "file" backend. This mode is not highly available and should not be scaled
# past a single replica.
standalone:
  enabled: "-"

# config is a raw string of default configuration when using a stateful
# deployment. The default is to use a PersistentVolumeClaim mounted at
# /vault/data and store data there. This is only used with a replica
# count of 1 and a StatefulSet. This should be HCL.
config: |
  ui = true
  api_addr = "http://POD_IP:8200"
  listener "tcp" {
    tls_disable = 1
    address = "0.0.0.0:8200"
  }
  storage "file" {
    path = "/vault/data"
  }

  # Example configuration for using auto-unseal with Google Cloud KMS. The
  # GKMS keys must already exist, and the cluster must have a service account
  # that is authorized to access GCP KMS.
  #seal "gcpckms" {
  #   project    = "vault-helm-dev"
  #   region     = "global"
  #   key_ring   = "vault-helm-unseal-kr"
  #   crypto_key = "vault-helm-unseal-key"
  #}

# Run Vault in "HA" mode. There are no storage requirements unless audit log
# persistence is required. In HA mode Vault will configure itself to use Consul
# for its storage backend. The default configuration provided will work with the
# Consul Helm project out of the box. It is possible to manually configure Vault
# to use a different HA backend.
ha:
  enabled: false
  replicas: 3

# config is a raw string of default configuration when using a stateful
# deployment. The default is to use Consul as the HA storage backend.
# This should be HCL.
config: |
  ui = true
  api_addr = "http://POD_IP:8200"
  listener "tcp" {
    tls_disable = 1
    address = "0.0.0.0:8200"
  }
  storage "consul" {
    path = "vault"
    address = "HOST_IP:8500"
  }

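To use an existing Consul cluster instead of a co-installed Consul Helm release, the storage stanza in the config string can be pointed elsewhere; this is a sketch, and the Consul address below is hypothetical:

```yaml
config: |
  ui = true
  api_addr = "http://POD_IP:8200"
  listener "tcp" {
    tls_disable = 1
    address = "0.0.0.0:8200"
  }
  # Hypothetical: an external Consul cluster reachable from the Vault pods
  storage "consul" {
    path    = "vault"
    address = "consul.example.internal:8500"
  }
```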
  # Example configuration for using auto-unseal with Google Cloud KMS. The
  # GKMS keys must already exist, and the cluster must have a service account
  # that is authorized to access GCP KMS.
  #seal "gcpckms" {
  #   project    = "vault-helm-dev-246514"
  #   region     = "global"
  #   key_ring   = "vault-helm-unseal-kr"
  #   crypto_key = "vault-helm-unseal-key"
  #}

# A disruption budget limits the number of pods of a replicated application
# that are down simultaneously from voluntary disruptions.
disruptionBudget:
  enabled: true

  # maxUnavailable will default to (n/2)-1 where n is the number of
  # replicas. If you'd like a custom value, you can specify an override here.
  maxUnavailable: null

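For example, an override of the computed default that allows at most one Vault pod to be voluntarily disrupted at a time might look like:

```yaml
disruptionBudget:
  enabled: true
  # Explicit budget instead of the (n/2)-1 default
  maxUnavailable: 1
```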
# Vault UI
ui:
  # True if you want to enable the Vault UI. The UI will run only
  # on the server nodes. This makes UI access via the service below (if
  # enabled) predictable rather than "any node" if you're running Vault
  # clients as well.
  #
  # This value is used for both single-server and HA mode setups.
  enabled: false

  # True if you want to create a Service entry for the Vault UI.
  #
  # serviceType can be used to control the type of service created. For
  # example, setting this to "LoadBalancer" will create an external load
  # balancer (for supported K8s installations) to access the UI.
  service:
    enabled: true
    type: LoadBalancer

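A minimal sketch of exposing the UI inside the cluster only, using a ClusterIP service rather than an external load balancer:

```yaml
ui:
  enabled: true
  service:
    enabled: true
    # Reachable only from within the cluster; use port-forwarding or an
    # ingress of your own to reach it from outside.
    type: ClusterIP
```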
# Run Vault in "dev" mode. This requires no further setup, no state management,
# and no initialization. This is useful for experimenting with Vault without
# needing to unseal, store keys, etc. All data is lost on restart - do not
# use dev mode for anything other than experimenting.
# See https://www.vaultproject.io/docs/concepts/dev-server.html for more information.
dev:
  enabled: false
  image: null
  serviceType: "ClusterIP"