copy from idpbuilder example dir (#3)

Signed-off-by: Manabu McCloskey <manabu.mccloskey@gmail.com>
Manabu McCloskey, 2024-06-04 14:43:36 -07:00, committed by GitHub
parent d147e84b3f
commit 388a1b5b4f
86 changed files with 14817 additions and 0 deletions

basic/README.md
@@ -0,0 +1,18 @@
## Basic Example
This directory contains basic examples of using the custom package feature.
### Local manifests
The [package1 directory](./package1) is an example of a custom package that you have developed locally and want to test.
This configuration instructs idpbuilder to:
1. Create a Gitea repository.
2. Sync the contents of the [manifests](./package1/manifests) directory to the repository.
3. Replace the `spec.source(s).repoURL` field with the Gitea repository URL.
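To make this concrete, here is a sketch of what the `source` block in [app.yaml](./package1/app.yaml) effectively becomes after idpbuilder rewrites the `cnoe://` URL. The repository URL below is illustrative only; the actual host and repository name are generated by idpbuilder:

```yaml
# Hypothetical rewritten source block; the repoURL shown is illustrative.
source:
  repoURL: https://gitea.cnoe.localtest.me:8443/giteaAdmin/idpbuilder-localdev-my-app-manifests.git
  targetRevision: HEAD
  path: "."
```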
### Remote manifests
The [package2 directory](./package2) is an example of a package whose manifests are available remotely. The Application is applied to the cluster as-is.

basic/package1/app.yaml
@@ -0,0 +1,24 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-app
namespace: argocd
labels:
example: basic
spec:
destination:
namespace: my-app
server: "https://kubernetes.default.svc"
source:
# cnoe:// indicates we want to sync from a local directory.
# the value after cnoe:// is treated as a relative path from this file.
repoURL: cnoe://manifests
targetRevision: HEAD
# with path set to '.' and repoURL set to cnoe://manifests, ArgoCD syncs from the ./manifests directory.
path: "."
project: default
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true

@@ -0,0 +1,17 @@
apiVersion: v1
kind: Pod
metadata:
name: busybox
namespace: argocd
labels:
abc: ded
notused: remove-me
spec:
containers:
- image: alpine:3.18
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
name: busybox
restartPolicy: Always

basic/package2/app.yaml
@@ -0,0 +1,22 @@
# this is an ordinary ArgoCD application file
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: guestbook
namespace: argocd
labels:
example: basic
spec:
project: default
source:
repoURL: https://github.com/argoproj/argocd-example-apps.git
targetRevision: HEAD
path: guestbook
destination:
server: https://kubernetes.default.svc
namespace: guestbook
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true

basic/package2/app2.yaml
@@ -0,0 +1,22 @@
# this is an ordinary ArgoCD application file
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: guestbook2
namespace: argocd
labels:
example: basic
spec:
project: default
source:
repoURL: https://github.com/argoproj/argocd-example-apps.git
targetRevision: HEAD
path: guestbook
destination:
server: https://kubernetes.default.svc
namespace: guestbook2
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true

local-backup/README.md
@@ -0,0 +1,169 @@
# Local Backup with Velero and Minio
This example creates a configuration that allows you to back up Kubernetes objects
to your laptop (or wherever you are running idpbuilder from).
In short, it:
1. Creates a [MinIO](https://min.io/) installation that mounts a local directory.
2. Creates a [Velero](https://velero.io/) installation that targets the in-cluster MinIO storage.
## Installation
First, we need to ensure the local cluster is configured to mount a local directory.
This is done through the kind configuration file that you can supply to `idpbuilder`.
Take a look at the [kind.yaml](./kind.yaml) file. The most relevant part is this bit:
```yaml
nodes:
- role: control-plane
extraMounts:
- hostPath: /home/ubuntu/backup # replace with your own path
containerPath: /backup
```
This instructs Kind to make your machine's directory at `/home/ubuntu/backup`
available at `/backup` for the Kubernetes node.
You **must** change this value for your own setup, and the directory must exist on your machine.
For example, you may want to change it to `/Users/my-name/backup`.
Once you've made the change, run this command from the root of this repository.
```bash
# example: mkdir /Users/my-name/backup
mkdir <path/to/directory>
idpbuilder create --kind-config examples/local-backup/kind.yaml --package-dir examples/local-backup/
```
This command:
1. Creates a standard idpbuilder installation: a kind cluster and the core packages (ArgoCD, Gitea, and Ingress-Nginx).
2. Creates two custom packages: [MinIO](./minio.yaml) and [Velero](./velero.yaml).
Once the command exits, you can check the status of the installation by going to https://argocd.cnoe.localtest.me:8443/applications.
You can also check the status with the following command:
```bash
kubectl get application -n argocd
```
## Using it
Once the MinIO and Velero ArgoCD applications are ready, you can start using them.
The MinIO console is accessible at [https://minio.cnoe.localtest.me:8443/login](https://minio.cnoe.localtest.me:8443/login).
You can log in to the console after obtaining credentials:
```bash
kubectl -n minio get secret root-creds -o go-template='{{ range $key, $value := .data }}{{ printf "%s: %s\n" $key ($value | base64decode) }}{{ end }}'
# example output
# rootPassword: aKKZzLnyry6OYZts17vMTf32H5ghFL4WYgu6bHujm
# rootUser: ge8019yksArb7BICt3MLY9
```
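If you prefer plain `jq` over go-templates, the same decoding can be done from the secret's JSON. The snippet below uses simulated, hypothetical secret values so it runs without a cluster; against a live cluster you would pipe `kubectl -n minio get secret root-creds -o json` into the same `jq` filter:

```shell
# Simulated output of 'kubectl -n minio get secret root-creds -o json'
# (the values here are hypothetical placeholders).
secret='{"data":{"rootUser":"bWluaW8tdXNlcg==","rootPassword":"bWluaW8tcGFzcw=="}}'

# Decode every field under .data, printing "key: value" pairs.
echo "$secret" | jq -r '.data | to_entries[] | "\(.key): \(.value|@base64d)"'
```

This relies on `jq` 1.6 or later for the `@base64d` filter.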
Once you log in, you will notice a bucket has already been created for you. Velero will use this bucket to back up Kubernetes objects.
![image](./images/bucket.png)
### Backup
Let's try creating a backup of an example application.
First, create an example nginx app straight from the Velero repository.
```bash
kubectl apply -f https://raw.githubusercontent.com/vmware-tanzu/velero/main/examples/nginx-app/base.yaml
```
Once the resources are created and running, create a backup.
```bash
kubectl apply -f examples/local-backup/demo/backup.yaml
```
This command is equivalent to this Velero command: `velero backup create nginx-backup --selector app=nginx`
After you run the command, go back to the MinIO console. You will notice that objects have been created in your bucket.
![img.png](./images/nginx-backup.png)
You can also see these files on your local machine.
```shell
$ ls -lh /home/ubuntu/backup/idpbuilder-backups/backups/nginx-backup/
total 44K
drwxr-xr-x 2 ubuntu ubuntu 4.0K Jan 18 01:25 nginx-backup-csi-volumesnapshotclasses.json.gz
drwxr-xr-x 2 ubuntu ubuntu 4.0K Jan 18 01:25 nginx-backup-csi-volumesnapshotcontents.json.gz
drwxr-xr-x 2 ubuntu ubuntu 4.0K Jan 18 01:25 nginx-backup-csi-volumesnapshots.json.gz
drwxr-xr-x 2 ubuntu ubuntu 4.0K Jan 18 01:25 nginx-backup-itemoperations.json.gz
drwxr-xr-x 2 ubuntu ubuntu 4.0K Jan 18 01:25 nginx-backup-logs.gz
drwxr-xr-x 2 ubuntu ubuntu 4.0K Jan 18 01:25 nginx-backup-podvolumebackups.json.gz
drwxr-xr-x 2 ubuntu ubuntu 4.0K Jan 18 01:25 nginx-backup-resource-list.json.gz
drwxr-xr-x 2 ubuntu ubuntu 4.0K Jan 18 01:25 nginx-backup-results.gz
drwxr-xr-x 2 ubuntu ubuntu 4.0K Jan 18 01:25 nginx-backup-volumesnapshots.json.gz
drwxr-xr-x 2 ubuntu ubuntu 4.0K Jan 18 01:25 nginx-backup.tar.gz
drwxr-xr-x 2 ubuntu ubuntu 4.0K Jan 18 01:25 velero-backup.json
```
### Restore
Let's simulate a cluster loss by deleting the kind cluster forcibly.
```bash
kind delete clusters localdev && docker system prune -f
```
Once it is destroyed, create it again.
```bash
idpbuilder create --kind-config examples/local-backup/kind.yaml --package-dir examples/local-backup/
```
Make sure everything looks good:
```bash
$ kubectl get application -n argocd
NAME SYNC STATUS HEALTH STATUS
argocd Synced Healthy
gitea Synced Healthy
minio Synced Healthy
nginx Synced Healthy
velero Synced Healthy
```
Let's make sure Velero can validate the MinIO bucket:
```bash
$ kubectl get backupstoragelocations.velero.io -n velero
NAME PHASE LAST VALIDATED AGE DEFAULT
default Available 4s 52m true
```
Looks good. Let's make sure the backup from the destroyed cluster is available.
```bash
$ kubectl get backup -n velero
NAME AGE
nginx-backup 1m
```
Restore objects from this backup.
```bash
kubectl apply -f examples/local-backup/demo/restore.yaml
```
This command is equivalent to `velero restore create --from-backup nginx-backup`.
Verify everything was restored:
```bash
$ kubectl get backup -n velero -o custom-columns="NAME":.metadata.name,"PHASE":.status.phase
NAME PHASE
nginx-backup Completed
$ kubectl get pods -n nginx-example
```

@@ -0,0 +1,12 @@
# velero backup create nginx-backup --selector app=nginx
apiVersion: velero.io/v1
kind: Backup
metadata:
name: nginx-backup
namespace: velero
spec:
includedNamespaces:
- 'nginx-example'
labelSelector:
matchLabels:
app: nginx

@@ -0,0 +1,10 @@
# /velero restore create --from-backup nginx-backup
apiVersion: velero.io/v1
kind: Restore
metadata:
name: nginx-backup
namespace: velero
spec:
backupName: nginx-backup
includedNamespaces:
- 'nginx-example'

Binary file not shown (image, 43 KiB)

Binary file not shown (image, 47 KiB)

local-backup/kind.yaml
@@ -0,0 +1,18 @@
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
image: "kindest/node:v1.27.3"
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
extraMounts:
- hostPath: /home/ubuntu/backup # replace with your own path
containerPath: /backup
extraPortMappings:
- containerPort: 443
hostPort: {{ .Port }}
protocol: TCP

local-backup/minio.yaml
@@ -0,0 +1,33 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: minio
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
sources:
- repoURL: 'https://charts.min.io'
targetRevision: 5.0.15
helm:
releaseName: minio
valueFiles:
- $values/helm/values.yaml
chart: minio
- repoURL: cnoe://minio
targetRevision: HEAD
ref: values
- repoURL: cnoe://minio
targetRevision: HEAD
path: "manifests"
destination:
server: "https://kubernetes.default.svc"
namespace: minio
syncPolicy:
syncOptions:
- CreateNamespace=true
automated:
selfHeal: true

@@ -0,0 +1,23 @@
replicas: 1
mode: standalone
resources:
requests:
memory: 128Mi
persistence:
enabled: true
storageClass: standard
size: 512Mi
volumeName: backup
buckets:
- name: idpbuilder-backups
consoleIngress:
enabled: true
ingressClassName: nginx
hosts:
- minio.cnoe.localtest.me
existingSecret: root-creds

@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: backup
spec:
storageClassName: standard
accessModes:
- ReadWriteOnce
capacity:
storage: 512Mi
hostPath:
path: /backup

@@ -0,0 +1,154 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: secret-sync
namespace: minio
annotations:
argocd.argoproj.io/hook: Sync
argocd.argoproj.io/sync-wave: "-20"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: secret-sync
namespace: minio
annotations:
argocd.argoproj.io/hook: Sync
argocd.argoproj.io/sync-wave: "-20"
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: secret-sync
namespace: minio
annotations:
argocd.argoproj.io/hook: Sync
argocd.argoproj.io/sync-wave: "-20"
subjects:
- kind: ServiceAccount
name: secret-sync
namespace: minio
roleRef:
kind: Role
name: secret-sync
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: secret-sync
namespace: velero
annotations:
argocd.argoproj.io/hook: Sync
argocd.argoproj.io/sync-wave: "-20"
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: secret-sync
namespace: velero
annotations:
argocd.argoproj.io/hook: Sync
argocd.argoproj.io/sync-wave: "-20"
subjects:
- kind: ServiceAccount
name: secret-sync
namespace: minio
roleRef:
kind: Role
name: secret-sync
apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: Job
metadata:
name: secret-sync
namespace: minio
annotations:
argocd.argoproj.io/hook: PostSync
spec:
template:
metadata:
generateName: secret-sync
spec:
serviceAccountName: secret-sync
restartPolicy: Never
containers:
- name: kubectl
image: docker.io/bitnami/kubectl
command: ["/bin/bash", "-c"]
args:
- |
set -e
kubectl get secrets -n minio root-creds -o json > /tmp/secret
ACCESS=$(jq -r '.data.rootUser | @base64d' /tmp/secret)
SECRET=$(jq -r '.data.rootPassword | @base64d' /tmp/secret)
echo \
"apiVersion: v1
kind: Secret
metadata:
name: secret-key
namespace: velero
type: Opaque
stringData:
aws: |
[default]
aws_access_key_id=${ACCESS}
aws_secret_access_key=${SECRET}
" > /tmp/secret.yaml
kubectl apply -f /tmp/secret.yaml
---
apiVersion: batch/v1
kind: Job
metadata:
name: minio-root-creds
namespace: minio
annotations:
argocd.argoproj.io/hook: Sync
argocd.argoproj.io/sync-wave: "-10"
spec:
template:
metadata:
generateName: minio-root-creds
spec:
serviceAccountName: secret-sync
restartPolicy: Never
containers:
- name: kubectl
image: docker.io/bitnami/kubectl
command: ["/bin/bash", "-c"]
args:
- |
kubectl get secrets -n minio root-creds
if [ $? -eq 0 ]; then
exit 0
fi
set -e
NAME=$(openssl rand -base64 24)
PASS=$(openssl rand -base64 36)
echo \
"apiVersion: v1
kind: Secret
metadata:
name: root-creds
namespace: minio
type: Opaque
stringData:
rootUser: "${NAME}"
rootPassword: "${PASS}"
" > /tmp/secret.yaml
kubectl apply -f /tmp/secret.yaml

local-backup/velero.yaml
@@ -0,0 +1,31 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: velero
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
sources:
- repoURL: 'https://vmware-tanzu.github.io/helm-charts'
targetRevision: 5.2.2
helm:
releaseName: velero
valueFiles:
- $values/helm/values.yaml
chart: velero
- repoURL: cnoe://velero
targetRevision: HEAD
ref: values
destination:
server: "https://kubernetes.default.svc"
namespace: velero
syncPolicy:
syncOptions:
- CreateNamespace=true
automated:
prune: true
selfHeal: true

@@ -0,0 +1,23 @@
resources:
requests:
memory: 128Mi
snapshotsEnabled: false
initContainers:
- name: velero-plugin-for-aws
image: velero/velero-plugin-for-aws:v1.8.2
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /target
name: plugins
configuration:
backupStorageLocation:
- name: default
provider: aws
bucket: idpbuilder-backups
credential:
name: secret-key
key: aws
config:
region: minio
s3Url: http://minio.minio.svc.cluster.local:9000
s3ForcePathStyle: "true"

@@ -0,0 +1,17 @@
# Localstack Integration
Use the command below to deploy an IDP reference implementation with an Argo application that adds Localstack and integrates it with Crossplane.
```bash
idpbuilder create \
--use-path-routing \
--package-dir examples/ref-implementation \
--package-dir examples/localstack-integration
```
As shown above, this add-on to `idpbuilder` depends on the [reference implementation](../ref-implementation/). This command primarily does the following:
1. Installs the `localstack` Helm chart as an Argo application.
2. Adds a Localstack Crossplane `ProviderConfig`, targeting Localstack.
Once the custom package is installed, Localstack can be used from the Backstage template `app-with-aws-resources` by changing the `providerConfigName` field on the bucket configuration page from `default` to `localstack`.
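For reference, the `app-with-aws-resources` template in this repository serializes the bucket configuration into an `ObjectStorage` resource, so a bucket targeting Localstack would carry the override roughly like this (a sketch; the name and region are illustrative):

```yaml
apiVersion: awsblueprints.io/v1alpha1
kind: ObjectStorage
metadata:
  name: demo3
spec:
  resourceConfig:
    providerConfigName: localstack   # 'default' would target real AWS credentials
    region: us-east-1
```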

@@ -0,0 +1,22 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: crossplane-provider-localstack
namespace: argocd
labels:
example: localstack-integration
spec:
project: default
source:
repoURL: cnoe://crossplane-provider-localstack
targetRevision: HEAD
path: "."
destination:
server: "https://kubernetes.default.svc"
namespace: crossplane-system
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true

@@ -0,0 +1,19 @@
apiVersion: aws.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
name: localstack
annotations:
argocd.argoproj.io/sync-wave: "20"
argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
credentials:
source: Secret
secretRef:
namespace: crossplane-system
name: local-secret
key: creds
endpoint:
hostnameImmutable: true
url:
type: Static
static: http://localstack.localstack.svc.cluster.local:4566

@@ -0,0 +1,24 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: localstack
namespace: argocd
labels:
example: localstack-integration
spec:
project: default
source:
repoURL: https://localstack.github.io/helm-charts
targetRevision: 0.6.12
chart: localstack
helm:
releaseName: localstack
destination:
server: "https://kubernetes.default.svc"
namespace: localstack
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true

@@ -0,0 +1,162 @@
# Reference implementation
This example creates a local version of the CNOE reference implementation.
## Prerequisites
Ensure you have the following tools installed on your computer.
**Required**
- [idpbuilder](https://github.com/cnoe-io/idpbuilder/releases/latest): version `0.3.0` or later
- [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl): version `1.27` or later
- Your computer should have at least 6 GB RAM allocated to Docker. If you are on Docker Desktop, see [this guide](https://docs.docker.com/desktop/settings/mac/).
**Optional**
- AWS credentials (access key and secret key), if you want to create AWS resources in one of the examples below.
## Installation
**_NOTE:_**
- If you'd like to run this in your web browser through Codespaces, please follow [the instructions here](./codespaces.md) to install instead.
- _This example assumes you run the reference implementation with the default idpbuilder port configuration of 8443.
If you configure a different host or port for idpbuilder, the manifests in this example need to be updated
with the new host and port. You can use [replace.sh](replace.sh) to change the port as desired before applying the manifests._
Run the following command from the root of this repository.
```bash
idpbuilder create --use-path-routing --package-dir examples/ref-implementation
```
This will take ~6 minutes for everything to come up. To track the progress, you can go to the [ArgoCD UI](https://cnoe.localtest.me:8443/argocd/applications).
### What was installed?
1. **Argo Workflows** to enable workflow orchestrations.
2. **Backstage** as the UI for software catalog and templating. Source is available [here](https://github.com/cnoe-io/backstage-app).
3. **Crossplane**, AWS providers, and basic compositions for deploying cloud-related resources (your credentials are needed for this to work).
4. **External Secrets** to generate secrets and coordinate secrets between applications.
5. **Keycloak** as the identity provider for applications.
6. **Spark Operator** to demonstrate an example Spark workload through Backstage.
If you don't want to install one of the packages above, remove the corresponding ArgoCD Application file.
For example, to remove the Spark Operator, delete [this file](./spark-operator.yaml).
```bash
# remove spark operator from this installation.
rm examples/ref-implementation/spark-operator.yaml
```
The only package that cannot be removed this way is Keycloak because other packages rely on it.
#### Accessing UIs
- Argo CD: https://cnoe.localtest.me:8443/argocd
- Argo Workflows: https://cnoe.localtest.me:8443/argo-workflows
- Backstage: https://cnoe.localtest.me:8443/
- Gitea: https://cnoe.localtest.me:8443/gitea
- Keycloak: https://cnoe.localtest.me:8443/keycloak/admin/master/console/
# Using it
For this example, we will walk through a few demonstrations. Once applications are ready, go to the [backstage URL](https://cnoe.localtest.me:8443).
Click on the Sign-In button; you will be asked to log into the Keycloak instance. Two users are set up in this
configuration, and their passwords can be retrieved with the following command:
```bash
idpbuilder get secrets
```
Use the username **`user1`** and the password given by the `USER_PASSWORD` field to log in to the Backstage instance.
`user1` is an admin user who has access to everything in the cluster, while `user2` is a regular user with limited access.
Both users use the same password retrieved above.
If you want to create a new user or change existing users:
1. Go to the [Keycloak UI](https://cnoe.localtest.me:8443/keycloak/admin/master/console/).
Log in with the username `cnoe-admin`. The password is the `KEYCLOAK_ADMIN_PASSWORD` field from the command above.
2. Select `cnoe` from the realms drop down menu.
3. Select the Users tab.
## Basic Deployment
Let's start by deploying a simple application to the cluster through Backstage.
Click on the `Create...` button on the left, then select the `Create a Basic Deployment` template.
![img.png](images/backstage-templates.png)
In the next screen, type `demo` in the name field, then click Review, then Create.
Once the steps run, click the Open In Catalog button to go to the entity page.
![img.png](images/basic-template-flow.png)
In the demo entity page, you will notice an ArgoCD overview card associated with this entity.
You can click on the ArgoCD Application name to see more details.
![img.png](images/demo-entity.png)
### What just happened?
1. Backstage created [a git repository](https://cnoe.localtest.me:8443/gitea/giteaAdmin/demo), then pushed templated contents to it.
2. Backstage created [an ArgoCD Application](https://cnoe.localtest.me:8443/argocd/applications/argocd/demo?) and pointed it to the git repository.
3. Backstage registered the application as [a component](https://cnoe.localtest.me:8443/gitea/giteaAdmin/demo/src/branch/main/catalog-info.yaml) in Backstage.
4. ArgoCD deployed the manifests stored in the repo to the cluster.
5. Backstage retrieved the application health from the ArgoCD API, then displayed it.
![image.png](images/basic-deployment.png)
## Argo Workflows and Spark Operator
In this example, we will deploy a simple Apache Spark job through Argo Workflows.
Click on the `Create...` button on the left, then select the `Basic Argo Workflow with a Spark Job` template.
![img.png](images/backstage-templates-spark.png)
Type `demo2` for the name field, then click create. You will notice that the Backstage templating steps are very similar to the basic example above.
Click on the Open In Catalog button to go to the entity page.
![img.png](images/demo2-entity.png)
The deployment process is the same as in the first example. Instead of deploying a pod, we deployed a workflow that creates a Spark job.
In the entity page, there is a card for Argo Workflows, and it should say running or succeeded.
You can click the name in the card to go to the Argo Workflows UI to view more details about this workflow run.
When prompted to log in, click the login button under single sign-on. Argo Workflows is configured to use SSO with Keycloak, allowing you to log in with the same credentials as Backstage.
Note that Argo Workflows is not usually deployed this way. This is just an example showing how you can integrate workflows, Backstage, and Spark.
Back in the entity page, you can view more details about Spark jobs by navigating to the Spark tab.
## Application with cloud resources
Similar to the above, we can deploy an application with cloud resources using Backstage templates.
In this example, we will create an application with an S3 bucket.
Choose the template named `App with S3 bucket`, type `demo3` as the name, then choose a region to create the bucket in.
Once you click the create button, you will have a setup very similar to the basic example.
The only difference is that we now have a resource for an S3 bucket, managed by Crossplane.
Note that the bucket is **not** created because Crossplane doesn't have the necessary credentials.
If you'd like it to actually create a bucket, update [the credentials secret file](crossplane-providers/provider-secret.yaml), then run `idpbuilder create --package-dir examples/ref-implementation`.
In this example, we used Crossplane to provision resources, but you can use other cloud resource management tools such as Terraform instead.
Regardless of your tool choice, concepts are the same. We use Backstage as the templating mechanism and UI for users, then use Kubernetes API with GitOps to deploy resources.
## Notes
- In these examples, we used the pattern of creating a new repository for every app and having ArgoCD deploy it.
This is done for convenience and demonstration purposes only; there are alternative approaches.
For example, you can open a PR against an existing repository, or create a repository without deploying it yet.
- If Backstage's pipelining and templating mechanisms are too simple for your needs, you can use more advanced workflow engines like Tekton or Argo Workflows.
You can invoke them from Backstage templates, then track progress as described above.

@@ -0,0 +1,23 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: argo-workflows
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
repoURL: cnoe://argo-workflows/manifests
targetRevision: HEAD
path: "dev"
destination:
server: "https://kubernetes.default.svc"
namespace: argo
syncPolicy:
syncOptions:
- CreateNamespace=true
automated:
selfHeal: true

File diff suppressed because it is too large

@@ -0,0 +1,2 @@
resources:
- install.yaml

@@ -0,0 +1,20 @@
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: keycloak-oidc
namespace: argo
spec:
secretStoreRef:
name: keycloak
kind: ClusterSecretStore
target:
name: keycloak-oidc
data:
- secretKey: client-id
remoteRef:
key: keycloak-clients
property: ARGO_WORKFLOWS_CLIENT_ID
- secretKey: secret-key
remoteRef:
key: keycloak-clients
property: ARGO_WORKFLOWS_CLIENT_SECRET

@@ -0,0 +1,31 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: argo-workflows-ingress
namespace: argo
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
ingressClassName: "nginx"
rules:
- host: localhost
http:
paths:
- path: /argo-workflows(/|$)(.*)
pathType: ImplementationSpecific
backend:
service:
name: argo-server
port:
name: web
- host: cnoe.localtest.me
http:
paths:
- path: /argo-workflows(/|$)(.*)
pathType: ImplementationSpecific
backend:
service:
name: argo-server
port:
name: web

@@ -0,0 +1,8 @@
resources:
- ../base
- external-secret.yaml
- ingress.yaml
- sa-admin.yaml
patches:
- path: patches/cm-argo-workflows.yaml
- path: patches/deployment-argo-server.yaml

@@ -0,0 +1,26 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: workflow-controller-configmap
namespace: argo
data:
config: |
sso:
insecureSkipVerify: true
issuer: https://cnoe.localtest.me:8443/keycloak/realms/cnoe
clientId:
name: keycloak-oidc
key: client-id
clientSecret:
name: keycloak-oidc
key: secret-key
redirectUrl: https://cnoe.localtest.me:8443/argo-workflows/oauth2/callback
rbac:
enabled: true
scopes:
- openid
- profile
- email
- groups
nodeEvents:
enabled: false

@@ -0,0 +1,28 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: argo-server
namespace: argo
spec:
template:
spec:
containers:
- name: argo-server
readinessProbe:
httpGet:
path: /
port: 2746
scheme: HTTP
env:
- name: BASE_HREF
value: "/argo-workflows/"
args:
- server
- --configmap=workflow-controller-configmap
- --auth-mode=client
- --auth-mode=sso
- "--secure=false"
- "--loglevel"
- "info"
- "--log-format"
- "text"

@@ -0,0 +1,32 @@
# Used by users in the admin group
# TODO Need to tighten up permissions.
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin
namespace: argo
annotations:
workflows.argoproj.io/rbac-rule: "'admin' in groups"
workflows.argoproj.io/rbac-rule-precedence: "10"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: argo-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin
namespace: argo
---
apiVersion: v1
kind: Secret
metadata:
name: admin.service-account-token
annotations:
kubernetes.io/service-account.name: admin
namespace: argo
type: kubernetes.io/service-account-token

@@ -0,0 +1,23 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: backstage-templates
namespace: argocd
labels:
env: dev
spec:
project: default
source:
repoURL: cnoe://backstage-templates/entities
targetRevision: HEAD
path: "."
directory:
exclude: 'catalog-info.yaml'
destination:
server: "https://kubernetes.default.svc"
namespace: backstage
syncPolicy:
syncOptions:
- CreateNamespace=true
automated:
selfHeal: true

@@ -0,0 +1,30 @@
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: ${{values.name}}-bucket
description: Stores things
annotations:
argocd/app-name: ${{values.name | dump}}
spec:
type: s3-bucket
owner: guest
---
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: ${{values.name | dump}}
description: This is for testing purposes
annotations:
backstage.io/kubernetes-label-selector: 'entity-id=${{values.name}}'
backstage.io/kubernetes-namespace: default
argocd/app-name: ${{values.name | dump}}
links:
- url: https://cnoe.localtest.me:8443/gitea
title: Repo URL
icon: github
spec:
owner: guest
lifecycle: experimental
type: service
dependsOn:
- resource:default/${{values.name}}-bucket

@@ -0,0 +1,3 @@
module ${{ values.name }}
go 1.19

@@ -0,0 +1,3 @@
resources:
- nginx.yaml
- ${{ values.name }}.yaml

@@ -0,0 +1,34 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
targetPort: 80
selector:
app: nginx

@@ -0,0 +1,35 @@
{%- if values.awsResources %}
resources:
{%- if 'Bucket' in values.awsResources %}
- ../base/
{%- endif %}
{%- if 'Table' in values.awsResources %}
- ../base/table.yaml
{%- endif %}
{%- endif %}
namespace: default
patches:
- target:
kind: Deployment
patch: |
apiVersion: apps/v1
kind: Deployment
metadata:
name: not-used
labels:
backstage.io/kubernetes-id: ${{values.name}}
spec:
template:
metadata:
labels:
backstage.io/kubernetes-id: ${{values.name}}
- target:
kind: Service
patch: |
apiVersion: v1
kind: Service
metadata:
name: not-used
labels:
backstage.io/kubernetes-id: ${{values.name}}

@@ -0,0 +1,5 @@
package main
func main() {
}

@@ -0,0 +1,126 @@
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
description: Adds a Go application with AWS resources
name: app-with-aws-resources
title: Add a Go App with AWS resources
spec:
owner: guest
type: service
parameters:
- properties:
name:
title: Application Name
type: string
description: Unique name of the component
ui:autofocus: true
labels:
title: Labels
type: object
additionalProperties:
type: string
description: Labels to apply to the application
ui:autofocus: true
required:
- name
title: Choose your repository location
- description: Configure your bucket
properties:
apiVersion:
default: awsblueprints.io/v1alpha1
description: APIVersion for the resource
type: string
kind:
default: ObjectStorage
description: Kind for the resource
type: string
config:
description: ObjectStorageSpec defines the desired state of ObjectStorage
properties:
resourceConfig:
description: ResourceConfig defines general properties of this AWS resource.
properties:
deletionPolicy:
description: Defaults to Delete
enum:
- Delete
- Orphan
type: string
region:
type: string
providerConfigName:
type: string
default: default
tags:
items:
properties:
key:
type: string
value:
type: string
required:
- key
- value
type: object
type: array
required:
- region
type: object
required:
- resourceConfig
title: Bucket configuration options
type: object
steps:
- id: template
name: Generating component
action: fetch:template
input:
url: ./skeleton
values:
name: ${{parameters.name}}
- action: roadiehq:utils:serialize:yaml
id: serialize
input:
data:
apiVersion: awsblueprints.io/v1alpha1
kind: ${{ parameters.kind }}
metadata:
name: ${{ parameters.name }}
spec: ${{ parameters.config }}
name: serialize
- action: roadiehq:utils:fs:write
id: write
input:
content: ${{ steps['serialize'].output.serialized }}
path: kustomize/base/${{ parameters.name }}.yaml
name: write-to-file
- id: publish
name: Publishing to a gitea git repository
action: publish:gitea
input:
description: This is an example app
      # Hard-coded value for demo purposes only.
repoUrl: cnoe.localtest.me:8443/gitea?repo=${{parameters.name}}
defaultBranch: main
- id: create-argocd-app
name: Create ArgoCD App
action: cnoe:create-argocd-app
input:
appName: ${{parameters.name}}
appNamespace: default
argoInstance: in-cluster
projectName: default
# necessary until we generate our own cert
repoUrl: http://my-gitea-http.gitea.svc.cluster.local:3000/giteaAdmin/${{parameters.name}}
path: "kustomize/base"
- id: register
name: Register
action: catalog:register
input:
repoContentsUrl: ${{ steps['publish'].output.repoContentsUrl }}
catalogInfoPath: 'catalog-info.yaml'
output:
links:
- title: Open in catalog
icon: catalog
entityRef: ${{ steps['register'].output.entityRef }}


@ -0,0 +1,22 @@
---
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: ${{values.name | dump}}
description: This is for testing purposes
annotations:
backstage.io/kubernetes-label-selector: 'entity-id=${{values.name}}'
backstage.io/kubernetes-namespace: argo
argocd/app-name: ${{values.name | dump}}
argo-workflows.cnoe.io/label-selector: env=dev,entity-id=${{values.name}}
argo-workflows.cnoe.io/cluster-name: local
apache-spark.cnoe.io/label-selector: env=dev,entity-id=${{values.name}}
apache-spark.cnoe.io/cluster-name: local
links:
- url: https://cnoe.localtest.me:8443/gitea
title: Repo URL
icon: github
spec:
owner: guest
lifecycle: experimental
type: service


@ -0,0 +1,109 @@
# apiVersion: argoproj.io/v1alpha1
# kind: Workflow
# metadata:
# name: ${{values.name}}
# namespace: argo
# labels:
# env: dev
# entity-id: ${{values.name}}
# spec:
# serviceAccountName: admin
# entrypoint: whalesay
# templates:
# - name: whalesay
# container:
# image: docker/whalesay:latest
# command: [cowsay]
# args: ["hello world"]
---
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
name: ${{values.name}}
namespace: argo
labels:
env: dev
entity-id: ${{values.name}}
spec:
serviceAccountName: admin
entrypoint: main
templates:
- name: main
steps:
- - name: spark-job
template: spark-job
- - name: wait
template: wait
arguments:
parameters:
- name: spark-job-name
value: '{{steps.spark-job.outputs.parameters.spark-job-name}}'
- name: wait
inputs:
parameters:
- name: spark-job-name
resource:
action: get
successCondition: status.applicationState.state == COMPLETED
failureCondition: status.applicationState.state == FAILED
manifest: |
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: {{inputs.parameters.spark-job-name}}
namespace: argo
- name: spark-job
outputs:
parameters:
- name: spark-job-name
valueFrom:
jsonPath: '{.metadata.name}'
resource:
action: create
setOwnerReference: true
manifest: |
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: spark-pi-${{values.name}}
namespace: argo
labels:
env: dev
entity-id: ${{values.name}}
spec:
type: Scala
mode: cluster
image: "docker.io/apache/spark:v3.1.3"
imagePullPolicy: IfNotPresent
mainClass: org.apache.spark.examples.SparkPi
mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.12-3.1.3.jar"
sparkVersion: "3.1.1"
restartPolicy:
type: Never
volumes:
- name: "test-volume"
hostPath:
path: "/tmp"
type: Directory
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
labels:
version: 3.1.1
serviceAccount: admin
volumeMounts:
- name: "test-volume"
mountPath: "/tmp"
executor:
cores: 1
instances: 1
memory: "512m"
labels:
version: 3.1.1
volumeMounts:
- name: "test-volume"
mountPath: "/tmp"


@ -0,0 +1,62 @@
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
description: Creates a Basic Kubernetes Deployment
name: argo-workflows-basic
title: Basic Argo Workflow with a Spark Job
spec:
owner: guest
type: service
parameters:
- title: Configuration Options
required:
- name
properties:
name:
type: string
description: name of this application
mainApplicationFile:
type: string
default: 'local:///opt/spark/examples/jars/spark-examples_2.12-3.1.3.jar'
description: Path to the main application file
steps:
- id: template
name: Generating component
action: fetch:template
input:
url: ./skeleton
values:
name: ${{parameters.name}}
- id: publish
name: Publishing to a gitea git repository
action: publish:gitea
input:
description: This is an example app
      # Hard-coded value for demo purposes only.
repoUrl: cnoe.localtest.me:8443/gitea?repo=${{parameters.name}}
defaultBranch: main
- id: create-argocd-app
name: Create ArgoCD App
action: cnoe:create-argocd-app
input:
appName: ${{parameters.name}}
appNamespace: ${{parameters.name}}
argoInstance: in-cluster
projectName: default
# necessary until we generate our own cert
repoUrl: http://my-gitea-http.gitea.svc.cluster.local:3000/giteaAdmin/${{parameters.name}}
path: "manifests"
- id: register
name: Register
action: catalog:register
input:
repoContentsUrl: ${{ steps['publish'].output.repoContentsUrl }}
catalogInfoPath: 'catalog-info.yaml'
output:
links:
- title: Open in catalog
icon: catalog
entityRef: ${{ steps['register'].output.entityRef }}


@ -0,0 +1,18 @@
---
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: ${{values.name | dump}}
description: This is for testing purposes
annotations:
backstage.io/kubernetes-label-selector: 'entity-id=${{values.name}}'
backstage.io/kubernetes-namespace: default
argocd/app-name: ${{values.name | dump}}
links:
- url: https://cnoe.localtest.me:8443/gitea
title: Repo URL
icon: github
spec:
owner: guest
lifecycle: experimental
type: service


@ -0,0 +1,24 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: ${{values.name | dump}}
namespace: default
labels:
entity-id: ${{values.name}}
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
entity-id: ${{values.name}}
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80


@ -0,0 +1,58 @@
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
description: Creates a Basic Kubernetes Deployment
name: basic
title: Create a Basic Deployment
spec:
owner: guest
type: service
parameters:
- title: Configuration Options
required:
- name
properties:
name:
type: string
description: name of this application
steps:
- id: template
name: Generating component
action: fetch:template
input:
url: ./skeleton
values:
name: ${{parameters.name}}
- id: publish
name: Publishing to a gitea git repository
action: publish:gitea
input:
description: This is an example app
      # Hard-coded value for demo purposes only.
repoUrl: cnoe.localtest.me:8443/gitea?repo=${{parameters.name}}
defaultBranch: main
- id: create-argocd-app
name: Create ArgoCD App
action: cnoe:create-argocd-app
input:
appName: ${{parameters.name}}
appNamespace: ${{parameters.name}}
argoInstance: in-cluster
projectName: default
# necessary until we generate our own cert
repoUrl: http://my-gitea-http.gitea.svc.cluster.local:3000/giteaAdmin/${{parameters.name}}
path: "manifests"
- id: register
name: Register
action: catalog:register
input:
repoContentsUrl: ${{ steps['publish'].output.repoContentsUrl }}
catalogInfoPath: 'catalog-info.yaml'
output:
links:
- title: Open in catalog
icon: catalog
entityRef: ${{ steps['register'].output.entityRef }}


@ -0,0 +1,10 @@
apiVersion: backstage.io/v1alpha1
kind: Location
metadata:
name: basic-example-templates
description: A collection of example templates
spec:
targets:
- ./basic/template.yaml
- ./argo-workflows/template.yaml
- ./app-with-bucket/template.yaml


@ -0,0 +1,21 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: backstage
namespace: argocd
labels:
env: dev
spec:
project: default
source:
repoURL: cnoe://backstage/manifests
targetRevision: HEAD
path: "."
destination:
server: "https://kubernetes.default.svc"
namespace: backstage
syncPolicy:
syncOptions:
- CreateNamespace=true
automated:
selfHeal: true


@ -0,0 +1,77 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: eso-store
namespace: argocd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: eso-store
namespace: argocd
rules:
- apiGroups: [""]
resources:
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- authorization.k8s.io
resources:
- selfsubjectrulesreviews
verbs:
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: eso-store
namespace: argocd
subjects:
- kind: ServiceAccount
name: eso-store
namespace: argocd
roleRef:
kind: Role
name: eso-store
apiGroup: rbac.authorization.k8s.io
---
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
name: argocd
spec:
provider:
kubernetes:
remoteNamespace: argocd
server:
caProvider:
type: ConfigMap
name: kube-root-ca.crt
namespace: argocd
key: ca.crt
auth:
serviceAccount:
name: eso-store
namespace: argocd
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: argocd-credentials
namespace: backstage
spec:
secretStoreRef:
name: argocd
kind: ClusterSecretStore
refreshInterval: "0"
target:
name: argocd-credentials
data:
- secretKey: ARGOCD_ADMIN_PASSWORD
remoteRef:
key: argocd-initial-admin-secret
property: password


@ -0,0 +1,457 @@
apiVersion: v1
kind: Namespace
metadata:
name: backstage
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: backstage
namespace: backstage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: backstage-argo-worfklows
rules:
- apiGroups:
- argoproj.io
resources:
- workflows
verbs:
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: read-all
rules:
- apiGroups:
- '*'
resources:
- '*'
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: backstage-argo-worfklows
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: backstage-argo-worfklows
subjects:
- kind: ServiceAccount
name: backstage
namespace: backstage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: backstage-read-all
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: read-all
subjects:
- kind: ServiceAccount
name: backstage
namespace: backstage
---
apiVersion: v1
kind: ConfigMap
metadata:
name: backstage-config
namespace: backstage
data:
app-config.yaml: |
app:
title: CNOE Backstage
baseUrl: https://cnoe.localtest.me:8443
organization:
name: CNOE
backend:
# Used for enabling authentication, secret is shared by all backend plugins
# See https://backstage.io/docs/tutorials/backend-to-backend-auth for
# information on the format
# auth:
# keys:
# - secret: ${BACKEND_SECRET}
baseUrl: https://cnoe.localtest.me:8443
listen:
port: 7007
# Uncomment the following host directive to bind to specific interfaces
# host: 127.0.0.1
csp:
connect-src: ["'self'", 'http:', 'https:']
# Content-Security-Policy directives follow the Helmet format: https://helmetjs.github.io/#reference
# Default Helmet Content-Security-Policy values can be removed by setting the key to false
cors:
origin: https://cnoe.localtest.me:8443
methods: [GET, HEAD, PATCH, POST, PUT, DELETE]
credentials: true
database:
client: pg
connection:
host: ${POSTGRES_HOST}
port: ${POSTGRES_PORT}
user: ${POSTGRES_USER}
password: ${POSTGRES_PASSWORD}
cache:
store: memory
# workingDirectory: /tmp # Use this to configure a working directory for the scaffolder, defaults to the OS temp-dir
integrations:
gitea:
- baseUrl: https://cnoe.localtest.me:8443/gitea
host: cnoe.localtest.me:8443
username: ${GITEA_USERNAME}
password: ${GITEA_PASSWORD}
- baseUrl: https://cnoe.localtest.me/gitea
host: cnoe.localtest.me
username: ${GITEA_USERNAME}
password: ${GITEA_PASSWORD}
# github:
# - host: github.com
# apps:
# - $include: github-integration.yaml
# - host: github.com
# # This is a Personal Access Token or PAT from GitHub. You can find out how to generate this token, and more information
# # about setting up the GitHub integration here: https://backstage.io/docs/getting-started/configuration#setting-up-a-github-integration
# token: ${GITHUB_TOKEN}
### Example for how to add your GitHub Enterprise instance using the API:
# - host: ghe.example.net
# apiBaseUrl: https://ghe.example.net/api/v3
# token: ${GHE_TOKEN}
# Reference documentation http://backstage.io/docs/features/techdocs/configuration
# Note: After experimenting with basic setup, use CI/CD to generate docs
# and an external cloud storage when deploying TechDocs for production use-case.
# https://backstage.io/docs/features/techdocs/how-to-guides#how-to-migrate-from-techdocs-basic-to-recommended-deployment-approach
techdocs:
builder: 'local' # Alternatives - 'external'
generator:
runIn: 'docker' # Alternatives - 'local'
publisher:
type: 'local' # Alternatives - 'googleGcs' or 'awsS3'. Read documentation for using alternatives.
auth:
environment: development
session:
secret: MW2sV-sIPngEl26vAzatV-6VqfsgAx4bPIz7PuE_2Lk=
providers:
keycloak-oidc:
development:
metadataUrl: ${KEYCLOAK_NAME_METADATA}
clientId: backstage
clientSecret: ${KEYCLOAK_CLIENT_SECRET}
scope: 'openid profile email groups'
prompt: auto
scaffolder:
# see https://backstage.io/docs/features/software-templates/configuration for software template options
defaultAuthor:
name: backstage-scaffolder
email: noreply
defaultCommitMessage: "backstage scaffolder"
catalog:
import:
entityFilename: catalog-info.yaml
pullRequestBranchName: backstage-integration
rules:
- allow: [Component, System, API, Resource, Location, Template]
locations:
# Examples from a public GitHub repository.
- type: url
target: https://cnoe.localtest.me/gitea/giteaAdmin/idpbuilder-localdev-backstage-templates-entities/raw/branch/main/catalog-info.yaml
## Uncomment these lines to add an example org
# - type: url
# target: https://github.com/backstage/backstage/blob/master/packages/catalog-model/examples/acme-corp.yaml
# rules:
# - allow: [User, Group]
kubernetes:
serviceLocatorMethod:
type: 'multiTenant'
clusterLocatorMethods:
- $include: k8s-config.yaml
argocd:
username: admin
password: ${ARGOCD_ADMIN_PASSWORD}
appLocatorMethods:
- type: 'config'
instances:
- name: in-cluster
url: https://cnoe.localtest.me:8443/argocd
username: admin
password: ${ARGOCD_ADMIN_PASSWORD}
argoWorkflows:
baseUrl: ${ARGO_WORKFLOWS_URL}
---
apiVersion: v1
kind: Secret
metadata:
name: k8s-config
namespace: backstage
stringData:
  k8s-config.yaml: |
    type: 'config'
    clusters:
      - url: https://kubernetes.default.svc.cluster.local
        name: local
        authProvider: 'serviceAccount'
        skipTLSVerify: true
        skipMetricsLookup: true
        serviceAccountToken:
          $file: /var/run/secrets/kubernetes.io/serviceaccount/token
        caData:
          $file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
---
apiVersion: v1
kind: Service
metadata:
name: backstage
namespace: backstage
spec:
ports:
- name: http
port: 7007
targetPort: http
selector:
app: backstage
---
apiVersion: v1
kind: Service
metadata:
labels:
app: postgresql
name: postgresql
namespace: backstage
spec:
clusterIP: None
ports:
- name: postgres
port: 5432
selector:
app: postgresql
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: backstage
namespace: backstage
annotations:
argocd.argoproj.io/sync-wave: "10"
spec:
replicas: 1
selector:
matchLabels:
app: backstage
template:
metadata:
labels:
app: backstage
spec:
containers:
- command:
- node
- packages/backend
- --config
- config/app-config.yaml
env:
- name: LOG_LEVEL
value: debug
- name: NODE_TLS_REJECT_UNAUTHORIZED
value: "0"
envFrom:
- secretRef:
name: backstage-env-vars
- secretRef:
name: gitea-credentials
- secretRef:
name: argocd-credentials
image: public.ecr.aws/cnoe-io/backstage:rc1
name: backstage
ports:
- containerPort: 7007
name: http
volumeMounts:
- mountPath: /app/config
name: backstage-config
readOnly: true
serviceAccountName: backstage
volumes:
- name: backstage-config
projected:
sources:
- configMap:
items:
- key: app-config.yaml
path: app-config.yaml
name: backstage-config
- secret:
items:
- key: k8s-config.yaml
path: k8s-config.yaml
name: k8s-config
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: postgresql
name: postgresql
namespace: backstage
spec:
replicas: 1
selector:
matchLabels:
app: postgresql
serviceName: service-postgresql
template:
metadata:
labels:
app: postgresql
spec:
containers:
- env:
- name: POSTGRES_DB
valueFrom:
secretKeyRef:
name: backstage-env-vars
key: POSTGRES_DB
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: backstage-env-vars
key: POSTGRES_USER
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: backstage-env-vars
key: POSTGRES_PASSWORD
image: docker.io/library/postgres:15.3-alpine3.18
name: postgres
ports:
- containerPort: 5432
name: postgresdb
resources:
limits:
memory: 500Mi
requests:
cpu: 100m
memory: 300Mi
volumeMounts:
- name: data
mountPath: /var/lib/postgresql/data
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: "500Mi"
---
apiVersion: generators.external-secrets.io/v1alpha1
kind: Password
metadata:
name: backstage
namespace: backstage
spec:
length: 36
digits: 5
symbols: 5
symbolCharacters: "/-+"
noUpper: false
allowRepeat: true
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: backstage-oidc
namespace: backstage
spec:
secretStoreRef:
name: keycloak
kind: ClusterSecretStore
refreshInterval: "0"
target:
name: backstage-env-vars
template:
engineVersion: v2
data:
BACKSTAGE_FRONTEND_URL: https://cnoe.localtest.me:8443/backstage
POSTGRES_HOST: postgresql.backstage.svc.cluster.local
POSTGRES_PORT: '5432'
POSTGRES_DB: backstage
POSTGRES_USER: backstage
POSTGRES_PASSWORD: "{{.POSTGRES_PASSWORD}}"
ARGO_WORKFLOWS_URL: https://cnoe.localtest.me:8443/argo-workflows
KEYCLOAK_NAME_METADATA: https://cnoe.localtest.me:8443/keycloak/realms/cnoe/.well-known/openid-configuration
KEYCLOAK_CLIENT_SECRET: "{{.BACKSTAGE_CLIENT_SECRET}}"
ARGOCD_AUTH_TOKEN: "argocd.token={{.ARGOCD_SESSION_TOKEN}}"
ARGO_CD_URL: 'https://argocd-server.argocd.svc.cluster.local/api/v1/'
data:
- secretKey: ARGOCD_SESSION_TOKEN
remoteRef:
key: keycloak-clients
property: ARGOCD_SESSION_TOKEN
- secretKey: BACKSTAGE_CLIENT_SECRET
remoteRef:
key: keycloak-clients
property: BACKSTAGE_CLIENT_SECRET
dataFrom:
- sourceRef:
generatorRef:
apiVersion: generators.external-secrets.io/v1alpha1
kind: Password
name: backstage
rewrite:
- transform:
template: "POSTGRES_PASSWORD"
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: gitea-credentials
namespace: backstage
spec:
secretStoreRef:
name: gitea
kind: ClusterSecretStore
refreshInterval: "0"
target:
name: gitea-credentials
data:
- secretKey: GITEA_USERNAME
remoteRef:
key: gitea-credential
property: username
- secretKey: GITEA_PASSWORD
remoteRef:
key: gitea-credential
property: password
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: backstage
namespace: backstage
spec:
ingressClassName: "nginx"
rules:
- host: localhost
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: backstage
port:
name: http
- host: cnoe.localtest.me
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: backstage
port:
name: http


@ -0,0 +1,71 @@
## Running idpbuilder in Codespaces in Browser
**_NOTE:_** __The steps described below apply to running this implementation in Codespaces in **web browsers** (e.g. Firefox and Chrome).
If you are using Codespaces with the GitHub CLI, the steps described here do not apply to you.__
Let's create an instance of Codespaces.
![img.png](images/codespaces-create.png)
It may take a few minutes for it to be ready. Once it's ready, you can either get the latest release of idpbuilder or build from the main branch.
Build the idpbuilder binary.
- Get the latest release:
```bash
version=$(curl -Ls -o /dev/null -w %{url_effective} https://github.com/cnoe-io/idpbuilder/releases/latest)
version=${version##*/}
wget https://github.com/cnoe-io/idpbuilder/releases/download/${version}/idpbuilder-linux-amd64.tar.gz
tar xzf idpbuilder-linux-amd64.tar.gz
sudo mv ./idpbuilder /usr/local/bin/
```
- Alternatively, build from the main branch
```bash
make build
sudo mv ./idpbuilder /usr/local/bin/
```
Codespaces assigns a random hostname to your specific instance. You need to make sure it is reflected correctly.
The instance host name is available as an environment variable (`CODESPACE_NAME`). Let's use it to set up our host names.
Run the following commands to update the host name and port. The port is set to 443 because this is the port the browser uses to access your instance.
```bash
cd examples/ref-implementation
./replace.sh ${CODESPACE_NAME}-8080.${GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN} 443
cd -
```
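As a quick sanity check before running the script, you can print the host name that will be passed as its first argument. This is a minimal sketch assuming the standard Codespaces environment variables; the fallback values are hypothetical and exist only so the snippet runs outside a Codespace:

```shell
# Fall back to example values when not inside a Codespace (hypothetical defaults).
CODESPACE_NAME="${CODESPACE_NAME:-my-codespace}"
GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN="${GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN:-app.github.dev}"

# This is the host name replace.sh receives as its first argument.
echo "${CODESPACE_NAME}-8080.${GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN}"
```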
Now you are ready to run idpbuilder with the reference implementation.
```bash
idpbuilder create --protocol http \
--host ${CODESPACE_NAME}-8080.${GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN} \
--port 8080 --use-path-routing --package-dir examples/ref-implementation
```
Once idpbuilder finishes bootstrapping, you should see port 8080 forwarded in the Ports tab within Codespaces.
![](images/port.png)
You may get a 404 page after clicking the forwarded address for port 8080. This is completely normal because Backstage may not be ready yet.
Give it a few more minutes and it should redirect you to a Backstage page.
### Accessing UIs
If you'd like to track the progress of the deployment, go to the `/argocd` path and log in with your ArgoCD credentials.
For example, run this command to get the URL for Argo CD:
```bash
echo https://${CODESPACE_NAME}-8080.${GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN}/argocd
```
From here on, you can follow the instructions in the [README](./README.md) file. The only difference is that the URL to access UIs is given by:
```bash
echo https://${CODESPACE_NAME}-8080.${GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN}
```
For example, if you need to access the Argo Workflows UI, instead of going to `https://cnoe.localtest.me:8443/argo`,
go to `https://${CODESPACE_NAME}-8080.${GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN}/argo`.
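To avoid retyping the long domain for each UI, one option is a small shell helper (a sketch, not part of the reference implementation) that maps any UI path to its Codespaces URL:

```shell
# Hypothetical helper: build the Codespaces URL for a given UI path.
codespace_url() {
  echo "https://${CODESPACE_NAME}-8080.${GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN}${1}"
}

# Example values so the snippet runs outside a Codespace (hypothetical).
CODESPACE_NAME="${CODESPACE_NAME:-my-codespace}"
GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN="${GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN:-app.github.dev}"

codespace_url /argo     # Argo Workflows UI
codespace_url /argocd   # Argo CD UI
```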


@ -0,0 +1,21 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: coredns
namespace: argocd
labels:
env: dev
spec:
project: default
source:
repoURL: cnoe://coredns/manifests
targetRevision: HEAD
path: "."
destination:
server: "https://kubernetes.default.svc"
namespace: kube-system
syncPolicy:
syncOptions:
- CreateNamespace=true
automated:
selfHeal: true


@ -0,0 +1,33 @@
# the only purpose of this is to resolve external DNS entries such as `redesigned-bassoon-r4jjwpvv99vhx9gp-8080.app.github.dev` to a cluster IP
# normally, `redesigned-bassoon-r4jjwpvv99vhx9gp-8080.app.github.dev` resolves to 127.0.0.1 and thus oidc endpoint configurations cannot be obtained.
# in addition, we need to ensure traffic does not leave the cluster when not necessary.
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
rewrite name cnoe.localtest.me ingress-nginx-controller.ingress-nginx.svc.cluster.local
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}


@ -0,0 +1,22 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: crossplane-compositions
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
repoURL: cnoe://crossplane-compositions/manifests
targetRevision: HEAD
path: "."
directory:
recurse: true
destination:
server: "https://kubernetes.default.svc"
namespace: crossplane-system
syncPolicy:
automated: {}


@ -0,0 +1,76 @@
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
name: xobjectstorages.awsblueprints.io
spec:
claimNames:
kind: ObjectStorage
plural: objectstorages
group: awsblueprints.io
names:
kind: XObjectStorage
plural: xobjectstorages
connectionSecretKeys:
- region
- bucket-name
- s3-put-policy
versions:
- name: v1alpha1
served: true
referenceable: true
schema:
openAPIV3Schema:
properties:
spec:
description: ObjectStorageSpec defines the desired state of ObjectStorage
properties:
resourceConfig:
description: ResourceConfig defines general properties of this AWS
resource.
properties:
deletionPolicy:
description: Defaults to Delete
enum:
- Delete
- Orphan
type: string
name:
description: Set the name of this resource in AWS to the value
provided by this field.
type: string
providerConfigName:
type: string
region:
type: string
tags:
items:
properties:
key:
type: string
value:
type: string
required:
- key
- value
type: object
type: array
required:
- providerConfigName
- region
- tags
type: object
required:
- resourceConfig
type: object
status:
description: ObjectStorageStatus defines the observed state of ObjectStorage
properties:
bucketName:
type: string
bucketArn:
type: string
type: object
type: object


@ -0,0 +1,80 @@
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
name: s3bucket.awsblueprints.io
labels:
awsblueprints.io/provider: aws
awsblueprints.io/environment: dev
s3.awsblueprints.io/configuration: standard
spec:
writeConnectionSecretsToNamespace: crossplane-system
compositeTypeRef:
apiVersion: awsblueprints.io/v1alpha1
kind: XObjectStorage
patchSets:
- name: common-fields
patches:
- type: FromCompositeFieldPath
fromFieldPath: spec.resourceConfig.providerConfigName
toFieldPath: spec.providerConfigRef.name
- type: FromCompositeFieldPath
fromFieldPath: spec.resourceConfig.deletionPolicy
toFieldPath: spec.deletionPolicy
- type: FromCompositeFieldPath
fromFieldPath: spec.resourceConfig.region
toFieldPath: spec.forProvider.region
- type: FromCompositeFieldPath
fromFieldPath: spec.resourceConfig.name
toFieldPath: metadata.annotations[crossplane.io/external-name]
resources:
- name: s3-bucket
connectionDetails:
- name: bucket-name
fromConnectionSecretKey: endpoint
- name: region
fromConnectionSecretKey: region
base:
apiVersion: s3.aws.crossplane.io/v1beta1
kind: Bucket
spec:
deletionPolicy: Delete
forProvider:
objectOwnership: BucketOwnerEnforced
publicAccessBlockConfiguration:
blockPublicPolicy: true
restrictPublicBuckets: true
serverSideEncryptionConfiguration:
rules:
- applyServerSideEncryptionByDefault:
sseAlgorithm: AES256
tagging:
tagSet:
- key: cnoe
value: "1"
patches:
- type: PatchSet
patchSetName: common-fields
- type: FromCompositeFieldPath
fromFieldPath: spec.resourceConfig.tags
toFieldPath: spec.forProvider.tagging.tagSet
policy:
mergeOptions:
appendSlice: true
keepMapValues: true
- type: FromCompositeFieldPath
fromFieldPath: spec.resourceConfig.region
toFieldPath: spec.forProvider.locationConstraint
- fromFieldPath: spec.writeConnectionSecretToRef.namespace
toFieldPath: spec.writeConnectionSecretToRef.namespace
- type: ToCompositeFieldPath
fromFieldPath: metadata.annotations[crossplane.io/external-name]
toFieldPath: status.bucketName
- type: ToCompositeFieldPath
fromFieldPath: status.atProvider.arn
toFieldPath: status.bucketArn
- fromFieldPath: metadata.uid
toFieldPath: spec.writeConnectionSecretToRef.name
transforms:
- type: string
string:
fmt: "%s-bucket"


@ -0,0 +1,22 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: crossplane-providers
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
repoURL: cnoe://crossplane-providers
targetRevision: HEAD
path: "."
destination:
server: "https://kubernetes.default.svc"
namespace: crossplane-system
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true


@ -0,0 +1,6 @@
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
name: provider-aws
spec:
package: xpkg.upbound.io/crossplane-contrib/provider-aws:v0.48.0


@ -0,0 +1,14 @@
apiVersion: aws.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
name: default
annotations:
argocd.argoproj.io/sync-wave: "20"
argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
credentials:
source: Secret
secretRef:
namespace: crossplane-system
name: local-secret
key: creds


@ -0,0 +1,11 @@
apiVersion: v1
kind: Secret
metadata:
name: local-secret
namespace: crossplane-system
stringData:
creds: |
[default]
aws_access_key_id = replaceme
aws_secret_access_key = replaceme
aws_session_token = replacemeifneeded


@ -0,0 +1,26 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: crossplane
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
repoURL: 'https://charts.crossplane.io/stable'
targetRevision: 1.15.0
helm:
releaseName: crossplane
chart: crossplane
destination:
server: 'https://kubernetes.default.svc'
namespace: crossplane-system
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true


@ -0,0 +1,23 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: external-secrets
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
destination:
namespace: external-secrets
server: "https://kubernetes.default.svc"
source:
repoURL: cnoe://external-secrets/manifests
targetRevision: HEAD
path: "."
project: default
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true


@ -0,0 +1,12 @@
#!/bin/bash
set -e
INSTALL_YAML="manifests/install.yaml"
CHART_VERSION="0.9.11"
echo "# EXTERNAL SECRETS INSTALL RESOURCES" > ${INSTALL_YAML}
echo "# This file is auto-generated with 'examples/ref-implementation/external-secrets/generate-manifests.sh'" >> ${INSTALL_YAML}
helm repo add external-secrets --force-update https://charts.external-secrets.io
helm repo update
helm template --namespace external-secrets external-secrets external-secrets/external-secrets -f values.yaml --version ${CHART_VERSION} >> ${INSTALL_YAML}

(File diff suppressed because it is too large.)

(Eight binary image files added, not shown; sizes: 185 KiB, 182 KiB, 42 KiB, 88 KiB, 119 KiB, 66 KiB, 125 KiB, and 6.5 KiB.)


@ -0,0 +1,688 @@
{
"type": "excalidraw",
"version": 2,
"source": "https://excalidraw.com",
"elements": [
{
"id": "yozZorioSE1OUkpHktzVP",
"type": "rectangle",
"x": 727,
"y": 454,
"width": 138,
"height": 68.00000000000001,
"angle": 0,
"strokeColor": "#e03131",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": {
"type": 3
},
"seed": 1193746031,
"version": 164,
"versionNonce": 917424207,
"isDeleted": false,
"boundElements": [
{
"type": "text",
"id": "Qn0U1j1w19_hNzfHMrEHQ"
},
{
"id": "Um8DNgdEeXUjERYx_0rtv",
"type": "arrow"
},
{
"id": "qJj5wVYIiRzV91y3h6Xbi",
"type": "arrow"
},
{
"id": "cE_ucOKJBcWQXtcgaSoPF",
"type": "arrow"
}
],
"updated": 1707246661988,
"link": null,
"locked": false
},
{
"id": "Qn0U1j1w19_hNzfHMrEHQ",
"type": "text",
"x": 760.3499984741211,
"y": 475.5,
"width": 71.30000305175781,
"height": 25,
"angle": 0,
"strokeColor": "#e03131",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 1937750799,
"version": 157,
"versionNonce": 397238721,
"isDeleted": false,
"boundElements": null,
"updated": 1707246500158,
"link": null,
"locked": false,
"text": "ArgoCD",
"fontSize": 20,
"fontFamily": 1,
"textAlign": "center",
"verticalAlign": "middle",
"baseline": 18,
"containerId": "yozZorioSE1OUkpHktzVP",
"originalText": "ArgoCD",
"lineHeight": 1.25
},
{
"type": "rectangle",
"version": 183,
"versionNonce": 512282671,
"isDeleted": false,
"id": "z1vPsJxFaPRhe0i1Ck0Je",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"angle": 0,
"x": 418,
"y": 351,
"strokeColor": "#f08c00",
"backgroundColor": "transparent",
"width": 138,
"height": 68.00000000000001,
"seed": 1492127791,
"groupIds": [],
"frameId": null,
"roundness": {
"type": 3
},
"boundElements": [
{
"type": "text",
"id": "DyaGAvMwuxh_cnuhL8d3P"
},
{
"id": "ahkUXt0AQa8URVqUCdwu5",
"type": "arrow"
},
{
"id": "Um8DNgdEeXUjERYx_0rtv",
"type": "arrow"
}
],
"updated": 1707246694929,
"link": null,
"locked": false
},
{
"type": "text",
"version": 186,
"versionNonce": 1954345551,
"isDeleted": false,
"id": "DyaGAvMwuxh_cnuhL8d3P",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"angle": 0,
"x": 434.9583320617676,
"y": 372.5,
"strokeColor": "#f08c00",
"backgroundColor": "transparent",
"width": 104.08333587646484,
"height": 25,
"seed": 1610363471,
"groupIds": [],
"frameId": null,
"roundness": null,
"boundElements": [],
"updated": 1707246694929,
"link": null,
"locked": false,
"fontSize": 20,
"fontFamily": 1,
"text": "Backstage",
"textAlign": "center",
"verticalAlign": "middle",
"containerId": "z1vPsJxFaPRhe0i1Ck0Je",
"originalText": "Backstage",
"lineHeight": 1.25,
"baseline": 18
},
{
"type": "rectangle",
"version": 205,
"versionNonce": 1736977089,
"isDeleted": false,
"id": "hKolk3HE8f7p7kku0fuAR",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"angle": 0,
"x": 722,
"y": 251,
"strokeColor": "#2f9e44",
"backgroundColor": "transparent",
"width": 138,
"height": 68.00000000000001,
"seed": 1171434639,
"groupIds": [],
"frameId": null,
"roundness": {
"type": 3
},
"boundElements": [
{
"type": "text",
"id": "DpYp_SU3PTt5pGMJEYXeQ"
},
{
"id": "ahkUXt0AQa8URVqUCdwu5",
"type": "arrow"
},
{
"id": "cE_ucOKJBcWQXtcgaSoPF",
"type": "arrow"
}
],
"updated": 1707246657028,
"link": null,
"locked": false
},
{
"type": "text",
"version": 212,
"versionNonce": 420957761,
"isDeleted": false,
"id": "DpYp_SU3PTt5pGMJEYXeQ",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"angle": 0,
"x": 763.1333332061768,
"y": 272.5,
"strokeColor": "#2f9e44",
"backgroundColor": "transparent",
"width": 55.733333587646484,
"height": 25,
"seed": 1661747887,
"groupIds": [],
"frameId": null,
"roundness": null,
"boundElements": [],
"updated": 1707246497718,
"link": null,
"locked": false,
"fontSize": 20,
"fontFamily": 1,
"text": "Gitea",
"textAlign": "center",
"verticalAlign": "middle",
"containerId": "hKolk3HE8f7p7kku0fuAR",
"originalText": "Gitea",
"lineHeight": 1.25,
"baseline": 18
},
{
"type": "rectangle",
"version": 192,
"versionNonce": 567119311,
"isDeleted": false,
"id": "A_LZS0mn561UWD01SaaNw",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"angle": 0,
"x": 932,
"y": 353,
"strokeColor": "#9c36b5",
"backgroundColor": "transparent",
"width": 138,
"height": 68.00000000000001,
"seed": 639538113,
"groupIds": [],
"frameId": null,
"roundness": {
"type": 3
},
"boundElements": [
{
"type": "text",
"id": "pFG3mG67d8W-gP9a7l27j"
},
{
"id": "qJj5wVYIiRzV91y3h6Xbi",
"type": "arrow"
}
],
"updated": 1707246620246,
"link": null,
"locked": false
},
{
"type": "text",
"version": 210,
"versionNonce": 1183409057,
"isDeleted": false,
"id": "pFG3mG67d8W-gP9a7l27j",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"angle": 0,
"x": 947.6500015258789,
"y": 374.5,
"strokeColor": "#9c36b5",
"backgroundColor": "transparent",
"width": 106.69999694824219,
"height": 25,
"seed": 1601729441,
"groupIds": [],
"frameId": null,
"roundness": null,
"boundElements": [],
"updated": 1707246498719,
"link": null,
"locked": false,
"fontSize": 20,
"fontFamily": 1,
"text": "Kubernetes",
"textAlign": "center",
"verticalAlign": "middle",
"containerId": "A_LZS0mn561UWD01SaaNw",
"originalText": "Kubernetes",
"lineHeight": 1.25,
"baseline": 18
},
{
"id": "ahkUXt0AQa8URVqUCdwu5",
"type": "arrow",
"x": 561,
"y": 389.03022718221666,
"width": 154,
"height": 102.11103654737104,
"angle": 0,
"strokeColor": "#f08c00",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": {
"type": 2
},
"seed": 1646169281,
"version": 238,
"versionNonce": 1284654255,
"isDeleted": false,
"boundElements": null,
"updated": 1707246701910,
"link": null,
"locked": false,
"points": [
[
0,
0
],
[
81,
-32.03022718221666
],
[
154,
-102.11103654737104
]
],
"lastCommittedPoint": null,
"startBinding": {
"elementId": "z1vPsJxFaPRhe0i1Ck0Je",
"focus": 0.5432390553840177,
"gap": 5
},
"endBinding": {
"elementId": "hKolk3HE8f7p7kku0fuAR",
"focus": 0.7087101937049524,
"gap": 7
},
"startArrowhead": null,
"endArrowhead": "arrow"
},
{
"type": "arrow",
"version": 337,
"versionNonce": 2107204335,
"isDeleted": false,
"id": "Um8DNgdEeXUjERYx_0rtv",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"angle": 0,
"x": 565.0411501895638,
"y": 389.1698221307173,
"strokeColor": "#f08c00",
"backgroundColor": "transparent",
"width": 153.9999999999999,
"height": 103.34207184944046,
"seed": 1365398817,
"groupIds": [],
"frameId": null,
"roundness": {
"type": 2
},
"boundElements": [],
"updated": 1707246699212,
"link": null,
"locked": false,
"startBinding": {
"elementId": "z1vPsJxFaPRhe0i1Ck0Je",
"focus": -0.45382037830581345,
"gap": 9.041150189563837
},
"endBinding": {
"elementId": "yozZorioSE1OUkpHktzVP",
"focus": -0.8066378321183331,
"gap": 7.958849810436277
},
"lastCommittedPoint": null,
"startArrowhead": null,
"endArrowhead": "arrow",
"points": [
[
0,
0
],
[
94.95884981043616,
39.83017786928269
],
[
153.9999999999999,
103.34207184944046
]
]
},
{
"id": "XAiE7TdBFNjm7rN5XwJO2",
"type": "text",
"x": 508,
"y": 297,
"width": 164.89999389648438,
"height": 25,
"angle": 0,
"strokeColor": "#f08c00",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 1475743759,
"version": 54,
"versionNonce": 201561807,
"isDeleted": false,
"boundElements": null,
"updated": 1707246643630,
"link": null,
"locked": false,
"text": "Create Git Repo",
"fontSize": 20,
"fontFamily": 1,
"textAlign": "left",
"verticalAlign": "top",
"baseline": 18,
"containerId": null,
"originalText": "Create Git Repo",
"lineHeight": 1.25
},
{
"id": "Wtfg9wiBcJ8qgM5sJ1Rgy",
"type": "text",
"x": 522,
"y": 444,
"width": 159.28334045410156,
"height": 50,
"angle": 0,
"strokeColor": "#f08c00",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 1002133263,
"version": 60,
"versionNonce": 1766483329,
"isDeleted": false,
"boundElements": null,
"updated": 1707246645667,
"link": null,
"locked": false,
"text": "Create ArgoCD \nApplication",
"fontSize": 20,
"fontFamily": 1,
"textAlign": "left",
"verticalAlign": "top",
"baseline": 43,
"containerId": null,
"originalText": "Create ArgoCD \nApplication",
"lineHeight": 1.25
},
{
"id": "qJj5wVYIiRzV91y3h6Xbi",
"type": "arrow",
"x": 873,
"y": 489,
"width": 114,
"height": 66,
"angle": 0,
"strokeColor": "#e03131",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": {
"type": 2
},
"seed": 630215073,
"version": 118,
"versionNonce": 1585297729,
"isDeleted": false,
"boundElements": null,
"updated": 1707246649748,
"link": null,
"locked": false,
"points": [
[
0,
0
],
[
78,
-6
],
[
114,
-66
]
],
"lastCommittedPoint": null,
"startBinding": {
"elementId": "yozZorioSE1OUkpHktzVP",
"focus": 0.17612524461839527,
"gap": 8
},
"endBinding": {
"elementId": "A_LZS0mn561UWD01SaaNw",
"focus": -0.08501118568232663,
"gap": 2
},
"startArrowhead": null,
"endArrowhead": "arrow"
},
{
"id": "cE_ucOKJBcWQXtcgaSoPF",
"type": "arrow",
"x": 794,
"y": 449,
"width": 2,
"height": 127,
"angle": 0,
"strokeColor": "#e03131",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": {
"type": 2
},
"seed": 1514085633,
"version": 138,
"versionNonce": 842839791,
"isDeleted": false,
"boundElements": null,
"updated": 1707246662294,
"link": null,
"locked": false,
"points": [
[
0,
0
],
[
2,
-127
]
],
"lastCommittedPoint": null,
"startBinding": {
"elementId": "yozZorioSE1OUkpHktzVP",
"focus": -0.037594836371871804,
"gap": 5
},
"endBinding": {
"elementId": "hKolk3HE8f7p7kku0fuAR",
"focus": -0.08028535839655757,
"gap": 3
},
"startArrowhead": null,
"endArrowhead": "arrow"
},
{
"id": "8nULB38EPuEIAjdNdyYp0",
"type": "text",
"x": 991,
"y": 479,
"width": 62.13333511352539,
"height": 25,
"angle": 0,
"strokeColor": "#e03131",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 1148815169,
"version": 7,
"versionNonce": 1706607119,
"isDeleted": false,
"boundElements": null,
"updated": 1707246674659,
"link": null,
"locked": false,
"text": "Deploy",
"fontSize": 20,
"fontFamily": 1,
"textAlign": "left",
"verticalAlign": "top",
"baseline": 18,
"containerId": null,
"originalText": "Deploy",
"lineHeight": 1.25
},
{
"id": "dxtUjQKSuIlFaCv7spFWr",
"type": "text",
"x": 809,
"y": 377,
"width": 35.11666488647461,
"height": 25,
"angle": 0,
"strokeColor": "#e03131",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 393005647,
"version": 29,
"versionNonce": 1356449295,
"isDeleted": false,
"boundElements": null,
"updated": 1707246685968,
"link": null,
"locked": false,
"text": "Pull",
"fontSize": 20,
"fontFamily": 1,
"textAlign": "left",
"verticalAlign": "top",
"baseline": 18,
"containerId": null,
"originalText": "Pull",
"lineHeight": 1.25
}
],
"appState": {
"gridSize": null,
"viewBackgroundColor": "#ffffff"
},
"files": {}
}

@@ -0,0 +1,21 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: keycloak
namespace: argocd
labels:
example: ref-implementation
spec:
destination:
namespace: keycloak
server: "https://kubernetes.default.svc"
source:
repoURL: cnoe://keycloak/manifests
targetRevision: HEAD
path: "."
project: default
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true

@@ -0,0 +1,30 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: keycloak-ingress-localhost
namespace: keycloak
annotations:
argocd.argoproj.io/sync-wave: "100"
spec:
ingressClassName: "nginx"
rules:
- host: localhost
http:
paths:
- path: /keycloak
pathType: ImplementationSpecific
backend:
service:
name: keycloak
port:
name: http
- host: cnoe.localtest.me
http:
paths:
- path: /keycloak
pathType: ImplementationSpecific
backend:
service:
name: keycloak
port:
name: http

@@ -0,0 +1,164 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: keycloak
---
apiVersion: v1
kind: Service
metadata:
name: keycloak
labels:
app: keycloak
spec:
ports:
- name: http
port: 8080
targetPort: 8080
selector:
app: keycloak
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: keycloak
name: keycloak
namespace: keycloak
annotations:
argocd.argoproj.io/sync-wave: "10"
spec:
replicas: 1
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- args:
- start-dev
env:
- name: KEYCLOAK_ADMIN
value: cnoe-admin
- name: KEYCLOAK_LOGLEVEL
value: ALL
- name: QUARKUS_TRANSACTION_MANAGER_ENABLE_RECOVERY
value: 'true'
envFrom:
- secretRef:
name: keycloak-config
image: quay.io/keycloak/keycloak:22.0.3
name: keycloak
ports:
- containerPort: 8080
name: http
readinessProbe:
httpGet:
path: /keycloak/realms/master
port: 8080
volumeMounts:
- mountPath: /opt/keycloak/conf
name: keycloak-config
readOnly: true
volumes:
- configMap:
name: keycloak-config
name: keycloak-config
---
apiVersion: v1
data:
keycloak.conf: |
# Database
# The database vendor.
db=postgres
# The username of the database user.
db-url=jdbc:postgresql://postgresql.keycloak.svc.cluster.local:5432/postgres
# The proxy address forwarding mode if the server is behind a reverse proxy.
proxy=edge
# hostname configuration
hostname=cnoe.localtest.me
hostname-port=8443
http-relative-path=keycloak
# the admin url requires its own configuration to reflect correct url
hostname-admin=cnoe.localtest.me:8443
hostname-debug=true
# this should only be allowed in development. NEVER in production.
hostname-strict=false
hostname-strict-backchannel=false
kind: ConfigMap
metadata:
name: keycloak-config
namespace: keycloak
---
apiVersion: v1
kind: Service
metadata:
labels:
app: postgresql
name: postgresql
namespace: keycloak
spec:
clusterIP: None
ports:
- name: postgres
port: 5432
selector:
app: postgresql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: postgresql
name: postgresql
namespace: keycloak
spec:
replicas: 1
selector:
matchLabels:
app: postgresql
serviceName: service-postgresql
template:
metadata:
labels:
app: postgresql
spec:
containers:
- envFrom:
- secretRef:
name: keycloak-config
image: docker.io/library/postgres:15.3-alpine3.18
name: postgres
ports:
- containerPort: 5432
name: postgresdb
resources:
limits:
memory: 500Mi
requests:
cpu: 100m
memory: 300Mi
volumeMounts:
- name: data
mountPath: /var/lib/postgresql/data
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: "500Mi"

@@ -0,0 +1,366 @@
# resources here configure the Keycloak instance for SSO
apiVersion: v1
kind: ServiceAccount
metadata:
name: keycloak-config
namespace: keycloak
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: keycloak-config
namespace: keycloak
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: keycloak-config
namespace: keycloak
subjects:
- kind: ServiceAccount
name: keycloak-config
namespace: keycloak
roleRef:
kind: Role
name: keycloak-config
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: keycloak-config
namespace: argocd
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: keycloak-config
namespace: argocd
subjects:
- kind: ServiceAccount
name: keycloak-config
namespace: keycloak
roleRef:
kind: Role
name: keycloak-config
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
name: config-job
namespace: keycloak
data:
client-scope-groups-payload.json: |
{
"name": "groups",
"description": "groups a user belongs to",
"attributes": {
"consent.screen.text": "Access to groups a user belongs to.",
"display.on.consent.screen": "true",
"include.in.token.scope": "true",
"gui.order": ""
},
"type": "default",
"protocol": "openid-connect"
}
group-admin-payload.json: |
{"name":"admin"}
group-base-user-payload.json: |
{"name":"base-user"}
group-mapper-payload.json: |
{
"protocol": "openid-connect",
"protocolMapper": "oidc-group-membership-mapper",
"name": "groups",
"config": {
"claim.name": "groups",
"full.path": "false",
"id.token.claim": "true",
"access.token.claim": "true",
"userinfo.token.claim": "true"
}
}
realm-payload.json: |
{"realm":"cnoe","enabled":true}
user-password.json: |
{
"temporary": false,
"type": "password",
"value": "${USER1_PASSWORD}"
}
user-user1.json: |
{
"username": "user1",
"email": "",
"firstName": "user",
"lastName": "one",
"requiredActions": [],
"emailVerified": false,
"groups": [
"/admin"
],
"enabled": true
}
user-user2.json: |
{
"username": "user2",
"email": "",
"firstName": "user",
"lastName": "two",
"requiredActions": [],
"emailVerified": false,
"groups": [
"/base-user"
],
"enabled": true
}
argo-client-payload.json: |
{
"protocol": "openid-connect",
"clientId": "argo-workflows",
"name": "Argo Workflows Client",
"description": "Used for Argo Workflows SSO",
"publicClient": false,
"authorizationServicesEnabled": false,
"serviceAccountsEnabled": false,
"implicitFlowEnabled": false,
"directAccessGrantsEnabled": true,
"standardFlowEnabled": true,
"frontchannelLogout": true,
"attributes": {
"saml_idp_initiated_sso_url_name": "",
"oauth2.device.authorization.grant.enabled": false,
"oidc.ciba.grant.enabled": false
},
"alwaysDisplayInConsole": false,
"rootUrl": "",
"baseUrl": "",
"redirectUris": [
"https://cnoe.localtest.me:8443/argo-workflows/oauth2/callback"
],
"webOrigins": [
"/*"
]
}
backstage-client-payload.json: |
{
"protocol": "openid-connect",
"clientId": "backstage",
"name": "Backstage Client",
"description": "Used for Backstage SSO",
"publicClient": false,
"authorizationServicesEnabled": false,
"serviceAccountsEnabled": false,
"implicitFlowEnabled": false,
"directAccessGrantsEnabled": true,
"standardFlowEnabled": true,
"frontchannelLogout": true,
"attributes": {
"saml_idp_initiated_sso_url_name": "",
"oauth2.device.authorization.grant.enabled": false,
"oidc.ciba.grant.enabled": false
},
"alwaysDisplayInConsole": false,
"rootUrl": "",
"baseUrl": "",
"redirectUris": [
"https://cnoe.localtest.me:8443/api/auth/keycloak-oidc/handler/frame"
],
"webOrigins": [
"/*"
]
}
---
apiVersion: batch/v1
kind: Job
metadata:
name: config
namespace: keycloak
annotations:
argocd.argoproj.io/hook: PostSync
spec:
template:
metadata:
generateName: config
spec:
serviceAccountName: keycloak-config
restartPolicy: Never
volumes:
- name: keycloak-config
secret:
secretName: keycloak-config
- name: config-payloads
configMap:
name: config-job
containers:
- name: kubectl
image: docker.io/library/ubuntu:22.04
volumeMounts:
- name: keycloak-config
readOnly: true
mountPath: "/var/secrets/"
- name: config-payloads
readOnly: true
mountPath: "/var/config/"
command: ["/bin/bash", "-c"]
args:
- |
#! /bin/bash
set -ex -o pipefail
apt -qq update && apt -qq install curl jq -y
ADMIN_PASSWORD=$(cat /var/secrets/KEYCLOAK_ADMIN_PASSWORD)
USER1_PASSWORD=$(cat /var/secrets/USER_PASSWORD)
KEYCLOAK_URL=http://keycloak.keycloak.svc.cluster.local:8080/keycloak
KEYCLOAK_TOKEN=$(curl -sS --fail-with-body -X POST -H "Content-Type: application/x-www-form-urlencoded" \
--data-urlencode "username=cnoe-admin" \
--data-urlencode "password=${ADMIN_PASSWORD}" \
--data-urlencode "grant_type=password" \
--data-urlencode "client_id=admin-cli" \
${KEYCLOAK_URL}/realms/master/protocol/openid-connect/token | jq -e -r '.access_token')
set +e
curl --fail-with-body -H "Authorization: bearer ${KEYCLOAK_TOKEN}" "${KEYCLOAK_URL}/admin/realms/cnoe" &> /dev/null
if [ $? -eq 0 ]; then
exit 0
fi
set -e
curl -sS -LO "https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl"
chmod +x kubectl
echo "creating cnoe realm and groups"
curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" \
-X POST --data @/var/config/realm-payload.json \
${KEYCLOAK_URL}/admin/realms
curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" \
-X POST --data @/var/config/client-scope-groups-payload.json \
${KEYCLOAK_URL}/admin/realms/cnoe/client-scopes
curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" \
-X POST --data @/var/config/group-admin-payload.json \
${KEYCLOAK_URL}/admin/realms/cnoe/groups
curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" \
-X POST --data @/var/config/group-base-user-payload.json \
${KEYCLOAK_URL}/admin/realms/cnoe/groups
# Create scope mapper
echo 'adding group claim to tokens'
CLIENT_SCOPE_GROUPS_ID=$(curl -sS -H "Content-Type: application/json" -H "Authorization: bearer ${KEYCLOAK_TOKEN}" -X GET ${KEYCLOAK_URL}/admin/realms/cnoe/client-scopes | jq -e -r '.[] | select(.name == "groups") | .id')
curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" \
-X POST --data @/var/config/group-mapper-payload.json \
${KEYCLOAK_URL}/admin/realms/cnoe/client-scopes/${CLIENT_SCOPE_GROUPS_ID}/protocol-mappers/models
echo "creating test users"
curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" \
-X POST --data @/var/config/user-user1.json \
${KEYCLOAK_URL}/admin/realms/cnoe/users
curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" \
-X POST --data @/var/config/user-user2.json \
${KEYCLOAK_URL}/admin/realms/cnoe/users
USER1ID=$(curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" "${KEYCLOAK_URL}/admin/realms/cnoe/users?lastName=one" | jq -r '.[0].id')
USER2ID=$(curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" "${KEYCLOAK_URL}/admin/realms/cnoe/users?lastName=two" | jq -r '.[0].id')
echo "setting user passwords"
jq -r --arg pass ${USER1_PASSWORD} '.value = $pass' /var/config/user-password.json > /tmp/user-password-to-be-applied.json
curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" \
-X PUT --data @/tmp/user-password-to-be-applied.json \
${KEYCLOAK_URL}/admin/realms/cnoe/users/${USER1ID}/reset-password
curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" \
-X PUT --data @/tmp/user-password-to-be-applied.json \
${KEYCLOAK_URL}/admin/realms/cnoe/users/${USER2ID}/reset-password
echo "creating Argo Workflows client"
curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" \
-X POST --data @/var/config/argo-client-payload.json \
${KEYCLOAK_URL}/admin/realms/cnoe/clients
CLIENT_ID=$(curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" \
-X GET ${KEYCLOAK_URL}/admin/realms/cnoe/clients | jq -e -r '.[] | select(.clientId == "argo-workflows") | .id')
CLIENT_SCOPE_GROUPS_ID=$(curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" \
-X GET ${KEYCLOAK_URL}/admin/realms/cnoe/client-scopes | jq -e -r '.[] | select(.name == "groups") | .id')
curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" \
-X PUT ${KEYCLOAK_URL}/admin/realms/cnoe/clients/${CLIENT_ID}/default-client-scopes/${CLIENT_SCOPE_GROUPS_ID}
ARGO_WORKFLOWS_CLIENT_SECRET=$(curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" \
-X GET ${KEYCLOAK_URL}/admin/realms/cnoe/clients/${CLIENT_ID} | jq -e -r '.secret')
echo "creating Backstage client"
curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" \
-X POST --data @/var/config/backstage-client-payload.json \
${KEYCLOAK_URL}/admin/realms/cnoe/clients
CLIENT_ID=$(curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" \
-X GET ${KEYCLOAK_URL}/admin/realms/cnoe/clients | jq -e -r '.[] | select(.clientId == "backstage") | .id')
CLIENT_SCOPE_GROUPS_ID=$(curl -sS -H "Content-Type: application/json" -H "Authorization: bearer ${KEYCLOAK_TOKEN}" -X GET ${KEYCLOAK_URL}/admin/realms/cnoe/client-scopes | jq -e -r '.[] | select(.name == "groups") | .id')
curl -sS -H "Content-Type: application/json" -H "Authorization: bearer ${KEYCLOAK_TOKEN}" -X PUT ${KEYCLOAK_URL}/admin/realms/cnoe/clients/${CLIENT_ID}/default-client-scopes/${CLIENT_SCOPE_GROUPS_ID}
BACKSTAGE_CLIENT_SECRET=$(curl -sS -H "Content-Type: application/json" \
-H "Authorization: bearer ${KEYCLOAK_TOKEN}" \
-X GET ${KEYCLOAK_URL}/admin/realms/cnoe/clients/${CLIENT_ID} | jq -e -r '.secret')
ARGOCD_PASSWORD=$(./kubectl -n argocd get secret argocd-initial-admin-secret -o go-template='{{.data.password | base64decode }}')
ARGOCD_SESSION_TOKEN=$(curl -k -sS http://argocd-server.argocd.svc.cluster.local:443/api/v1/session -H 'Content-Type: application/json' -d "{\"username\":\"admin\",\"password\":\"${ARGOCD_PASSWORD}\"}" | jq -r .token)
echo \
"apiVersion: v1
kind: Secret
metadata:
name: keycloak-clients
namespace: keycloak
type: Opaque
stringData:
ARGO_WORKFLOWS_CLIENT_SECRET: ${ARGO_WORKFLOWS_CLIENT_SECRET}
ARGO_WORKFLOWS_CLIENT_ID: argo-workflows
ARGOCD_SESSION_TOKEN: ${ARGOCD_SESSION_TOKEN}
BACKSTAGE_CLIENT_SECRET: ${BACKSTAGE_CLIENT_SECRET}
BACKSTAGE_CLIENT_ID: backstage
" > /tmp/secret.yaml
./kubectl apply -f /tmp/secret.yaml
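The `jq --arg` templating used above to inject the generated user password can be exercised on its own. A minimal sketch, using a hypothetical scratch path and a throwaway password rather than the real generated secret:

```shell
# Reproduce the password-templating step from the config job.
# /tmp paths and the password value are illustrative only.
cat > /tmp/user-password.json <<'EOF'
{
  "temporary": false,
  "type": "password",
  "value": "${USER1_PASSWORD}"
}
EOF
USER1_PASSWORD='s3cret-demo'
# jq replaces the .value placeholder with the shell variable's contents.
jq -r --arg pass "${USER1_PASSWORD}" '.value = $pass' /tmp/user-password.json > /tmp/user-password-applied.json
cat /tmp/user-password-applied.json
```

Passing the password via `--arg` keeps it out of the jq program text, so no shell quoting of the secret value can corrupt the JSON.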

@@ -0,0 +1,179 @@
apiVersion: generators.external-secrets.io/v1alpha1
kind: Password
metadata:
name: keycloak
namespace: keycloak
spec:
length: 36
digits: 5
symbols: 5
symbolCharacters: "/-+"
noUpper: false
allowRepeat: true
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: keycloak-config
namespace: keycloak
spec:
refreshInterval: "0"
target:
name: keycloak-config
template:
metadata:
labels:
cnoe.io/cli-secret: "true"
cnoe.io/package-name: keycloak
engineVersion: v2
data:
KEYCLOAK_ADMIN_PASSWORD: "{{.KEYCLOAK_ADMIN_PASSWORD}}"
KC_DB_USERNAME: keycloak
KC_DB_PASSWORD: "{{.KC_DB_PASSWORD}}"
POSTGRES_DB: keycloak
POSTGRES_USER: keycloak
POSTGRES_PASSWORD: "{{.KC_DB_PASSWORD}}"
USER_PASSWORD: "{{.USER_PASSWORD}}"
dataFrom:
- sourceRef:
generatorRef:
apiVersion: generators.external-secrets.io/v1alpha1
kind: Password
name: keycloak
rewrite:
- transform:
template: "KEYCLOAK_ADMIN_PASSWORD"
- sourceRef:
generatorRef:
apiVersion: generators.external-secrets.io/v1alpha1
kind: Password
name: keycloak
rewrite:
- transform:
template: "KC_DB_PASSWORD"
- sourceRef:
generatorRef:
apiVersion: generators.external-secrets.io/v1alpha1
kind: Password
name: keycloak
rewrite:
- transform:
template: "USER_PASSWORD"
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: eso-store
namespace: keycloak
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: keycloak
name: eso-store
rules:
- apiGroups: [""]
resources:
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- authorization.k8s.io
resources:
- selfsubjectrulesreviews
verbs:
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: eso-store
namespace: keycloak
subjects:
- kind: ServiceAccount
name: eso-store
namespace: keycloak
roleRef:
kind: Role
name: eso-store
apiGroup: rbac.authorization.k8s.io
---
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
name: keycloak
spec:
provider:
kubernetes:
remoteNamespace: keycloak
server:
caProvider:
type: ConfigMap
name: kube-root-ca.crt
namespace: keycloak
key: ca.crt
auth:
serviceAccount:
name: eso-store
namespace: keycloak
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: eso-store
namespace: gitea
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: eso-store
namespace: gitea
rules:
- apiGroups: [""]
resources:
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- authorization.k8s.io
resources:
- selfsubjectrulesreviews
verbs:
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: eso-store
namespace: gitea
subjects:
- kind: ServiceAccount
name: eso-store
namespace: gitea
roleRef:
kind: Role
name: eso-store
apiGroup: rbac.authorization.k8s.io
---
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
name: gitea
spec:
provider:
kubernetes:
remoteNamespace: gitea
server:
caProvider:
type: ConfigMap
name: kube-root-ca.crt
namespace: gitea
key: ca.crt
auth:
serviceAccount:
name: eso-store
namespace: gitea

@@ -0,0 +1,29 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: metric-server
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
repoURL: https://kubernetes-sigs.github.io/metrics-server
targetRevision: 3.12.1
helm:
releaseName: metrics-server
values: |
args:
- --kubelet-insecure-tls #required for kind/minikube
chart: metrics-server
destination:
server: 'https://kubernetes.default.svc'
namespace: kube-system
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true

ref-implementation/replace.sh (executable file)

@@ -0,0 +1,36 @@
#!/bin/bash
# This script replaces the hostname and port used by this implementation.
# Intended for environments such as Codespaces, where the external host and port must be updated to access in-cluster resources.
set -e
# Check that both the new host and the new port are provided as arguments
if [ "$#" -ne 2 ]; then
echo "Usage: $0 NEW_HOST NEW_PORT"
exit 1
fi
# Assign the script arguments to NEW_HOST and NEW_PORT
NEW_HOST="$1"
NEW_PORT="$2"
# Base directory to start from, "." means the current directory
CURRENT_DIR=$(echo "${PWD##*/}")
if [[ ${CURRENT_DIR} != "ref-implementation" ]]; then
echo "please run this script from the examples/ref-implementation directory"
exit 10
fi
BASE_DIRECTORY="."
# Find all .yaml files recursively starting from the base directory
# and perform an in-place search and replace from 8443 to the new port
find "$BASE_DIRECTORY" -type f -name "*.yaml" -exec sed -i "s/8443/${NEW_PORT}/g" {} +
find "$BASE_DIRECTORY" -type f -name "*.yaml" -exec sed -i "s/cnoe\.localtest\.me/${NEW_HOST}/g" {} +
# Remove hostname-port configuration if the new port is 443. Browsers strip 443 but keycloak still expects 443 in url.
if [[ ${NEW_PORT} == "443" ]]; then
sed -i "/hostname-port/d" keycloak/manifests/install.yaml
sed -i "/hostname-admin/d" keycloak/manifests/install.yaml
sed -i '0,/:443/{s/:443//}' argo-workflows/manifests/dev/patches/cm-argo-workflows.yaml
fi
echo "Replacement complete."
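The two `find`/`sed` substitutions above can be exercised in isolation before running the script against the real manifests. A minimal sketch; the scratch directory, file name, and host/port values are hypothetical:

```shell
# Demonstrate the two sed substitutions performed by replace.sh on a scratch file.
DEMO_DIR=/tmp/replace-demo
mkdir -p "$DEMO_DIR"
cat > "$DEMO_DIR/sample.yaml" <<'EOF'
hostname: cnoe.localtest.me
port: 8443
EOF
NEW_HOST="example.dev"
NEW_PORT="443"
# Same find/sed invocations as the script, scoped to the demo directory.
find "$DEMO_DIR" -type f -name "*.yaml" -exec sed -i "s/8443/${NEW_PORT}/g" {} +
find "$DEMO_DIR" -type f -name "*.yaml" -exec sed -i "s/cnoe\.localtest\.me/${NEW_HOST}/g" {} +
cat "$DEMO_DIR/sample.yaml"
```

Because the substitutions are plain text replacements, any occurrence of `8443` in the YAML files is rewritten, not just the Keycloak hostname-port setting; run it from a clean checkout so the edits are reviewable.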

@@ -0,0 +1,25 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: spark-operator
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
sources:
- repoURL: https://kubeflow.github.io/spark-operator
targetRevision: 1.1.27
helm:
releaseName: spark-operator
chart: spark-operator
destination:
server: "https://kubernetes.default.svc"
namespace: spark-operator
syncPolicy:
syncOptions:
- CreateNamespace=true
automated:
selfHeal: true

@@ -0,0 +1,17 @@
# Terraform Integrations for Backstage
`idpBuilder` is now experimentally extensible: custom Terraform patterns can be launched through package extensions. This experimental effort lets `idpBuilder` users run Terraform modules with the tooling already in place.
Use the command below to deploy an IDP reference implementation together with an Argo application that sets up the Terraform integrations:
```bash
idpbuilder create \
--use-path-routing \
--package-dir examples/ref-implementation \
--package-dir examples/terraform-integrations
```
As shown above, this add-on to `idpbuilder` depends on the [reference implementation](../ref-implementation/). The command primarily does the following:
1. Installs the `fluxcd` source repository controller as an Argo CD application.
2. Installs `tofu-controller`, which manages the lifecycle of Terraform deployments from your Kubernetes cluster for operations such as create, update, and delete.
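Once `tofu-controller` is running, Terraform modules are driven declaratively through its `Terraform` custom resource, backed by a Flux source. A minimal sketch — the resource name, repository name, and module path below are hypothetical, not part of this package:

```yaml
apiVersion: infra.contrib.fluxcd.io/v1alpha2
kind: Terraform
metadata:
  name: hello-module          # hypothetical name
  namespace: flux-system
spec:
  interval: 1m
  approvePlan: auto           # plan and apply without manual approval
  path: ./modules/hello       # hypothetical path inside the source repository
  sourceRef:
    kind: GitRepository       # served by the fluxcd source controller installed above
    name: terraform-modules   # hypothetical GitRepository name
    namespace: flux-system
```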


@ -0,0 +1,37 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: fluxcd
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
repoURL: 'https://fluxcd-community.github.io/helm-charts'
targetRevision: 2.12.4
helm:
releaseName: flux2
values: |
helmController:
create: false
imageAutomationController:
create: false
imageReflectionController:
create: false
kustomizeController:
create: false
notificationController:
create: false
chart: flux2
destination:
server: 'https://kubernetes.default.svc'
namespace: flux-system
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true


@ -0,0 +1,25 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: terraform-argo-workflows-templates
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
repoURL: https://github.com/cnoe-io/backstage-terraform-integrations
targetRevision: main
path: argo-workflows-templates/dev
destination:
server: "https://kubernetes.default.svc"
namespace: argo
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true


@ -0,0 +1,33 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: gitops-terraform-controller
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
repoURL: 'https://flux-iac.github.io/tofu-controller'
targetRevision: v0.15.1
helm:
releaseName: tf-controller
values: |
allowCrossNamespaceRefs: true
watchAllNamespaces: true
awsPackage:
install: true
repository: ghcr.io/flux-iac/aws-primitive-modules
chart: tf-controller
destination:
server: 'https://kubernetes.default.svc'
namespace: flux-system
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true