Compare commits


238 commits

Author SHA1 Message Date
e240efb872 feat(stacks): change references to point to new PROD 2025-07-04 10:49:42 +02:00
65ecf59bbf feat(forgejo): reconfigure loadbalancer to allow ssh 2025-07-04 10:05:57 +02:00
5f116820d0 feat(forgejo): new name and slogan 2025-07-02 14:04:40 +00:00
249ef87844 chore(image): switched to forgejo-edp 2025-07-02 09:51:57 +00:00
4678343084 chore(deps): update to forgejo v11, which is helm chart version v12 2025-07-02 09:47:21 +00:00
4e91bab0c8 feat(observability-client): user and password for vector referenced from a secret 2025-06-26 15:05:00 +02:00
12e12a5d60 fix(grafana): changed host name 2025-06-26 11:22:34 +02:00
1ee8310b51 feat(grafana): added ingress 2025-06-25 16:13:28 +02:00
3c8eaf8fff fix(enc): encryption of victoria PVCs 2025-06-25 15:52:52 +02:00
60e1d119c1 feat(observability): encrypt persistent data 2025-06-25 10:00:25 +02:00
63218c5847 fix(o12y): 2025-06-23 16:29:08 +02:00
dc1052182d fix(grafana): 2025-06-23 16:18:16 +02:00
8d1d968b7b feat(o12y): parametrized endpoints 2025-06-23 16:02:39 +02:00
e1e1efa1e7 feat(observability): exchanged hardcoded observability endpoint to a parametrized one 2025-06-23 15:28:19 +02:00
80a8fe661b fix(victoriaMetrics): added cluster_environment label to logs and metrics 2025-06-23 11:27:37 +02:00
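A label like this is usually stamped in two places: as an external label on the metrics agent and as a static field on the log shipper. A minimal sketch, assuming victoria-metrics-k8s-stack values and a vector remap transform (names and the value are illustrative, not taken from this change):

    vmagent:
      spec:
        externalLabels:
          cluster_environment: dev      # assumed value

    transforms:
      add_cluster_environment:          # assumed transform name
        type: remap
        inputs: [kubernetes_logs]
        source: |
          .cluster_environment = "dev"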
51fe5cc5b5 feat(observability): added cluster_environment label
Refs: DevFW/infra-deploy#36
2025-06-20 15:16:01 +02:00
2ef4f03ce6 feat(observability): added cluster_environment label
Refs: DevFW/infra-deploy#36
2025-06-20 15:08:06 +02:00
9625a8cf84 feat(observability): added dashboards for ingress, victoria-logs, argocd. Added grafana-operator 2025-06-20 14:19:33 +02:00
13e4dec40c
fix: 🔧 Update VictoriaMetrics datasource configuration
Refines the VictoriaMetrics datasource settings in Grafana by updating the name and type to ensure compatibility with the latest version.

Additionally, enables the logs datasource plugin for improved observability.
2025-06-20 10:18:24 +02:00
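The datasource update described above amounts to a Grafana provisioning entry along these lines; the plugin IDs follow the current VictoriaMetrics Grafana plugins, while the URLs are assumptions:

    apiVersion: 1
    datasources:
      - name: VictoriaMetrics
        type: victoriametrics-metrics-datasource   # renamed plugin ID (assumed compatibility fix)
        url: http://vmsingle-o12y:8429              # assumed in-cluster endpoint
        access: proxy
      - name: VictoriaLogs
        type: victoriametrics-logs-datasource       # the logs datasource plugin being enabled
        url: http://vlogs-o12y:9428                 # assumed in-cluster endpoint
        access: proxy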
ca953b5074 fix(forgejo): switched from rollingUpdate to recreate 2025-06-19 14:25:30 +02:00
6e8c5673e2 feat(database): use verify-ca instead of verify-full 2025-06-17 14:58:10 +02:00
4b50107eaa feat(database): enable ssl mode in database 2025-06-17 14:43:04 +02:00
40fa8d0c11 feat(database): test with bogus folder name 2025-06-17 14:35:50 +02:00
21d6c39b5c feat(database): rename postgres certificate in k8s secrets 2025-06-17 14:30:51 +02:00
c303b38966 feat(database): rename postgres certificate in k8s secrets 2025-06-17 14:19:46 +02:00
5500b58a9a feat(database): use certificate folder for elasticsearch 2025-06-17 14:05:41 +02:00
d14138996e feat(database): change folder for certificate 2025-06-17 13:55:36 +02:00
e3a93114c9 feat(database): use itemized list in extra volume for certificate 2025-06-17 13:50:11 +02:00
47de41ba4c fix(database): try creating a custom ssl certificate directory 2025-06-17 13:34:02 +02:00
19cfe4dec8 fix(database): reverted changes 2025-06-17 11:18:33 +02:00
aa30d027bf feat(database): use common directory for certificates 2025-06-17 11:03:56 +02:00
bfad711a9a feat(database): enable postgres tls verification 2025-06-17 09:32:45 +02:00
6841bdf94d feat(postgres): added volumeMount for postgres cert 2025-06-16 17:28:02 +02:00
acf5c7f284 fix(pipeline): fix argocd path routing due to domain 2025-06-16 15:07:16 +02:00
087b6d9c49 fix(application urls): fixed argocd url 2025-06-16 14:52:19 +02:00
ab8791d530 fix(application urls): renamed gitea domain 2025-06-16 14:42:11 +02:00
8cf22ec66f
feat(pvc): Increase persistence size and add annotations
Updates the persistence size from 5Gi to 200Gi to accommodate larger data storage needs.

Adds annotations for KMS key ID to enhance security and management of persistent volumes.
2025-06-16 14:01:57 +02:00
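In Helm values, that change looks roughly like this; the annotation key follows the OTC everest CSI convention and the key ID is a placeholder:

    persistence:
      enabled: true
      size: 200Gi                                # raised from 5Gi
      annotations:
        everest.io/crypt-key-id: <kms-key-id>    # placeholder; assumed OTC CSI annotation key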
1a12aa3674
fix(mail): 🔧 Update connection string reference in values.yaml
Corrects the key reference in the configuration for Gitea's email user credentials from 'password' to 'connection-string' to ensure proper connection handling.
2025-06-16 13:23:40 +02:00
2e57ab463e
feat(mail): Update mailer configuration and add credentials
Enhances the mailer setup by updating SMTP details to use secure connection and new credentials.

Adds a reference to secret key for email password, improving security for email communication.
2025-06-16 13:09:23 +02:00
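Taken together with the fix above it (1a12aa3674), the mailer setup sketches out like this in the Forgejo chart values; host, sender, and secret name are assumptions:

    gitea:
      config:
        mailer:
          ENABLED: true
          PROTOCOL: smtps                        # secure connection per the commit message
          SMTP_ADDR: smtp.example.com            # assumed host
          FROM: forgejo@example.com              # assumed sender
      additionalConfigFromEnvs:
        - name: FORGEJO__MAILER__PASSWD          # password sourced from a secret key
          valueFrom:
            secretKeyRef:
              name: email-user-credentials       # assumed secret name
              key: connection-string             # the key the fix above points at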
c87a920b74 Update template/stacks/observability-client/vm-client-stack.yaml 2025-06-13 07:34:09 +00:00
0fab276a36 feat(o12y): Added SIG Kubernetes Metrics Server, closes #33 2025-06-12 14:19:26 +02:00
c1c2d4f1ea
Merge branch 'o12yclient' 2025-06-10 11:20:08 +02:00
923d549290 fix(observability): Changed auth route target to new name 2025-06-10 09:16:42 +00:00
19c4694119 fix(observability): Removed auth lifetime config 2025-06-10 09:16:42 +00:00
eacdcf2eae feat(observability): Disabled grafana auth protection 2025-06-10 09:16:42 +00:00
050c774db0 fix(observability): Switched to ServerSideApply for o12y stack 2025-06-10 09:16:42 +00:00
b2ca785ff2 refactor(observability): Renamed argo app to o12y 2025-06-10 09:16:42 +00:00
bcfd471073 fix(vmetrics): fixed the vmetrics route 2025-06-10 09:16:42 +00:00
17b13041b4 feat(observability): Created observability-client stack
Moved vector from core stack to observability-client
Added victoriametrics-k8s-stack to observability-client for easy vmagent
and scraping config
2025-06-10 09:16:42 +00:00
9bd4871127 Update template/stacks/forgejo/forgejo-server.yaml 2025-06-06 09:50:20 +00:00
e5b633fbf4 Update template/stacks/forgejo/forgejo-server.yaml 2025-06-06 09:46:11 +00:00
fc860747fd feat(forgejo,argocd): Fixed the Forgejo ingress and moved argocd and forgejo ingresses into the argocd and forgejo application manifests folder 2025-06-06 11:34:30 +02:00
fc12862e12 feat(forgejo,argocd): Fixed the Forgejo ingress and moved argocd and forgejo ingresses into the argocd and forgejo application manifests folder 2025-06-06 11:29:46 +02:00
490e4fcfd9 fix(forgejo): renamed forgejo service to match forgejo-server- 2025-06-06 10:12:13 +02:00
358be3205b
fix(forgejo): Properly interpolate minio bucket name in forgejo config 2025-06-04 16:27:10 +02:00
b775019744
feat: 🎉 Add SSL certificate configuration for deployment
Adds configuration for SSL certificate in the deployment settings by introducing environment variables and volume mounts for the Elasticsearch certificate.

This enhancement improves security by ensuring that the application can properly utilize SSL certificates for secure communication.
2025-06-03 16:54:06 +02:00
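The env-var-plus-volume-mount pattern mentioned here typically looks like the following in the chart values; the variable name, paths, and secret name are assumptions:

    deployment:
      env:
        - name: ELASTICSEARCH_CA_PATH            # assumed variable name
          value: /certs/elasticsearch/ca.crt
    extraVolumes:
      - name: elasticsearch-cert
        secret:
          secretName: elasticsearch-cert         # assumed secret name
    extraVolumeMounts:
      - name: elasticsearch-cert
        mountPath: /certs/elasticsearch
        readOnly: true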
4761fef87c feat(forgejo): Resolved duplicate forgejo argocd application name 2025-06-03 14:19:47 +02:00
104b811e7e Update template/registry/forgejo.yaml 2025-06-03 12:11:31 +00:00
02d9d207dd feat(forgejo): separate forgejo from core into its own stack 2025-06-03 10:17:24 +02:00
dd46f37e43
feat: Add Elasticsearch indexer configuration
Introduces the configuration for the issue indexer using Elasticsearch, enabling the ISSUE_INDEXER feature.

Sets the ISSUE_INDEXER_ENABLED flag to true and specifies the connection string sourced from a secret.

Prepares for future enhancements by including placeholders for repository indexing options.
2025-06-02 17:39:15 +02:00
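A sketch of that indexer configuration, with the connection string injected via env-to-ini from a secret; the enable flag is quoted from the commit message, and the secret name is an assumption:

    gitea:
      config:
        indexer:
          ISSUE_INDEXER_ENABLED: true            # flag as named in the commit message
          ISSUE_INDEXER_TYPE: elasticsearch
          # REPO_INDEXER_ENABLED: true           # placeholder for future repository indexing options
      additionalConfigFromEnvs:
        - name: FORGEJO__INDEXER__ISSUE_INDEXER_CONN_STR
          valueFrom:
            secretKeyRef:
              name: elasticsearch-credentials    # assumed secret name
              key: connection-string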
e1bf3012e2 feat(forgejo): database reference refactoring 2025-06-02 15:05:51 +02:00
942cedd845
feat(observability): Switched to static endpoints due to bug in CRD selector
CRD selected the wrong port otherwise
2025-06-02 14:57:12 +02:00
fc34fb4ee6 enabled authorized access to vlogs and vmetrics 2025-06-02 14:21:31 +02:00
32bb201e82 feat(forgejo): rename forgejo database host secret key 2025-06-02 13:13:42 +02:00
Bot
15457a0f81 feat(forgejo): Added postgres password 2025-05-30 18:02:59 +02:00
Bot
7a05ca605b feat(forgejo): Added postgres to forgejo ini 2025-05-30 16:49:03 +02:00
fda834d703 feat(redis): removed duplicate entries in forgejo values.yaml 2025-05-30 09:25:14 +00:00
3752fbd341 feat(observability): Added rewrite rules for prometheus remote write to victoria metrics 2025-05-28 16:00:27 +02:00
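Such rewrite rules sit on the remote-write path of the agent; a minimal sketch against the VMAgent CRD, with the endpoint and rule purely illustrative:

    vmagent:
      spec:
        remoteWrite:
          - url: https://o12y.example.com/api/v1/write   # assumed central VictoriaMetrics endpoint
            inlineUrlRelabelConfig:
              - action: labeldrop                        # illustrative rule only
                regex: pod_template_hash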
d4ef3d4a44 feat(grafana): added basic persistence for grafana 2025-05-28 14:54:10 +02:00
00dd935a88 Update template/stacks/core/forgejo/values.yaml 2025-05-28 12:21:15 +00:00
774871c878
feat: 🎉 Add MinIO credentials for repository archiving
Adds MinIO access and secret keys for repository archiving functionality in the configuration.

This enhancement ensures that the necessary credentials are securely referenced, improving access to MinIO storage for archived repositories.

Relates to improved storage management.
2025-05-28 10:31:28 +02:00
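For repository archives this maps to a storage section plus two injected credentials; the env names follow the env-to-ini masking convention (dots and dashes escaped, see the config_environment.sh script further down this diff), while the secret name and bucket are assumptions:

    gitea:
      config:
        storage.repo-archive:
          STORAGE_TYPE: minio
          MINIO_BUCKET: forgejo-repo-archive     # assumed bucket name
      additionalConfigFromEnvs:
        - name: FORGEJO__STORAGE_0X2E_REPO_0X2D_ARCHIVE__MINIO_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: minio-credentials            # assumed secret name
              key: accessKey
        - name: FORGEJO__STORAGE_0X2E_REPO_0X2D_ARCHIVE__MINIO_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: minio-credentials
              key: secretKey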
528b44a1ba feat(pipeline): Created managed storage for forgejo 2025-05-27 16:33:20 +02:00
95ba18bb56 Removed cert-manager argocd application manifest releaseName entry to prevent out-of-sync state 2025-05-27 08:59:08 +00:00
1f38cc5755 Delete template/stacks/core/ingress-apps/openbao.yaml 2025-05-27 06:59:54 +00:00
0c2e94dc24 Delete template/stacks/core/ingress-apps/mailhog.yaml 2025-05-27 06:59:48 +00:00
ad72626d27 Delete template/stacks/core/ingress-apps/kube-prometheus-stack-grafana.yaml 2025-05-27 06:59:32 +00:00
7cdeed9aff Delete template/stacks/core/ingress-apps/keycloak-ingress-localhost.yaml 2025-05-27 06:59:23 +00:00
5ca95ca4ff Delete template/stacks/core/ingress-apps/backstage.yaml 2025-05-27 06:59:18 +00:00
96c514912d Delete template/stacks/core/ingress-apps/argo-workflows-ingress.yaml 2025-05-27 06:59:12 +00:00
5f91a08c42 Removed unneeded code 2025-05-26 17:13:29 +00:00
d76579d814 Update template/stacks/observability/victoria-k8s-stack/values.yaml 2025-05-26 17:12:47 +00:00
29d6cc2660
fix(victoria-k8s-stack): Fixed TLS connection for observability stack 2025-05-26 17:07:28 +02:00
ff978767f6 feat(victoria-k8s-stack): added vmauth 2025-05-26 16:37:28 +02:00
d80ef86286 fix: test environment to ini 2025-05-26 16:21:30 +02:00
1fce183187 fix(vector): use correct deployment name for vector 2025-05-26 13:26:59 +00:00
654daa1743 feat(observability): added vector as logshipper in the core stack 2025-05-21 11:57:36 +02:00
1f7b8e962e fix(victoria): fixes helm value path 2025-05-20 17:26:15 +02:00
9dd41f8b6d feat(otc): Setting ArgoCD retry limit to -1 to core and otc stack 2025-05-20 16:21:55 +02:00
08c8ea6a39 feat(observability): added new stack based on victoria-k8s-stack 2025-05-20 15:47:05 +02:00
b824738a34 feat(monitoring): remove monitoring as in forgejo-as-a-service we switch to central monitoring 2025-05-20 14:57:57 +02:00
1343794825
chore(cert): Switched to prod Let's Encrypt 2025-05-14 14:48:42 +02:00
137cfca08c feat: deleted keycloak related argocd, forgejo manifests 2025-05-14 13:56:10 +02:00
5075deec67 feat(otc) changed nginx-ingress service annotation from custom-eip to eip 2025-05-09 13:51:51 +02:00
74e97f0dcd feat(otc) fixed argocd ingress-nginx settings 2025-05-08 15:48:39 +02:00
56be9fa0b2 feat(otc) setup cert-manager in the ingresses 2025-05-08 15:31:03 +02:00
ec862e92eb Added cert-manager to otc stack 2025-05-08 15:10:59 +02:00
cc107f4ff4 feat(otc): Added LB IP ID 2025-05-07 17:10:12 +02:00
03f113f339 feat(otc): Fixed typo 2025-05-07 16:46:07 +02:00
6908182367 feat(otc): Moved ingress-nginx to otc stack, removed KIND stuff and added OTC annotations 2025-05-07 16:44:39 +02:00
d2cce953a1
feat: 🏗️ Add otc stack 2025-05-06 16:35:20 +02:00
48b6067bf8
feat: 🗃️ Add storageclass for otc 2025-05-06 16:31:38 +02:00
3419b428ea Merged SSO from development 2025-04-28 10:56:27 +02:00
cd4abc47b9 Removed merge artifacts 2025-04-28 10:55:10 +02:00
fbfc42cf47 Merge branch 'development' into modularise_edp 2025-04-28 10:33:25 +02:00
d390833416 Automated connection between OpenBAO and ESO 2025-04-28 10:16:40 +02:00
b2e91d0163 Automated connection between OpenBAO and ESO 2025-04-28 10:10:34 +02:00
a090677c0f Automated connection between OpenBAO and ESO 2025-04-28 09:54:23 +02:00
d0388bcd20 Automated connection between OpenBAO and ESO 2025-04-28 09:42:11 +02:00
4888c9db93 Merge pull request 'IPCEICIS-2297_working_oidc' (#30) from IPCEICIS-2297_working_oidc into development
Reviewed-on: #30
2025-04-25 12:11:02 +00:00
ffd5111bce Merge branch 'development' into IPCEICIS-2297_working_oidc 2025-04-25 12:10:06 +00:00
16dde9ead1 final changes 2025-04-25 14:09:17 +02:00
f434e0680f template/stacks/core/forgejo/values.yaml updated 2025-04-25 10:54:28 +00:00
d3546717c0 template/stacks/core/forgejo/values.yaml updated 2025-04-24 16:11:58 +00:00
dbd391d29c template/stacks/core/forgejo/values.yaml updated 2025-04-24 16:07:22 +00:00
4fd88985ef template/stacks/core/forgejo.yaml updated 2025-04-24 15:29:34 +00:00
f67bc40d1e Using ESO for Grafana admin password generation 2025-04-23 16:03:09 +02:00
d5ad448d2b Using ESO for Forgejo admin password generation 2025-04-23 15:50:14 +02:00
1530e4787b Combined helm and kubernetes deployments into a single argocd application 2025-04-23 15:40:38 +02:00
dd8feba996 Combined helm and kubernetes deployments into a single argocd application 2025-04-23 15:30:19 +02:00
7287a6cf56 testing redis changes 2025-04-23 15:03:49 +02:00
183cec8a9d testing redis changes 2025-04-23 14:37:50 +02:00
Bot
abeeb7ee23 chore(backstage): pin to backstage-edp v1.1.0 2025-04-23 13:20:24 +02:00
aec54530f8 Merge branch 'development' into IPCEICIS-2297_working_oidc 2025-04-23 11:40:48 +02:00
7e599a9422 testing redis changes 2025-04-23 11:21:51 +02:00
fbee7995e1 testing redis changes 2025-04-23 11:14:27 +02:00
15d9160b16 testing redis changes 2025-04-23 11:02:59 +02:00
ee08dc2f33 testing redis changes 2025-04-23 10:56:34 +02:00
Bot
3f78b2839a Moved client stack repo to a central instance 2025-04-22 21:44:56 +02:00
Bot
d94a445f47 Changes templates to be based on a central client repo 2025-04-22 19:36:14 +02:00
Bot
4eb6fa0908 Removed unused ArgoCD Application manifests of Crossplane 2025-04-22 18:56:30 +02:00
6afdc2c64f removes some comments 2025-04-22 15:17:34 +02:00
c8eac10fcf has to be this way 2025-04-22 15:11:16 +02:00
4447c29987 cancel last commit 2025-04-22 14:59:44 +02:00
9bb0063f8b Use Redis in the Forgejo configuration to support rolling updates of Forgejo itself
Forgejo cannot be reconfigured by default: a queue stays locked.
To circumvent the problem, we simply need to enable Redis as a Forgejo component
2025-04-22 12:29:50 +00:00
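The Redis wiring referred to here is a values change along these lines; the connection string and the choice of chart-managed redis-cluster are assumptions:

    redis-cluster:
      enabled: true                    # chart-managed Redis (assumed deployment choice)
    gitea:
      config:
        queue:
          TYPE: redis                  # move the locked level queue to Redis
          CONN_STR: redis+cluster://:@forgejo-redis-cluster-headless.gitea.svc.cluster.local:6379/0   # assumed address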
6ac5a94503 updates Forgejo sync policy 2025-04-22 09:55:18 +02:00
f783a582c6 does cleanup 2025-04-17 16:45:59 +02:00
4e50289d91 testing the hydration of domains 2025-04-17 15:50:35 +02:00
ba2b7dbc9f adds missing secret for 'git clone'-command 2025-04-17 14:46:29 +02:00
9dd9184cfd uses the new secrets for 'git clone'-command 2025-04-17 14:31:56 +02:00
0e26cc9a3f adds forgejo-access-token external secret for gitea namespace 2025-04-17 13:09:43 +02:00
0668eb7c5f Merge branch 'IPCEICIS-2297_working_oidc' of https://forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/DevFW-CICD/stacks into IPCEICIS-2297_working_oidc 2025-04-17 12:59:21 +02:00
74523447ae adds the correct secrets 2025-04-17 12:56:58 +02:00
cce8c51b75 Add template/stacks/core/argocd-sso/argocd-forgejo-access-token.yaml 2025-04-17 10:54:47 +00:00
11d9ad5fcc testing 2025-04-16 15:24:28 +02:00
42d65e95be testing 2025-04-16 14:59:25 +02:00
5165583b9a testing 2025-04-16 14:53:10 +02:00
701771ad13 adds secretRefs to the jobs 2025-04-14 17:42:27 +02:00
d90402b74a renaming 2025-04-14 16:56:45 +02:00
b533f7adf3 adds a kubernetes job that configures ArgoCD 2025-04-14 16:39:37 +02:00
620f7a3fd9 adds a kubernetes job that configures Forgejo 2025-04-14 13:30:50 +02:00
1a8c2846bc Update template/stacks/core/forgejo-sso/secret-forgejo.yaml 2025-04-12 21:21:16 +00:00
ead21d078a Update template/stacks/core/argocd-sso/argocd-secret.yaml 2025-04-12 20:42:55 +00:00
30d1d51884 Merge pull request 'Added keycloak client externalsecret for Forgejo and ArgoCD' (#27) from keycloak_externalsecret_for_argocd_and_forgejo into development
Reviewed-on: #27
2025-04-12 19:38:52 +00:00
Richard Robert Reitz
33def8aba5 Added keycloak client externalsecret for Forgejo and ArgoCD 2025-04-12 21:31:05 +02:00
0a307e5b35 Merge pull request 'keycloak_oidc_forgejo_config' (#25) from keycloak_oidc_forgejo_config into development
Reviewed-on: #25
2025-04-12 19:13:13 +00:00
Richard Robert Reitz
55a1eaa6f6 Added Forgejo to Keycloak config 2025-04-12 21:07:43 +02:00
Richard Robert Reitz
2532958de8 Added Forgejo to Keycloak config 2025-04-12 21:05:35 +02:00
7a5e29e47d Update template/stacks/ref-implementation/keycloak/manifests/keycloak-config.yaml 2025-04-12 18:52:41 +00:00
3943b3d46e Merge pull request 'Update template/stacks/ref-implementation/keycloak/manifests/keycloak-config.yaml' (#24) from keycloak_oidc_argocd_config into development
Reviewed-on: #24
2025-04-12 18:50:49 +00:00
3263113ebe Update template/stacks/ref-implementation/keycloak/manifests/keycloak-config.yaml 2025-04-12 18:49:15 +00:00
5d0182d6ee Update template/stacks/core/forgejo/values.yaml 2025-04-12 16:27:05 +00:00
c01d4952ad Disabled user self registration in Forgejo 2025-04-12 16:17:20 +00:00
777d6afeb4 Update template/stacks/core/forgejo-runner/dind-docker.yaml 2025-04-11 14:12:29 +00:00
d6fa372e5f Merge pull request 'Update fix to latest kindserver' (#23) from kindserver_development_test into development
Reviewed-on: #23
2025-03-31 08:33:58 +00:00
Richard Robert Reitz
51e765049b Update fix to latest kindserver 2025-03-30 22:34:04 +02:00
4814dff26f Merge pull request 'updated argocd nginxingress and forgejo' (#22) from forgejo_upgrade_to_11_0_5 into development
Reviewed-on: #22
2025-03-27 19:49:13 +00:00
b3495f610c updated argocd 2025-03-27 20:42:01 +01:00
9ba027f94b updated nginx-ingress 2025-03-27 20:10:06 +01:00
dd7551a293 updated forgejo and forgejo-runner 2025-03-27 19:33:56 +01:00
7179d2568c Merge pull request 'feat(mailhog): IPCEICIS-3048 Implement mailhog in edp stacks' (#18) from feature/IPCEICIS-3048-Implement-mailhog-in-edp-stacks into development
Reviewed-on: #18
2025-03-24 17:19:22 +00:00
Bot
55435a3ad2 feat(mailhog): IPCEICIS-3048 - added documentation 2025-03-24 17:09:44 +01:00
Stephan Lo
d0585fd2b7 feat(mailhog): IPCEICIS-3048 - mailhog deployed, ingress is https://<URL>/mailhog, forgejo is configured 2025-03-20 23:57:52 +01:00
5d2df3db8e Merge pull request 'alloy_implementation' (#13) from alloy_implementation into development
Reviewed-on: #13
Reviewed-by: Christopher.Hase <Christopher.Hase@telekom.de>
2025-03-18 09:03:25 +00:00
65b74abeda Merge branch 'development' into alloy_implementation 2025-03-18 08:52:51 +00:00
fc287acf58 Update template/stacks/ref-implementation/backstage-templates/entities/spring-petclinic/skeleton/.github/workflows/maven-build.yml 2025-03-17 21:50:50 +00:00
94e3a759b2 Update template/stacks/core/crossplane-providers/provider-shell.yaml 2025-03-16 22:53:03 +00:00
31b768eebc Update template/stacks/core/crossplane-providers/provider-kind.yaml 2025-03-16 22:51:03 +00:00
9b5457e45f Update template/stacks/ref-implementation/backstage/manifests/install.yaml
chore(backstage): adjust to forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/devfw-cicd/backstage-edp:development
2025-03-15 13:27:41 +00:00
Stephan Lo
c1b68bfdb2 chore(provider-shell): adjust to https://forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/DevFW-CICD/-/packages/container/provider-shell/v0.1.3 2025-03-14 19:20:29 +01:00
beeb1f916b Hotfix for ArgoCD problems after path routing fix 2025-03-14 09:34:45 +01:00
b42bba4379 Merge pull request 'IPCEICIS-2751_backstage' (#14) from IPCEICIS-2751_backstage into development
Reviewed-on: #14
2025-03-14 08:16:59 +00:00
5cc22c5648 Update template/stacks/core/ingress-apps/argocd-server.yaml 2025-03-13 16:16:49 +00:00
2f5a263511 Update template/stacks/core/argocd/values.yaml 2025-03-13 16:08:10 +00:00
d8867b9e3a Update template/stacks/ref-implementation/backstage/manifests/install.yaml 2025-03-13 10:16:04 +00:00
415576c2cb unnecessary rule deleted 2025-03-13 10:26:56 +01:00
1e5fa94c47 rules in alloy's values.yaml adjusted 2025-03-13 10:19:45 +01:00
8f621647f5 rule {
          source_labels = ["__meta_kubernetes_pod_name", "__meta_kubernetes_pod_container_name"]
          action = "replace"
          target_label = "__path__"
          replacement = "/var/log/containers/$1_$2.log"
        }
2025-03-13 10:08:59 +01:00
74a77bfa3b Update template/stacks/ref-implementation/backstage/manifests/install.yaml 2025-03-13 09:00:38 +00:00
3293f9cf5a Update template/stacks/ref-implementation/backstage/manifests/install.yaml 2025-03-13 08:33:06 +00:00
75f40e070c promtail references replaced with alloy in dashboard_loki_container.yaml 2025-03-12 15:55:41 +01:00
b462804f29 loki.source.kubernetes "all_pod_logs" {
        targets    = discovery.relabel.pod_logs.output
        forward_to = [loki.write.local_loki.receiver]
      }
2025-03-12 15:28:20 +01:00
fbb5aeb32b forward_to = [loki.write.local_loki.receiver] 2025-03-12 15:20:35 +01:00
687322525b values.yaml for alloy edited 2025-03-12 15:18:59 +01:00
1682302b69 "#" are not allowed in config.alloy in values.yaml 2025-03-12 15:04:59 +01:00
8f62875529 config.alloy adjusted in values.yaml 2025-03-12 14:53:01 +01:00
ddaf06b29c loki reference changes 2025-03-12 14:39:36 +01:00
180b74697a config.alloy in values.yaml adjusted 2025-03-12 14:30:37 +01:00
3a5df11604 alloy implementation commented out 2025-03-12 14:22:29 +01:00
81e85ff518 config.alloy added to the values 2025-03-12 14:22:11 +01:00
dd7cd2fa91 alloy.uiPathPrefix: "/alloy" added 2025-03-12 13:47:07 +01:00
71fbdcb5e0 alloy implementation 2025-03-12 13:37:16 +01:00
0d49c582f5 template/stacks/ref-implementation/backstage/manifests/install.yaml updated 2025-03-11 11:25:06 +00:00
303d7b3a7e Update template/stacks/ref-implementation/backstage-templates/entities/spring-petclinic/skeleton/.github/workflows/maven-build.yml 2025-03-08 12:50:23 +00:00
1ab8119063 Fixed kubectl download on Linux ARM64 VMs 2025-03-07 20:28:39 +00:00
f81a550064 Merge pull request 'IPCEICIS-764_grafana_sso' (#10) from IPCEICIS-764_grafana_sso into development
Reviewed-on: #10
2025-03-06 09:24:13 +00:00
Richard Robert Reitz
a9c69d6c24 adjusted retry backoff time 2025-03-04 19:23:19 +01:00
Richard Robert Reitz
c2cb410af8 Merge branch 'development' into IPCEICIS-764_grafana_sso 2025-03-04 19:21:48 +01:00
2698432809 Merge pull request 'faster_backstage_start' (#11) from faster_backstage_start into development
Reviewed-on: #11
2025-03-04 18:20:52 +00:00
Richard Robert Reitz
d0cce6916d fixed argocd version 2025-03-04 19:06:11 +01:00
Richard Robert Reitz
aba4a4a088 shortened retry backoff 2025-03-04 19:03:36 +01:00
Richard Robert Reitz
4ae8f6fd15 shortened retry backoff 2025-03-04 18:49:55 +01:00
Your Name
1198250861 Merge branch 'development' into IPCEICIS-764_grafana_sso 2025-03-04 11:55:17 +01:00
9bbf171980 Merge pull request 'Make PetClinic Great Again' (#9) from runner-tags into development
Reviewed-on: #9
2025-03-03 17:18:10 +00:00
d95ba7c12c
chore(petclinic): Removed unused workflow
Disabled tests in maven workflow as there are currently dind problems
2025-03-03 16:37:18 +01:00
8a38aee529
feat(runner): Added ubuntu-latest runner tag 2025-03-03 15:21:46 +01:00
Richard Robert Reitz
1ef1029e1f Added Grafana admin account 2025-03-02 17:26:29 +01:00
Richard Robert Reitz
63a694d17c Removed Grafana admin account 2025-03-02 17:09:02 +01:00
Richard Robert Reitz
6eb52e654c Refactored external secret for grafana keycloak client secret 2025-03-02 15:46:06 +01:00
Richard Robert Reitz
ec31f98889 Added external secret for grafana keycloak client secret 2025-03-02 15:28:48 +01:00
Richard Robert Reitz
2d3ebadd50 Simplified Keycloaks Grafana config 2025-03-02 14:52:08 +01:00
Richard Robert Reitz
b58e373da9 Added email to Keycloak users and upgraded ArgoCD again as it requires more work 2025-03-02 14:19:07 +01:00
Richard Robert Reitz
688795ffad Added more Grafana client config to Keycloak 2025-03-02 13:46:20 +01:00
Richard Robert Reitz
e02d4bb272 Added more Grafana client config to Keycloak 2025-03-02 13:27:51 +01:00
Richard Robert Reitz
efa3a6e4dc Added ArgoCD sync retry to Grafana 2025-03-02 13:18:04 +01:00
Richard Robert Reitz
65c5321ce6 Added Grafana client config to Keycloak 2025-03-02 13:11:38 +01:00
Richard Robert Reitz
ce6c51eea9 Enhanced grafana yaml 2025-03-02 10:47:25 +01:00
0f8282ead6 Update template/stacks/monitoring/kube-prometheus/values.yaml 2025-02-28 14:08:07 +00:00
88d599a691 Update template/stacks/monitoring/kube-prometheus/values.yaml 2025-02-28 13:30:29 +00:00
9016831286 Update template/stacks/core/argocd.yaml 2025-02-28 11:15:56 +00:00
168286cfce Update template/stacks/core/argocd.yaml 2025-02-28 11:07:21 +00:00
9cc9b864a2 Update template/stacks/core/argocd.yaml 2025-02-28 11:04:21 +00:00
265af3acff Update template/stacks/core/argocd.yaml 2025-02-28 11:01:07 +00:00
f311da6ac2 Merge pull request 'created an own variable for domain gitea' (#8) from dns_splitting into development
Reviewed-on: #8
2025-02-26 21:25:30 +00:00
Richard Robert Reitz
8c84170ca2 created an own variable for domain gitea 2025-02-24 23:10:05 +01:00
b3306647c9 Merge pull request 'dind_support' (#7) from dind_support into development
Reviewed-on: #7
2025-02-23 14:58:40 +00:00
Richard Robert Reitz
e3f1899983 Fixed forgejo url in runner 2025-02-23 15:36:13 +01:00
Richard Robert Reitz
394dc9f400 Activated DinD in forgejo-runner 2025-02-23 11:09:12 +01:00
cc34792edb Update template/stacks/ref-implementation/forgejo-runner/values.yaml 2025-02-21 21:24:29 +00:00
159 changed files with 3411 additions and 44654 deletions

.gitignore (new file)

@ -0,0 +1 @@
/.history

README.md

@ -1,7 +1,3 @@
 # edpbuilder stacks
-This repository contains the building blocks to instanciate Internal Developer Platform's.
+This repository contains the building blocks to instantiate Internal Developer Platforms.
-### Install edpbuilder
-To get started, you need to install [edpbuilder](https://forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/DevFW/edpbuilder).


@ -12,8 +12,8 @@ spec:
     name: in-cluster
     namespace: argocd
   source:
-    path: registry
+    path: "{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/registry"
-    repoURL: 'https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder'
+    repoURL: "https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}"
     targetRevision: HEAD
   project: default
   syncPolicy:


@ -12,8 +12,8 @@ spec:
     name: in-cluster
     namespace: argocd
   source:
-    path: stacks/core
+    path: "{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/core"
-    repoURL: 'https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder'
+    repoURL: "https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}"
     targetRevision: HEAD
   project: default
   syncPolicy:


@ -1,7 +1,7 @@
 apiVersion: argoproj.io/v1alpha1
 kind: Application
 metadata:
-  name: monitoring
+  name: forgejo
   namespace: argocd
   labels:
     env: dev
@ -12,8 +12,8 @@ spec:
     name: in-cluster
     namespace: argocd
   source:
-    path: stacks/monitoring
+    path: "{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/forgejo"
-    repoURL: 'https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder'
+    repoURL: "https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}"
     targetRevision: HEAD
   project: default
   syncPolicy:


@ -1,7 +1,7 @@
 apiVersion: argoproj.io/v1alpha1
 kind: Application
 metadata:
-  name: second-cluster
+  name: observability-client
   namespace: argocd
   labels:
     env: dev
@ -12,8 +12,8 @@ spec:
     name: in-cluster
     namespace: argocd
   source:
-    path: stacks/second-cluster
+    path: "{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/observability-client"
-    repoURL: 'https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder'
+    repoURL: "https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}"
     targetRevision: HEAD
   project: default
   syncPolicy:


@ -1,7 +1,7 @@
 apiVersion: argoproj.io/v1alpha1
 kind: Application
 metadata:
-  name: ref-implementation
+  name: observability
   namespace: argocd
   labels:
     env: dev
@ -12,8 +12,8 @@ spec:
     name: in-cluster
     namespace: argocd
   source:
-    path: stacks/ref-implementation
+    path: "{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/observability"
-    repoURL: 'https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder'
+    repoURL: "https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}"
     targetRevision: HEAD
   project: default
   syncPolicy:


@ -1,7 +1,7 @@
 apiVersion: argoproj.io/v1alpha1
 kind: Application
 metadata:
-  name: local-backup
+  name: otc
   namespace: argocd
   labels:
     env: dev
@ -12,8 +12,8 @@ spec:
     name: in-cluster
     namespace: argocd
   source:
-    path: stacks/local-backup
+    path: "{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/otc"
-    repoURL: 'https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder'
+    repoURL: "https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}"
     targetRevision: HEAD
   project: default
   syncPolicy:


@ -12,16 +12,24 @@ spec:
       selfHeal: true
     syncOptions:
       - CreateNamespace=true
+    retry:
+      limit: -1
   destination:
     name: in-cluster
     namespace: argocd
   sources:
-    - repoURL: https://github.com/argoproj/argo-helm
+    - repoURL: https://edp.buildth.ing/DevFW-CICD/argocd-helm.git
       path: charts/argo-cd
-      targetRevision: argo-cd-7.7.5
+      # TODO: RIRE Can be updated when https://github.com/argoproj/argo-cd/issues/20790 is fixed and merged
+      # As logout make problems, it is suggested to switch from path based routing to an own argocd domain,
+      # similar to the CNOE amazon reference implementation and in our case, Forgejo
+      targetRevision: argo-cd-7.8.14-depends
       helm:
         valueFiles:
-          - $values/stacks/core/argocd/values.yaml
+          - $values/{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/core/argocd/values.yaml
-    - repoURL: https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder
+    - repoURL: https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}
       targetRevision: HEAD
       ref: values
+    - repoURL: https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}
+      targetRevision: HEAD
+      path: "{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/core/argocd/manifests"

File diff suppressed because it is too large.


@ -4,11 +4,10 @@ metadata:
   annotations:
     nginx.ingress.kubernetes.io/backend-protocol: HTTP
     nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
-    nginx.ingress.kubernetes.io/rewrite-target: /$2
-    nginx.ingress.kubernetes.io/use-regex: "true"
+    cert-manager.io/cluster-issuer: main
     {{{ if eq .Env.CLUSTER_TYPE "osc" }}}
     dns.gardener.cloud/class: garden
-    dns.gardener.cloud/dnsnames: {{{ .Env.DOMAIN }}}
+    dns.gardener.cloud/dnsnames: {{{ .Env.DOMAIN_ARGOCD }}}
     dns.gardener.cloud/ttl: "600"
     {{{ end }}}
   name: argocd-server
@ -16,7 +15,7 @@ metadata:
 spec:
   ingressClassName: nginx
   rules:
-    - host: {{{ .Env.DOMAIN }}}
+    - host: {{{ .Env.DOMAIN_ARGOCD }}}
       http:
         paths:
           - backend:
@ -24,9 +23,9 @@ spec:
               service:
                 name: argocd-server
                 port:
                   number: 80
-            path: /argocd(/|$)(.*)
+            path: /
-            pathType: ImplementationSpecific
+            pathType: Prefix
   tls:
     - hosts:
-        - {{{ .Env.DOMAIN }}}
+        - {{{ .Env.DOMAIN_ARGOCD }}}
       secretName: argocd-net-tls


@ -1,10 +1,9 @@
 global:
-  domain: {{{ .Env.DOMAIN }}}
+  domain: {{{ .Env.DOMAIN_ARGOCD }}}
 configs:
   params:
     server.insecure: true
-    server.basehref: /argocd
   cm:
     application.resourceTrackingMethod: annotation
     timeout.reconciliation: 60s
@ -20,6 +19,7 @@ configs:
       clusters:
         - "*"
     accounts.provider-argocd: apiKey
+    url: https://{{{ .Env.DOMAIN_ARGOCD }}}
   rbac:
     policy.csv: 'g, provider-argocd, role:admin'


@ -1,23 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: crossplane-compositions
  namespace: argocd
  labels:
    env: dev
spec:
  project: default
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
  destination:
    name: in-cluster
    namespace: crossplane-system
  source:
    path: stacks/core/crossplane-compositions
    repoURL: https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder
    targetRevision: HEAD
    directory:
      recurse: true


@ -1,30 +0,0 @@
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: edfbuilders.edfbuilder.crossplane.io
spec:
  connectionSecretKeys:
    - kubeconfig
  group: edfbuilder.crossplane.io
  names:
    kind: EDFBuilder
    listKind: EDFBuilderList
    plural: edfbuilders
    singular: edfbuilders
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          description: A EDFBuilder is a composite resource that represents a K8S Cluster with edfbuilder Installed
          type: object
          properties:
            spec:
              type: object
              properties:
                repoURL:
                  type: string
                  description: URL to ArgoCD stack of stacks repo
              required:
                - repoURL


@ -1,23 +0,0 @@
{{{ if eq .Env.CLUSTER_TYPE "kind" }}}
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: crossplane-providers
  namespace: argocd
  labels:
    env: dev
spec:
  project: default
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
  destination:
    name: in-cluster
    namespace: crossplane-system
  source:
    path: stacks/core/crossplane-providers
    repoURL: https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder
    targetRevision: HEAD
{{{ end }}}


@ -1,9 +0,0 @@
apiVersion: pkg.crossplane.io/v1
kind: Function
metadata:
  name: crossplane-contrib-function-patch-and-transform
spec:
  package: xpkg.upbound.io/crossplane-contrib/function-patch-and-transform:v0.7.0
  packagePullPolicy: IfNotPresent # Only download the package if it isnt in the cache.
  revisionActivationPolicy: Automatic # Otherwise our Provider never gets activate & healthy
  revisionHistoryLimit: 1


@ -1,14 +0,0 @@
apiVersion: argocd.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
  name: argocd-provider
spec:
  serverAddr: argocd-server.argocd.svc.cluster.local:80
  insecure: true
  plainText: true
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: argocd-credentials
      key: authToken


@ -1,9 +0,0 @@
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-argocd
spec:
  package: xpkg.upbound.io/crossplane-contrib/provider-argocd:v0.9.1
  packagePullPolicy: IfNotPresent # Only download the package if it isnt in the cache.
  revisionActivationPolicy: Automatic # Otherwise our Provider never gets activate & healthy
  revisionHistoryLimit: 1


@ -1,14 +0,0 @@
apiVersion: kind.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
  name: kind-provider
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: kind-credentials
      key: credentials
  endpoint:
    # the url is managed by crossplane-edfbuilder
    url: https://DOCKER_HOST:SERVER_PORT/api/v1/kindserver


@ -1,9 +0,0 @@
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-kind
spec:
  package: forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/devfw-cicd/provider-kind:v0.1.0
  packagePullPolicy: IfNotPresent # Only download the package if it isnt in the cache.
  revisionActivationPolicy: Automatic # Otherwise our Provider never gets activate & healthy
  revisionHistoryLimit: 1


@ -1,9 +0,0 @@
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-shell
spec:
  package: forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/devfw-cicd/provider-shell:v0.1.1
  packagePullPolicy: IfNotPresent # Only download the package if it isnt in the cache.
  revisionActivationPolicy: Automatic # Otherwise our Provider never gets activate & healthy
  revisionHistoryLimit: 1


@ -1,23 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: crossplane
  namespace: argocd
  labels:
    env: dev
spec:
  project: default
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
  destination:
    name: in-cluster
    namespace: crossplane-system
  source:
    chart: crossplane
    repoURL: https://charts.crossplane.io/stable
    targetRevision: 1.18.0
    helm:
      releaseName: crossplane


@ -1,27 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: forgejo
  namespace: argocd
  labels:
    env: dev
spec:
  project: default
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
  destination:
    name: in-cluster
    namespace: gitea
  sources:
    - repoURL: https://code.forgejo.org/forgejo-helm/forgejo-helm.git
      path: .
      targetRevision: v10.1.1
      helm:
        valueFiles:
          - $values/stacks/core/forgejo/values.yaml
    - repoURL: https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder
      targetRevision: HEAD
      ref: values


@ -1,619 +0,0 @@
---
# Source: forgejo/templates/gitea/config.yaml
apiVersion: v1
kind: Secret
metadata:
  name: forgejo-inline-config
  namespace: "gitea"
  labels:
    helm.sh/chart: forgejo-0.0.0
    app: forgejo
    app.kubernetes.io/name: forgejo
    app.kubernetes.io/instance: forgejo
    app.kubernetes.io/version: "9.0.2"
    version: "9.0.2"
    app.kubernetes.io/managed-by: Helm
type: Opaque
stringData:
  _generals_: |-
    APP_NAME=Forgejo: Beyond coding. We forge.
    RUN_MODE=prod
  cache: |-
    ADAPTER=memory
    HOST=
  database: DB_TYPE=sqlite3
  indexer: ISSUE_INDEXER_TYPE=db
  metrics: ENABLED=false
  queue: |-
    CONN_STR=
    TYPE=level
  repository: ROOT=/data/git/gitea-repositories
  security: INSTALL_LOCK=true
  server: |-
    APP_DATA_PATH=/data
    DOMAIN=gitea.runner.c-one-infra.de
    ENABLE_PPROF=false
    HTTP_PORT=3000
    PROTOCOL=http
    ROOT_URL=https://gitea.runner.c-one-infra.de:443
    SSH_DOMAIN=gitea.runner.c-one-infra.de
    SSH_LISTEN_PORT=2222
    SSH_PORT=22
    START_SSH_SERVER=true
  session: |-
    PROVIDER=memory
    PROVIDER_CONFIG=
---
# Source: forgejo/templates/gitea/config.yaml
apiVersion: v1
kind: Secret
metadata:
  name: forgejo
  labels:
    helm.sh/chart: forgejo-0.0.0
    app: forgejo
    app.kubernetes.io/name: forgejo
    app.kubernetes.io/instance: forgejo
    app.kubernetes.io/version: "9.0.2"
    version: "9.0.2"
    app.kubernetes.io/managed-by: Helm
type: Opaque
stringData:
  assertions: |
  config_environment.sh: |-
    #!/usr/bin/env bash
    set -euo pipefail

    function env2ini::log() {
      printf "${1}\n"
    }

    function env2ini::read_config_to_env() {
      local section="${1}"
      local line="${2}"

      if [[ -z "${line}" ]]; then
        # skip empty line
        return
      fi

      # 'xargs echo -n' trims all leading/trailing whitespaces and a trailing new line
      local setting="$(awk -F '=' '{print $1}' <<< "${line}" | xargs echo -n)"

      if [[ -z "${setting}" ]]; then
        env2ini::log ' ! invalid setting'
        exit 1
      fi

      local value=''
      local regex="^${setting}(\s*)=(\s*)(.*)"
      if [[ $line =~ $regex ]]; then
        value="${BASH_REMATCH[3]}"
      else
        env2ini::log ' ! invalid setting'
        exit 1
      fi

      env2ini::log " + '${setting}'"

      if [[ -z "${section}" ]]; then
        export "FORGEJO____${setting^^}=${value}" # '^^' makes the variable content uppercase
        return
      fi

      local masked_section="${section//./_0X2E_}" # '//' instructs to replace all matches
      masked_section="${masked_section//-/_0X2D_}"

      export "FORGEJO__${masked_section^^}__${setting^^}=${value}" # '^^' makes the variable content uppercase
    }

    function env2ini::reload_preset_envs() {
      env2ini::log "Reloading preset envs..."

      while read -r line; do
        if [[ -z "${line}" ]]; then
          # skip empty line
          return
        fi

        # 'xargs echo -n' trims all leading/trailing whitespaces and a trailing new line
        local setting="$(awk -F '=' '{print $1}' <<< "${line}" | xargs echo -n)"

        if [[ -z "${setting}" ]]; then
          env2ini::log ' ! invalid setting'
          exit 1
        fi

        local value=''
        local regex="^${setting}(\s*)=(\s*)(.*)"
        if [[ $line =~ $regex ]]; then
          value="${BASH_REMATCH[3]}"
        else
          env2ini::log ' ! invalid setting'
          exit 1
        fi

        env2ini::log " + '${setting}'"

        export "${setting^^}=${value}" # '^^' makes the variable content uppercase
      done < "/tmp/existing-envs"

      rm /tmp/existing-envs
    }

    function env2ini::process_config_file() {
      local config_file="${1}"
      local section="$(basename "${config_file}")"

      if [[ $section == '_generals_' ]]; then
        env2ini::log " [ini root]"
        section=''
      else
        env2ini::log " ${section}"
      fi

      while read -r line; do
        env2ini::read_config_to_env "${section}" "${line}"
      done < <(awk 1 "${config_file}") # Helm .toYaml trims the trailing new line which breaks line processing; awk 1 ... adds it back while reading
    }

    function env2ini::load_config_sources() {
      local path="${1}"

      if [[ -d "${path}" ]]; then
        env2ini::log "Processing $(basename "${path}")..."

        while read -d '' configFile; do
          env2ini::process_config_file "${configFile}"
        done < <(find "${path}" -type l -not -name '..data' -print0)

        env2ini::log "\n"
      fi
    }

    function env2ini::generate_initial_secrets() {
      # These environment variables will either be
      # - overwritten with user defined values,
      # - initially used to set up Forgejo
      # Anyway, they won't harm existing app.ini files
      export FORGEJO__SECURITY__INTERNAL_TOKEN=$(gitea generate secret INTERNAL_TOKEN)
      export FORGEJO__SECURITY__SECRET_KEY=$(gitea generate secret SECRET_KEY)
      export FORGEJO__OAUTH2__JWT_SECRET=$(gitea generate secret JWT_SECRET)
      export FORGEJO__SERVER__LFS_JWT_SECRET=$(gitea generate secret LFS_JWT_SECRET)

      env2ini::log "...Initial secrets generated\n"
    }

    # save existing envs prior to script execution. Necessary to keep order of
    # preexisting and custom envs
    env | (grep -e '^FORGEJO__' || [[ $? == 1 ]]) > /tmp/existing-envs

    # MUST BE CALLED BEFORE OTHER CONFIGURATION
    env2ini::generate_initial_secrets

    env2ini::load_config_sources '/env-to-ini-mounts/inlines/'
    env2ini::load_config_sources '/env-to-ini-mounts/additionals/'

    # load existing envs to override auto generated envs
    env2ini::reload_preset_envs

    env2ini::log "=== All configuration sources loaded ===\n"

    # safety to prevent rewrite of secret keys if an app.ini already exists
    if [ -f ${GITEA_APP_INI} ]; then
      env2ini::log 'An app.ini file already exists. To prevent overwriting secret keys, these settings are dropped and remain unchanged:'
      env2ini::log ' - security.INTERNAL_TOKEN'
      env2ini::log ' - security.SECRET_KEY'
      env2ini::log ' - oauth2.JWT_SECRET'
      env2ini::log ' - server.LFS_JWT_SECRET'

      unset FORGEJO__SECURITY__INTERNAL_TOKEN
      unset FORGEJO__SECURITY__SECRET_KEY
      unset FORGEJO__OAUTH2__JWT_SECRET
      unset FORGEJO__SERVER__LFS_JWT_SECRET
    fi

    environment-to-ini -o $GITEA_APP_INI
---
# Source: forgejo/templates/gitea/init.yaml
apiVersion: v1
kind: Secret
metadata:
  name: forgejo-init
  namespace: "gitea"
  labels:
    helm.sh/chart: forgejo-0.0.0
    app: forgejo
    app.kubernetes.io/name: forgejo
    app.kubernetes.io/instance: forgejo
    app.kubernetes.io/version: "9.0.2"
    version: "9.0.2"
    app.kubernetes.io/managed-by: Helm
type: Opaque
stringData:
  configure_gpg_environment.sh: |-
    #!/usr/bin/env bash
    set -eu

    gpg --batch --import /raw/private.asc
  init_directory_structure.sh: |-
    #!/usr/bin/env bash
    set -euo pipefail
    set -x

    mkdir -p /data/git/.ssh
    chmod -R 700 /data/git/.ssh
    [ ! -d /data/gitea/conf ] && mkdir -p /data/gitea/conf

    # prepare temp directory structure
    mkdir -p "${GITEA_TEMP}"
    chmod ug+rwx "${GITEA_TEMP}"
  configure_gitea.sh: |-
    #!/usr/bin/env bash
    set -euo pipefail

    echo '==== BEGIN GITEA CONFIGURATION ===='

    { # try
      gitea migrate
    } || { # catch
      echo "Forgejo migrate might fail due to database connection...This init-container will try again in a few seconds"
      exit 1
    }

    function configure_admin_user() {
      local full_admin_list=$(gitea admin user list --admin)
      local actual_user_table=''

      # We might have distorted output due to warning logs, so we have to detect the actual user table by its headline and trim output above that line
      local regex="(.*)(ID\s+Username\s+Email\s+IsActive.*)"
      if [[ "${full_admin_list}" =~ $regex ]]; then
        actual_user_table=$(echo "${BASH_REMATCH[2]}" | tail -n+2) # tail'ing to drop the table headline
      else
        # This code block should never be reached, as long as the output table header remains the same.
        # If this code block is reached, the regex doesn't match anymore and we probably have to adjust this script.
        echo "ERROR: 'configure_admin_user' was not able to determine the current list of admin users."
        echo " Please review the output of 'gitea admin user list --admin' shown below."
        echo " If you think it is an issue with the Helm Chart provisioning, file an issue at https://gitea.com/gitea/helm-chart/issues."
        echo "DEBUG: Output of 'gitea admin user list --admin'"
        echo "--"
        echo "${full_admin_list}"
        echo "--"
        exit 1
      fi

      local ACCOUNT_ID=$(echo "${actual_user_table}" | grep -E "\s+${GITEA_ADMIN_USERNAME}\s+" | awk -F " " "{printf \$1}")

      if [[ -z "${ACCOUNT_ID}" ]]; then
        local -a create_args
        create_args=(--admin --username "${GITEA_ADMIN_USERNAME}" --password "${GITEA_ADMIN_PASSWORD}" --email "gitea@local.domain")
        if [[ "${GITEA_ADMIN_PASSWORD_MODE}" = initialOnlyRequireReset ]]; then
          create_args+=(--must-change-password=true)
        else
          create_args+=(--must-change-password=false)
        fi
        echo "No admin user '${GITEA_ADMIN_USERNAME}' found. Creating now..."
        gitea admin user create "${create_args[@]}"
        echo '...created.'
      else
        if [[ "${GITEA_ADMIN_PASSWORD_MODE}" = keepUpdated ]]; then
          echo "Admin account '${GITEA_ADMIN_USERNAME}' already exist. Running update to sync password..."
          local -a change_args
          change_args=(--username "${GITEA_ADMIN_USERNAME}" --password "${GITEA_ADMIN_PASSWORD}" --must-change-password=false)
          gitea admin user change-password "${change_args[@]}"
          echo '...password sync done.'
        else
          echo "Admin account '${GITEA_ADMIN_USERNAME}' already exist, but update mode is set to '${GITEA_ADMIN_PASSWORD_MODE}'. Skipping."
        fi
      fi
    }

    configure_admin_user

    function configure_ldap() {
      echo 'no ldap configuration... skipping.'
    }

    configure_ldap

    function configure_oauth() {
      echo 'no oauth configuration... skipping.'
    }

    configure_oauth

    echo '==== END GITEA CONFIGURATION ===='
---
# Source: forgejo/templates/gitea/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gitea-shared-storage
  namespace: "gitea"
  annotations:
    helm.sh/resource-policy: keep
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
---
# Source: forgejo/templates/gitea/http-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: forgejo-http
  namespace: "gitea"
  labels:
    helm.sh/chart: forgejo-0.0.0
    app: forgejo
    app.kubernetes.io/name: forgejo
    app.kubernetes.io/instance: forgejo
    app.kubernetes.io/version: "9.0.2"
    version: "9.0.2"
    app.kubernetes.io/managed-by: Helm
  annotations:
    {}
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: http
      port: 3000
      targetPort:
  selector:
    app.kubernetes.io/name: forgejo
    app.kubernetes.io/instance: forgejo
---
# Source: forgejo/templates/gitea/ssh-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: forgejo-ssh
  namespace: "gitea"
  labels:
    helm.sh/chart: forgejo-0.0.0
    app: forgejo
    app.kubernetes.io/name: forgejo
    app.kubernetes.io/instance: forgejo
    app.kubernetes.io/version: "9.0.2"
    version: "9.0.2"
    app.kubernetes.io/managed-by: Helm
  annotations:
    {}
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ports:
    - name: ssh
      port: 22
      targetPort: 2222
      protocol: TCP
      nodePort: 32222
  selector:
    app.kubernetes.io/name: forgejo
    app.kubernetes.io/instance: forgejo
---
# Source: forgejo/templates/gitea/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: forgejo
  namespace: "gitea"
  annotations:
  labels:
    helm.sh/chart: forgejo-0.0.0
    app: forgejo
    app.kubernetes.io/name: forgejo
    app.kubernetes.io/instance: forgejo
    app.kubernetes.io/version: "9.0.2"
    version: "9.0.2"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 100%
  selector:
    matchLabels:
      app.kubernetes.io/name: forgejo
      app.kubernetes.io/instance: forgejo
  template:
    metadata:
      annotations:
        checksum/config: bd40d1abc2e2692fafe0821e9345e231261f194a19fbee5aef7dd9cdfc106596
      labels:
        helm.sh/chart: forgejo-0.0.0
        app: forgejo
        app.kubernetes.io/name: forgejo
        app.kubernetes.io/instance: forgejo
        app.kubernetes.io/version: "9.0.2"
        version: "9.0.2"
        app.kubernetes.io/managed-by: Helm
    spec:
      securityContext:
        fsGroup: 1000
      initContainers:
        - name: init-directories
          image: "code.forgejo.org/forgejo/forgejo:9.0.2-rootless"
          imagePullPolicy: IfNotPresent
          command: ["/usr/sbin/init_directory_structure.sh"]
          env:
            - name: GITEA_APP_INI
              value: /data/gitea/conf/app.ini
            - name: GITEA_CUSTOM
              value: /data/gitea
            - name: GITEA_WORK_DIR
              value: /data
            - name: GITEA_TEMP
              value: /tmp/gitea
          volumeMounts:
            - name: init
              mountPath: /usr/sbin
            - name: temp
              mountPath: /tmp
            - name: data
              mountPath: /data
          securityContext:
            {}
          resources:
            limits: {}
            requests:
              cpu: 100m
              memory: 128Mi
        - name: init-app-ini
          image: "code.forgejo.org/forgejo/forgejo:9.0.2-rootless"
          imagePullPolicy: IfNotPresent
          command: ["/usr/sbin/config_environment.sh"]
          env:
            - name: GITEA_APP_INI
              value: /data/gitea/conf/app.ini
            - name: GITEA_CUSTOM
              value: /data/gitea
            - name: GITEA_WORK_DIR
              value: /data
            - name: GITEA_TEMP
              value: /tmp/gitea
          volumeMounts:
            - name: config
              mountPath: /usr/sbin
            - name: temp
              mountPath: /tmp
            - name: data
              mountPath: /data
            - name: inline-config-sources
              mountPath: /env-to-ini-mounts/inlines/
          securityContext:
            {}
          resources:
            limits: {}
            requests:
              cpu: 100m
              memory: 128Mi
        - name: configure-gitea
          image: "code.forgejo.org/forgejo/forgejo:9.0.2-rootless"
          command: ["/usr/sbin/configure_gitea.sh"]
          imagePullPolicy: IfNotPresent
          securityContext:
            runAsUser: 1000
          env:
            - name: GITEA_APP_INI
              value: /data/gitea/conf/app.ini
            - name: GITEA_CUSTOM
              value: /data/gitea
            - name: GITEA_WORK_DIR
              value: /data
            - name: GITEA_TEMP
              value: /tmp/gitea
            - name: HOME
              value: /data/gitea/git
            - name: GITEA_ADMIN_USERNAME
              valueFrom:
                secretKeyRef:
                  key: username
                  name: gitea-credential
            - name: GITEA_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: password
                  name: gitea-credential
            - name: GITEA_ADMIN_PASSWORD_MODE
              value: keepUpdated
          volumeMounts:
            - name: init
              mountPath: /usr/sbin
            - name: temp
              mountPath: /tmp
            - name: data
              mountPath: /data
          resources:
            limits: {}
            requests:
              cpu: 100m
              memory: 128Mi
      terminationGracePeriodSeconds: 60
      containers:
        - name: forgejo
          image: "code.forgejo.org/forgejo/forgejo:9.0.2-rootless"
          imagePullPolicy: IfNotPresent
          env:
            # SSH Port values have to be set here as well for openssh configuration
            - name: SSH_LISTEN_PORT
              value: "2222"
            - name: SSH_PORT
              value: "22"
            - name: GITEA_APP_INI
              value: /data/gitea/conf/app.ini
            - name: GITEA_CUSTOM
              value: /data/gitea
            - name: GITEA_WORK_DIR
              value: /data
            - name: GITEA_TEMP
              value: /tmp/gitea
            - name: TMPDIR
              value: /tmp/gitea
            - name: HOME
              value: /data/gitea/git
          ports:
            - name: ssh
              containerPort: 2222
            - name: http
              containerPort: 3000
          livenessProbe:
            failureThreshold: 10
            initialDelaySeconds: 200
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: http
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: http
            timeoutSeconds: 1
          resources:
            {}
          securityContext:
            {}
          volumeMounts:
            - name: temp
              mountPath: /tmp
            - name: data
              mountPath: /data
      volumes:
        - name: init
          secret:
            secretName: forgejo-init
            defaultMode: 110
        - name: config
          secret:
            secretName: forgejo
            defaultMode: 110
        - name: inline-config-sources
          secret:
            secretName: forgejo-inline-config
        - name: temp
          emptyDir: {}
        - name: data
          persistentVolumeClaim:
            claimName: gitea-shared-storage


@ -1,55 +0,0 @@
redis-cluster:
  enabled: false
postgresql:
  enabled: false
postgresql-ha:
  enabled: false

persistence:
  enabled: true
  size: 5Gi

test:
  enabled: false

gitea:
  admin:
    existingSecret: gitea-credential
  config:
    database:
      DB_TYPE: sqlite3
    session:
      PROVIDER: memory
    cache:
      ADAPTER: memory
    queue:
      TYPE: level
    server:
      DOMAIN: 'gitea.{{{ .Env.DOMAIN }}}'
      ROOT_URL: 'https://gitea.{{{ .Env.DOMAIN }}}:443'

service:
  ssh:
    type: NodePort
    nodePort: 32222
    externalTrafficPolicy: Local

image:
  pullPolicy: "IfNotPresent"
  # Overrides the image tag whose default is the chart appVersion.
  #tag: "8.0.3"
  # Adds -rootless suffix to image name
  rootless: true

forgejo:
  runner:
    enabled: true
    image:
      tag: latest
    # replicas: 3
    config:
      runner:
        labels:
          - docker:docker://node:16-bullseye
          - self-hosted:docker://ghcr.io/catthehacker/ubuntu:act-22.04
          - ubuntu-22.04:docker://ghcr.io/catthehacker/ubuntu:act-22.04


@ -1,22 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ingress-apps
  namespace: argocd
  labels:
    example: ref-implementation
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    server: "https://kubernetes.default.svc"
  source:
    repoURL: https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder
    targetRevision: HEAD
    path: "stacks/core/ingress-apps"
  project: default
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true


@ -1,31 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
  name: argo-workflows-ingress
  namespace: argo
spec:
  ingressClassName: nginx
  rules:
    - host: localhost
      http:
        paths:
          - backend:
              service:
                name: argo-server
                port:
                  name: web
            path: /argo-workflows(/|$)(.*)
            pathType: ImplementationSpecific
    - host: {{{ .Env.DOMAIN }}}
      http:
        paths:
          - backend:
              service:
                name: argo-server
                port:
                  name: web
            path: /argo-workflows(/|$)(.*)
            pathType: ImplementationSpecific


@ -1,28 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backstage
  namespace: backstage
spec:
  ingressClassName: nginx
  rules:
    - host: localhost
      http:
        paths:
          - backend:
              service:
                name: backstage
                port:
                  name: http
            path: /
            pathType: Prefix
    - host: {{{ .Env.DOMAIN }}}
      http:
        paths:
          - backend:
              service:
                name: backstage
                port:
                  name: http
            path: /
            pathType: Prefix


@ -1,18 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fibonacci-service
  namespace: fibonacci-app
spec:
  ingressClassName: nginx
  rules:
    - host: {{{ .Env.DOMAIN }}}
      http:
        paths:
          - backend:
              service:
                name: fibonacci-service
                port:
                  number: 9090
            path: /fibonacci
            pathType: Prefix


@ -1,28 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak-ingress-localhost
  namespace: keycloak
spec:
  ingressClassName: nginx
  rules:
    - host: localhost
      http:
        paths:
          - backend:
              service:
                name: keycloak
                port:
                  name: http
            path: /keycloak
            pathType: ImplementationSpecific
    - host: {{{ .Env.DOMAIN }}}
      http:
        paths:
          - backend:
              service:
                name: keycloak
                port:
                  name: http
            path: /keycloak
            pathType: ImplementationSpecific


@ -1,18 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kube-prometheus-stack-grafana
  namespace: monitoring
spec:
  ingressClassName: nginx
  rules:
    - host: {{{ .Env.DOMAIN }}}
      http:
        paths:
          - backend:
              service:
                name: kube-prometheus-stack-grafana
                port:
                  number: 80
            path: /grafana
            pathType: Prefix


@ -1,24 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio-console
  namespace: minio-backup
  {{{ if eq .Env.CLUSTER_TYPE "osc" }}}
  annotations:
    dns.gardener.cloud/class: garden
    dns.gardener.cloud/dnsnames: minio-backup.{{{ .Env.DOMAIN }}}
    dns.gardener.cloud/ttl: "600"
  {{{ end }}}
spec:
  ingressClassName: nginx
  rules:
    - host: minio-backup.{{{ .Env.DOMAIN }}}
      http:
        paths:
          - backend:
              service:
                name: minio-console
                port:
                  number: 9001
            path: /
            pathType: Prefix


@ -1,24 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: openbao
  namespace: openbao
  {{{ if eq .Env.CLUSTER_TYPE "osc" }}}
  annotations:
    dns.gardener.cloud/class: garden
    dns.gardener.cloud/dnsnames: openbao.{{{ .Env.DOMAIN }}}
    dns.gardener.cloud/ttl: "600"
  {{{ end }}}
spec:
  ingressClassName: nginx
  rules:
    - host: openbao.{{{ .Env.DOMAIN }}}
      http:
        paths:
          - backend:
              service:
                name: openbao
                port:
                  number: 8200
            path: /
            pathType: Prefix


@ -1,816 +0,0 @@
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.11.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: "1.11.3"
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.11.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: "1.11.3"
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: "true"
  proxy-buffer-size: "32k"
  use-forwarded-headers: "true"
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.11.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: "1.11.3"
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - list
      - watch
      - get
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.11.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: "1.11.3"
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.11.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: "1.11.3"
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  # Omit Ingress status permissions if `--update-status` is disabled.
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    resourceNames:
      - ingress-nginx-leader
    verbs:
      - get
      - update
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
- get
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-metrics.yaml
apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller-metrics
namespace: ingress-nginx
spec:
type: ClusterIP
ports:
- name: metrics
port: 10254
protocol: TCP
targetPort: metrics
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller-admission
namespace: ingress-nginx
spec:
type: ClusterIP
ports:
- name: https-webhook
port: 443
targetPort: webhook
appProtocol: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: NodePort
ipFamilyPolicy: SingleStack
ipFamilies:
- IPv4
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
appProtocol: http
- name: https
port: 443
protocol: TCP
targetPort: https
appProtocol: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
replicas: 1
revisionHistoryLimit: 10
strategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
minReadySeconds: 0
template:
metadata:
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
spec:
dnsPolicy: ClusterFirst
containers:
- name: controller
image: registry.k8s.io/ingress-nginx/controller:v1.11.3@sha256:d56f135b6462cfc476447cfe564b83a45e8bb7da2774963b00d12161112270b7
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
args:
- /nginx-ingress-controller
- --election-id=ingress-nginx-leader
- --controller-class=k8s.io/ingress-nginx
- --ingress-class=nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
- --enable-ssl-passthrough
- --publish-status-address=localhost
securityContext:
runAsNonRoot: true
runAsUser: 101
allowPrivilegeEscalation: false
seccompProfile:
type: RuntimeDefault
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
readOnlyRootFilesystem: false
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
ports:
- name: http
containerPort: 80
protocol: TCP
hostPort: 80
- name: https
containerPort: 443
protocol: TCP
hostPort: 443
- name: metrics
containerPort: 10254
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
volumeMounts:
- name: webhook-cert
mountPath: /usr/local/certificates/
readOnly: true
resources:
requests:
cpu: 100m
memory: 90Mi
nodeSelector:
ingress-ready: "true"
kubernetes.io/os: linux
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Equal
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
operator: Equal
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 0
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: nginx
spec:
controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/controller-poddisruptionbudget.yaml
# PDB is not supported for DaemonSets.
# https://github.com/kubernetes/kubernetes/issues/108124
---
# Source: ingress-nginx/templates/controller-servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
release: ingress-nginx
spec:
endpoints:
- port: metrics
interval: 30s
namespaceSelector:
matchNames:
- ingress-nginx
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
annotations:
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
name: ingress-nginx-admission
webhooks:
- name: validate.nginx.ingress.kubernetes.io
matchPolicy: Equivalent
rules:
- apiGroups:
- networking.k8s.io
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- ingresses
failurePolicy: Fail
sideEffects: None
admissionReviewVersions:
- v1
clientConfig:
service:
name: ingress-nginx-controller-admission
namespace: ingress-nginx
port: 443
path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: ingress-nginx-admission
namespace: ingress-nginx
annotations:
"helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ingress-nginx-admission
annotations:
"helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ingress-nginx-admission
annotations:
"helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: ingress-nginx-admission
namespace: ingress-nginx
annotations:
"helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ingress-nginx-admission
namespace: ingress-nginx
annotations:
"helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-create
namespace: ingress-nginx
annotations:
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
template:
metadata:
name: ingress-nginx-admission-create
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: create
image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f
imagePullPolicy: IfNotPresent
args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
- --namespace=$(POD_NAMESPACE)
- --secret-name=ingress-nginx-admission
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 65532
seccompProfile:
type: RuntimeDefault
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
nodeSelector:
kubernetes.io/os: linux
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-patch
namespace: ingress-nginx
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
template:
metadata:
name: ingress-nginx-admission-patch
labels:
helm.sh/chart: ingress-nginx-4.11.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "1.11.3"
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: patch
image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f
imagePullPolicy: IfNotPresent
args:
- patch
- --webhook-name=ingress-nginx-admission
- --namespace=$(POD_NAMESPACE)
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 65532
seccompProfile:
type: RuntimeDefault
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
nodeSelector:
kubernetes.io/os: linux


@ -1,49 +0,0 @@
controller:
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
ingressClassResource:
name: nginx
# added for idpbuilder
allowSnippetAnnotations: true
# added for idpbuilder
config:
proxy-buffer-size: 32k
use-forwarded-headers: "true"
# monitoring nginx
metrics:
enabled: true
serviceMonitor:
additionalLabels:
release: "ingress-nginx"
enabled: true
{{{ if eq .Env.CLUSTER_TYPE "kind" }}}
hostPort:
enabled: true
terminationGracePeriodSeconds: 0
service:
type: NodePort
nodeSelector:
ingress-ready: "true"
tolerations:
- key: "node-role.kubernetes.io/master"
operator: "Equal"
effect: "NoSchedule"
- key: "node-role.kubernetes.io/control-plane"
operator: "Equal"
effect: "NoSchedule"
publishService:
enabled: false
extraArgs:
publish-status-address: localhost
# added for idpbuilder
enable-ssl-passthrough: ""
{{{ end }}}


@ -1,7 +1,7 @@
 apiVersion: argoproj.io/v1alpha1
 kind: Application
 metadata:
-  name: backstage
+  name: forgejo-runner
   namespace: argocd
   labels:
     env: dev
@ -9,17 +9,16 @@ metadata:
     - resources-finalizer.argocd.argoproj.io
 spec:
   project: default
-  source:
-    repoURL: https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder
-    targetRevision: HEAD
-    path: "stacks/ref-implementation/backstage/manifests"
-  destination:
-    server: "https://kubernetes.default.svc"
-    namespace: backstage
   syncPolicy:
-    syncOptions:
-      - CreateNamespace=true
     automated:
       selfHeal: true
+    syncOptions:
+      - CreateNamespace=true
     retry:
       limit: -1
+  destination:
+    server: "https://kubernetes.default.svc"
+  source:
+    repoURL: https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}
+    targetRevision: HEAD
+    path: "{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/forgejo/forgejo-runner"


@ -0,0 +1,104 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: forgejo-runner
name: forgejo-runner
namespace: gitea
spec:
# Two replicas would mean that if one is busy, the other can pick up jobs; this setup runs a single replica.
replicas: 1
selector:
matchLabels:
app: forgejo-runner
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: forgejo-runner
spec:
restartPolicy: Always
volumes:
- name: docker-certs
emptyDir: {}
- name: runner-data
emptyDir: {}
# Initialise our configuration file using offline registration
# https://forgejo.org/docs/v1.21/admin/actions/#offline-registration
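# NOTE: this assumes the forgejo-runner-token secret (consumed as RUNNER_SECRET below) was created beforehand with a shared offline-registration token.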
initContainers:
- name: runner-register
image: code.forgejo.org/forgejo/runner:6.3.1
command:
- "sh"
- "-c"
- |
forgejo-runner \
register \
--no-interactive \
--token ${RUNNER_SECRET} \
--name ${RUNNER_NAME} \
--instance ${FORGEJO_INSTANCE_URL} \
--labels docker:docker://node:20-bookworm,ubuntu-22.04:docker://edp.buildth.ing/devfw-cicd/catthehackerubuntu:act-22.04,ubuntu-latest:docker://edp.buildth.ing/devfw-cicd/catthehackerubuntu:act-22.04
env:
- name: RUNNER_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: RUNNER_SECRET
valueFrom:
secretKeyRef:
name: forgejo-runner-token
key: token
- name: FORGEJO_INSTANCE_URL
value: https://{{{ .Env.DOMAIN_GITEA }}}
volumeMounts:
- name: runner-data
mountPath: /data
containers:
- name: runner
image: code.forgejo.org/forgejo/runner:6.3.1
command:
- "sh"
- "-c"
- |
while ! nc -z 127.0.0.1 2376 </dev/null; do
echo 'waiting for docker daemon...';
sleep 5;
done
forgejo-runner generate-config > config.yml ;
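# Patch the generated config: allow privileged containers, use host networking, and expose the dind TLS client certs to job containers.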
sed -i -e "s|privileged: .*|privileged: true|" config.yml
sed -i -e "s|network: .*|network: host|" config.yml ;
sed -i -e "s|^ envs:$$| envs:\n DOCKER_HOST: tcp://127.0.0.1:2376\n DOCKER_TLS_VERIFY: 1\n DOCKER_CERT_PATH: /certs/client|" config.yml ;
sed -i -e "s|^ options:| options: -v /certs/client:/certs/client|" config.yml ;
sed -i -e "s| valid_volumes: \[\]$$| valid_volumes:\n - /certs/client|" config.yml ;
/bin/forgejo-runner --config config.yml daemon
securityContext:
allowPrivilegeEscalation: true
privileged: true
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
env:
- name: DOCKER_HOST
value: tcp://localhost:2376
- name: DOCKER_CERT_PATH
value: /certs/client
- name: DOCKER_TLS_VERIFY
value: "1"
volumeMounts:
- name: docker-certs
mountPath: /certs
- name: runner-data
mountPath: /data
- name: daemon
image: docker:28.0.4-dind
env:
- name: DOCKER_TLS_CERTDIR
value: /certs
securityContext:
privileged: true
volumeMounts:
- name: docker-certs
mountPath: /certs


@ -0,0 +1,38 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: forgejo-server
namespace: argocd
labels:
env: dev
spec:
project: default
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true
retry:
limit: -1
destination:
name: in-cluster
namespace: gitea
sources:
- repoURL: https://edp.buildth.ing/DevFW-CICD/forgejo-helm.git
path: .
# first check out the desired version (example v9.0.0): https://code.forgejo.org/forgejo-helm/forgejo-helm/src/tag/v9.0.0/Chart.yaml
# (note that the chart version is not the same as the forgejo application version, which is specified in the above Chart.yaml file)
# then use the devops pipeline and select development, forgejo and the desired version (example v9.0.0):
# https://edp.buildth.ing/DevFW-CICD/devops-pipelines/actions?workflow=update-helm-depends.yaml&actor=0&status=0
# finally, update the desired version here and append "-depends"; that tag is created by the devops pipeline.
# why the added "-depends" tag? it avoids rate limiting when downloading Helm OCI dependencies
targetRevision: v12.0.0-depends
helm:
valueFiles:
- $values/{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/forgejo/forgejo-server/values.yaml
- repoURL: https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}
targetRevision: HEAD
ref: values
- repoURL: https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}
targetRevision: HEAD
path: "{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/forgejo/forgejo-server/manifests"


@ -4,27 +4,28 @@ metadata:
   annotations:
     nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
     nginx.ingress.kubernetes.io/proxy-body-size: 512m
+    cert-manager.io/cluster-issuer: main
     {{{ if eq .Env.CLUSTER_TYPE "osc" }}}
     dns.gardener.cloud/class: garden
-    dns.gardener.cloud/dnsnames: gitea.{{{ .Env.DOMAIN }}}
+    dns.gardener.cloud/dnsnames: {{{ .Env.DOMAIN_GITEA }}}
     dns.gardener.cloud/ttl: "600"
     {{{ end }}}
-  name: forgejo
+  name: forgejo-server
   namespace: gitea
 spec:
   ingressClassName: nginx
   rules:
-    - host: gitea.{{{ .Env.DOMAIN }}}
+    - host: {{{ .Env.DOMAIN_GITEA }}}
       http:
         paths:
           - backend:
               service:
-                name: forgejo-http
+                name: forgejo-server-http
                 port:
                   number: 3000
             path: /
             pathType: Prefix
   tls:
     - hosts:
-        - gitea.{{{ .Env.DOMAIN }}}
+        - {{{ .Env.DOMAIN_GITEA }}}
       secretName: forgejo-net-tls


@ -0,0 +1,180 @@
# We use Recreate to make sure only one instance with one version is running, because otherwise Forgejo might break or data might become inconsistent.
strategy:
type: Recreate
redis-cluster:
enabled: false
redis:
enabled: false
postgresql:
enabled: false
postgresql-ha:
enabled: false
persistence:
enabled: true
size: 200Gi
annotations:
everest.io/crypt-key-id: {{{ .Env.PVC_KMS_KEY_ID }}}
test:
enabled: false
deployment:
env:
- name: SSL_CERT_DIR
value: /etc/ssl/forgejo
extraVolumeMounts:
- mountPath: /etc/ssl/forgejo
name: custom-database-certs-volume
readOnly: true
extraVolumes:
- name: custom-database-certs-volume
secret:
secretName: custom-database-certs
gitea:
additionalConfigFromEnvs:
- name: FORGEJO__storage__MINIO_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: forgejo-cloud-credentials
key: access-key
- name: FORGEJO__storage__MINIO_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: forgejo-cloud-credentials
key: secret-key
- name: FORGEJO__queue__CONN_STR
valueFrom:
secretKeyRef:
name: redis-forgejo-cloud-credentials
key: connection-string
- name: FORGEJO__session__PROVIDER_CONFIG
valueFrom:
secretKeyRef:
name: redis-forgejo-cloud-credentials
key: connection-string
- name: FORGEJO__cache__HOST
valueFrom:
secretKeyRef:
name: redis-forgejo-cloud-credentials
key: connection-string
- name: FORGEJO__database__HOST
valueFrom:
secretKeyRef:
name: postgres-forgejo-cloud-credentials
key: host_port
- name: FORGEJO__database__NAME
valueFrom:
secretKeyRef:
name: postgres-forgejo-cloud-credentials
key: database
- name: FORGEJO__database__USER
valueFrom:
secretKeyRef:
name: postgres-forgejo-cloud-credentials
key: username
- name: FORGEJO__database__PASSWD
valueFrom:
secretKeyRef:
name: postgres-forgejo-cloud-credentials
key: password
- name: FORGEJO__indexer__ISSUE_INDEXER_CONN_STR
valueFrom:
secretKeyRef:
name: elasticsearch-cloud-credentials
key: connection-string
- name: FORGEJO__mailer__PASSWD
valueFrom:
secretKeyRef:
name: email-user-credentials
key: connection-string
admin:
existingSecret: gitea-credential
config:
APP_NAME: 'EDP'
APP_SLOGAN: 'Build your thing in minutes'
indexer:
ISSUE_INDEXER_ENABLED: true
ISSUE_INDEXER_TYPE: elasticsearch
# TODO next
REPO_INDEXER_ENABLED: false
# REPO_INDEXER_TYPE: meilisearch # not yet working
storage:
MINIO_ENDPOINT: obs.eu-de.otc.t-systems.com:443
STORAGE_TYPE: minio
MINIO_LOCATION: eu-de
MINIO_BUCKET: edp-forgejo-{{{ .Env.CLUSTER_ENVIRONMENT }}}
MINIO_USE_SSL: true
queue:
TYPE: redis
session:
PROVIDER: redis
cache:
ENABLED: true
ADAPTER: redis
service:
DISABLE_REGISTRATION: true
other:
SHOW_FOOTER_VERSION: false
SHOW_FOOTER_TEMPLATE_LOAD_TIME: false
database:
DB_TYPE: postgres
SSL_MODE: verify-ca
server:
DOMAIN: '{{{ .Env.DOMAIN_GITEA }}}'
ROOT_URL: 'https://{{{ .Env.DOMAIN_GITEA }}}:443'
mailer:
ENABLED: true
USER: ipcei-cis-devfw@mms-support.de
PROTOCOL: smtps
FROM: '"IPCEI CIS DevFW" <ipcei-cis-devfw@mms-support.de>'
SMTP_ADDR: mail.mms-support.de
SMTP_PORT: 465
service:
ssh:
type: LoadBalancer
nodePort: 32222
externalTrafficPolicy: Cluster
annotations:
kubernetes.io/elb.id: {{{ .Env.LOADBALANCER_ID }}}
image:
pullPolicy: "IfNotPresent"
# Overrides the image tag whose default is the chart appVersion.
#tag: "8.0.3"
# Adds -rootless suffix to image name
# rootless: true
fullOverride: forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/devfw-cicd/edp-forgejo:prerelease-v11-0-1-rootless
forgejo:
runner:
enabled: true
image:
tag: latest
# replicas: 3
config:
runner:
labels:
- docker:docker://node:16-bullseye
- self-hosted:docker://ghcr.io/catthehacker/ubuntu:act-22.04
- ubuntu-22.04:docker://ghcr.io/catthehacker/ubuntu:act-22.04
- ubuntu-latest:docker://ghcr.io/catthehacker/ubuntu:act-22.04


@ -1,126 +0,0 @@
# Local Backup with Velero and Minio
This example is adapted from the original idpbuilder stack.
Two significant changes were made from the original:
* disabled `hostPath` mount to persist backups within kind, since backups do not work sufficiently in this example due to PVC issues, see below.
* renamed `minio` namespace to `minio-backup` so it does not collide with other minio examples.
Within kind, it can only back up Kubernetes objects. Data from PVCs is skipped, see below why.
[Velero](https://velero.io/) requires a compatible storage provider as its backup target. This local installation uses [MinIO](https://min.io/) as an example.
However, MinIO is not officially supported by Velero; it works due to its S3 compatibility.
The current setup does NOT persist backups but stores them in MinIO's PVCs. Proper backups should configure external storage, see [Supported Providers](https://velero.io/docs/main/supported-providers/).
## Installation
The stack is installed as part of the `./example.sh` run.
In order to persist a local backup you have to mount a local directory within `main.go`:
```yaml
nodes:
- role: control-plane
extraMounts:
- hostPath: /some/path/backup # replace with your own path
containerPath: /backup
```
Kind creates the directory on the host, but you might have to adjust its permissions; otherwise the minio pod fails to start.
## Using it
After the installation, velero and minio should be visible in ArgoCD.
During the installation, credentials for minio are generated and shared with velero. You can access them manually:
```bash
kubectl -n minio-backup get secret root-creds -o go-template='{{ range $key, $value := .data }}{{ printf "%s: %s\n" $key ($value | base64decode) }}{{ end }}'
# example output
# rootPassword: aKKZzLnyry6OYZts17vMTf32H5ghFL4WYgu6bHujm
# rootUser: ge8019yksArb7BICt3MLY9
```
A bucket in minio was created, and velero uses it for its backups by default; see the helm `values.yaml` files.
### Backup and Restore
Backups and subsequent restores can be scheduled by either using the velero cli or by creating CRD objects.
Check the `./demo` directory for equivalent CRD manifests.
Create a backup of the backstage namespace; see the `schedule` task for more permanent setups:
```shell
velero backup create backstage-backup --include-namespaces backstage
```
There are more options to create a fine-grained backup and to set the backup storage.
See velero's docs for details.
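For recurring backups, a Velero `Schedule` object can be used instead of ad-hoc backups. A minimal sketch, assuming the default backup storage location; the name and cron expression are illustrative:
```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: backstage-nightly   # illustrative name
  namespace: velero
spec:
  schedule: "0 2 * * *"     # every night at 02:00
  template:
    includedNamespaces:
      - 'backstage'
```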
Check the backup with:
```shell
velero backup get
```
To get more details on the backup you need to be able to connect to velero's backup storage, i.e. minio.
Using `kubefwd` here helps a lot (this is not necessary for restore).
```shell
kubefwd services -n minio-backup
```
More details with `describe` and `logs`:
```shell
velero backup describe backstage-backup --details
velero backup logs backstage-backup
```
Restore the backup into the original namespace; you might want to delete the existing namespace beforehand:
```shell
kubectl delete namespace backstage
velero restore create --from-backup backstage-backup
```
When restoring, velero does not replace existing objects in the backup target.
ArgoCD does pick up on the changes and also validates that the backup is in sync.
## Issues with Persistent Volumes
Velero has no issue backing up Kubernetes objects like Deployments, ConfigMaps, etc., since they are just yaml/json definitions.
Volumes containing data are, however, more complex. The preferred type of backup is Kubernetes' VolumeSnapshots, as they consistently store the state
of a volume at a given point in time in an atomic action. Those snapshots live within the cluster and are subsequently downloaded into one of velero's
storage backends for safekeeping.
However, VolumeSnapshots are only possible on storage backends that support them via CSI drivers.
Backends like `nfs` or `hostPath` do NOT support them. Here, velero uses an alternative method
called [File System Backups](https://velero.io/docs/main/file-system-backup/).
In essence, this is a simple copy operation based on the file system. Even though
this uses more sophisticated tooling under the hood, i.e. kopia, it is not
possible to create a backup in an atomic transaction. Thus, the resulting backup
might be inconsistent.
Furthermore, for file system backups to work, velero installs a node-agent as a
DaemonSet on each Kubernetes node. The agent is aware of the node's internal
storage and accesses the directories on the host directly to copy the files.
This is not supported for hostPath volumes as they mount an arbitrary path
on the host. In theory, a backup is possible, but it is intentionally skipped due to the extra config and security
considerations. Kind's local-path provider storage uses
a hostPath and is thus not supported for any kind of backup.
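On storage backends that do support file system backups, they can be requested per backup object. A minimal sketch, assuming velero's node-agent is deployed (see the commented `deployNodeAgent` option in the helm values); the name is illustrative:
```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backstage-fsb-backup   # illustrative name
  namespace: velero
spec:
  includedNamespaces:
    - 'backstage'
  defaultVolumesToFsBackup: true   # back up pod volumes via the file system instead of snapshots
```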
## TODOs
* The MinIO backup installation is only intended as an example and must either
be configured properly or replaced.
* The current example does not automatically schedule backups.
* The velero chart must be properly parameterized.


@ -1,9 +0,0 @@
# velero backup create backstage-backup --include-namespaces backstage
apiVersion: velero.io/v1
kind: Backup
metadata:
name: backstage-backup
namespace: velero
spec:
includedNamespaces:
- 'backstage'


@ -1,10 +0,0 @@
# velero restore create --from-backup backstage-backup
apiVersion: velero.io/v1
kind: Restore
metadata:
name: backstage-backup
namespace: velero
spec:
backupName: backstage-backup
includedNamespaces:
- 'backstage'


@ -1,33 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: minio
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
sources:
- repoURL: 'https://charts.min.io'
targetRevision: 5.0.15
helm:
releaseName: minio
valueFiles:
- $values/stacks/local-backup/minio/helm/values.yaml
chart: minio
- repoURL: https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder
targetRevision: HEAD
ref: values
- repoURL: https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder
targetRevision: HEAD
path: "stacks/local-backup/minio/manifests"
destination:
server: "https://kubernetes.default.svc"
namespace: minio-backup
syncPolicy:
syncOptions:
- CreateNamespace=true
automated:
selfHeal: true


@ -1,17 +0,0 @@
replicas: 1
mode: standalone
resources:
requests:
memory: 128Mi
persistence:
enabled: true
storageClass: standard
size: 512Mi
# volumeName: backup # re-enable this to mount a local host path, see minio-pv.yaml
buckets:
- name: edfbuilder-backups
existingSecret: root-creds


@ -1,13 +0,0 @@
# re-enable this config to mount a local host path, see `../helm/values.yaml`
# apiVersion: v1
# kind: PersistentVolume
# metadata:
# name: backup
# spec:
# storageClassName: standard
# accessModes:
# - ReadWriteOnce
# capacity:
# storage: 512Mi
# hostPath:
# path: /backup


@ -1,154 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: secret-sync
namespace: minio-backup
annotations:
argocd.argoproj.io/hook: Sync
argocd.argoproj.io/sync-wave: "-20"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: secret-sync
namespace: minio-backup
annotations:
argocd.argoproj.io/hook: Sync
argocd.argoproj.io/sync-wave: "-20"
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: secret-sync
namespace: minio-backup
annotations:
argocd.argoproj.io/hook: Sync
argocd.argoproj.io/sync-wave: "-20"
subjects:
- kind: ServiceAccount
name: secret-sync
namespace: minio-backup
roleRef:
kind: Role
name: secret-sync
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: secret-sync
namespace: velero
annotations:
argocd.argoproj.io/hook: Sync
argocd.argoproj.io/sync-wave: "-20"
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: secret-sync
namespace: velero
annotations:
argocd.argoproj.io/hook: Sync
argocd.argoproj.io/sync-wave: "-20"
subjects:
- kind: ServiceAccount
name: secret-sync
namespace: minio-backup
roleRef:
kind: Role
name: secret-sync
apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: Job
metadata:
name: secret-sync
namespace: minio-backup
annotations:
argocd.argoproj.io/hook: PostSync
spec:
template:
metadata:
generateName: secret-sync
spec:
serviceAccountName: secret-sync
restartPolicy: Never
containers:
- name: kubectl
image: docker.io/bitnami/kubectl
command: ["/bin/bash", "-c"]
args:
- |
set -e
kubectl get secrets -n minio-backup root-creds -o json > /tmp/secret
ACCESS=$(jq -r '.data.rootUser | @base64d' /tmp/secret)
SECRET=$(jq -r '.data.rootPassword | @base64d' /tmp/secret)
echo \
"apiVersion: v1
kind: Secret
metadata:
name: secret-key
namespace: velero
type: Opaque
stringData:
aws: |
[default]
aws_access_key_id=${ACCESS}
aws_secret_access_key=${SECRET}
" > /tmp/secret.yaml
kubectl apply -f /tmp/secret.yaml
---
apiVersion: batch/v1
kind: Job
metadata:
name: minio-root-creds
namespace: minio-backup
annotations:
argocd.argoproj.io/hook: Sync
argocd.argoproj.io/sync-wave: "-10"
spec:
template:
metadata:
generateName: minio-root-creds
spec:
serviceAccountName: secret-sync
restartPolicy: Never
containers:
- name: kubectl
image: docker.io/bitnami/kubectl
command: ["/bin/bash", "-c"]
args:
- |
kubectl get secrets -n minio-backup root-creds
if [ $? -eq 0 ]; then
exit 0
fi
set -e
NAME=$(openssl rand -base64 24)
PASS=$(openssl rand -base64 36)
echo \
"apiVersion: v1
kind: Secret
metadata:
name: root-creds
namespace: minio-backup
type: Opaque
stringData:
rootUser: "${NAME}"
rootPassword: "${PASS}"
" > /tmp/secret.yaml
kubectl apply -f /tmp/secret.yaml


@ -1,31 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: velero
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
sources:
- repoURL: 'https://vmware-tanzu.github.io/helm-charts'
targetRevision: 8.0.0
helm:
releaseName: velero
valueFiles:
- $values/stacks/local-backup/velero/helm/values.yaml
chart: velero
- repoURL: https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder
targetRevision: HEAD
ref: values
destination:
server: "https://kubernetes.default.svc"
namespace: velero
syncPolicy:
syncOptions:
- CreateNamespace=true
automated:
prune: true
selfHeal: true


@ -1,25 +0,0 @@
resources:
requests:
memory: 128Mi
initContainers:
- name: velero-plugin-for-aws
image: velero/velero-plugin-for-aws:v1.11.0
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /target
name: plugins
# snapshotsEnabled: false # create snapshot crd?
# deployNodeAgent: true # install node agent as daemonset for file system backups?
configuration:
# defaultVolumesToFsBackup: true # backup pod volumes via fsb without explicit annotation?
backupStorageLocation:
- name: default
provider: aws
bucket: edfbuilder-backups
credential:
name: secret-key # this key is created within the minio-backup/secret-sync and injected into the velero namespace
key: aws
config:
region: minio
s3Url: http://minio.minio-backup.svc.cluster.local:9000 # internal resolution, external access for velero cli via fwd
s3ForcePathStyle: "true"


@ -1,25 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: grafana-dashboards
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
repoURL: https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder
targetRevision: HEAD
path: "stacks/monitoring/kube-prometheus/dashboards"
destination:
server: "https://kubernetes.default.svc"
namespace: monitoring
syncPolicy:
syncOptions:
- CreateNamespace=true
automated:
selfHeal: true
retry:
limit: -1


@ -1,30 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: kube-prometheus-stack
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true # do not copy metadata, since (because of its large size) it can lead to sync failure
destination:
name: in-cluster
namespace: monitoring
sources:
- repoURL: https://github.com/prometheus-community/helm-charts
path: charts/kube-prometheus-stack
targetRevision: HEAD
helm:
valueFiles:
- $values/stacks/monitoring/kube-prometheus/values.yaml
- repoURL: https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder
targetRevision: HEAD
ref: values


@ -1,268 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-dashboard-1
labels:
grafana_dashboard: "1"
data:
k8s-dashboard-01.json: |
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": 1,
"links": [
],
"panels": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 0
},
"id": 5,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"expr": "{app=\"crossplane\"}",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: App crossplane",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 8
},
"id": 4,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"expr": "{app=\"argo-server\"}",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: App argo-server",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 16
},
"id": 3,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"expr": "{app=\"forgejo\"}",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: App forgejo",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 24
},
"id": 2,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"expr": "{app=\"backstage\"}",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: App backstage",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 32
},
"id": 1,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"expr": "{app=\"shoot-control-plane\"}",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: App shoot-control-plane",
"type": "logs"
}
],
"preload": false,
"schemaVersion": 40,
"tags": [
],
"templating": {
"list": [
]
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {
},
"timezone": "browser",
"title": "Loki Logs: Apps",
"uid": "ee4iuluru756of",
"version": 2,
"weekStart": ""
}


@ -1,845 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-dashboard-2
labels:
grafana_dashboard: "1"
data:
k8s-dashboard-02.json: |
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": 30,
"links": [
],
"panels": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 0
},
"id": 19,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"server\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component server",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 8
},
"id": 17,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"repo-server\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component repo-server",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 16
},
"id": 16,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"redis\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component redis",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 24
},
"id": 15,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"query-frontend\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component query-frontend",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 32
},
"id": 14,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"querier\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component querier",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 40
},
"id": 13,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"prometheus-operator-webhook\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component prometheus-operator-webhook",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 48
},
"id": 12,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"prometheus-operator\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component prometheus-operator",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 56
},
"id": 11,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"metrics\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component metrics",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 64
},
"id": 10,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"kube-scheduler\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component kube-scheduler",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 72
},
"id": 9,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"kube-controller-manager\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component kube-controller-manager",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 80
},
"id": 8,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"kube-apiserver\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component kube-apiserver",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 88
},
"id": 7,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"ingester\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component ingester",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 96
},
"id": 6,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"gateway\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component gateway",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 104
},
"id": 5,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"etcd\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component etcd",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 112
},
"id": 4,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"distributor\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component distributor",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 120
},
"id": 3,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"controller\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component controller",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 128
},
"id": 2,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"cloud-infrastructure-controller\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component cloud-infrastructure-controller",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 136
},
"id": 1,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{component=\"applicationset-controller\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Component application-controller",
"type": "logs"
}
],
"preload": false,
"schemaVersion": 40,
"tags": [
],
"templating": {
"list": [
]
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {
},
"timezone": "browser",
"title": "Loki Logs: Components",
"uid": "ae4zuyp1kui9sc",
"version": 2,
"weekStart": ""
}


@ -1,537 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-dashboard-3
labels:
grafana_dashboard: "1"
data:
k8s-dashboard-03.json: |
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": 31,
"links": [
],
"panels": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 0
},
"id": 11,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{container=\"repo-server\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Container repo-server",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 8
},
"id": 10,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{container=\"promtail\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Container promtail",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 16
},
"id": 9,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{container=\"prometheus\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Container prometheus",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 24
},
"id": 8,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{container=\"postgres\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Container postgres",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 32
},
"id": 7,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{container=\"kube-prometheus-stack\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Container kube-prometheus-stack",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 40
},
"id": 6,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{container=\"keycloak\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Container keycloak",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 48
},
"id": 5,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{container=\"grafana\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Container grafana",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 56
},
"id": 4,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{container=\"forgejo\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Container forgejo",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 64
},
"id": 3,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{container=\"crossplane\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Container crossplane",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 72
},
"id": 2,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{container=\"backstage\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Container backstage",
"type": "logs"
},
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"fieldConfig": {
"defaults": {
},
"overrides": [
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 80
},
"id": 1,
"options": {
"dedupStrategy": "none",
"enableLogDetails": true,
"prettifyLogMessage": false,
"showCommonLabels": false,
"showLabels": false,
"showTime": false,
"sortOrder": "Descending",
"wrapLogMessage": false
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "loki",
"uid": "P8E80F9AEF21F6940"
},
"editorMode": "builder",
"expr": "{container=\"argo-server\"} |= ``",
"queryType": "range",
"refId": "A"
}
],
"title": "Logs: Container argo-server",
"type": "logs"
}
],
"preload": false,
"schemaVersion": 40,
"tags": [
],
"templating": {
"list": [
]
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {
},
"timezone": "browser",
"title": "Loki Logs: Container",
"uid": "ee50bcaehmv40e",
"version": 2,
"weekStart": ""
}


@ -1,45 +0,0 @@
grafana:
namespaceOverride: "monitoring"
admin:
existingSecret: "kube-prometheus-stack-grafana-admin-password"
userKey: admin-user
passwordKey: admin-password
defaultDashboardsTimezone: Europe/Berlin
additionalDataSources:
- name: Loki
type: loki
url: http://loki-loki-distributed-gateway.monitoring:80
# syncPolicy:
# syncOptions:
# - ServerSideApply=true
sidecar:
dashboards:
enabled: true
label: grafana_dashboard
folder: /tmp/dashboards
updateIntervalSeconds: 10
folderAnnotation: grafana_folder
provider:
allowUiUpdates: true
foldersFromFilesStructure: true
grafana.ini:
server:
domain: {{{ .Env.DOMAIN }}}
root_url: "%(protocol)s://%(domain)s/grafana"
serve_from_sub_path: true
serviceMonitor:
# If true, a ServiceMonitor CRD is created for a prometheus operator https://github.com/coreos/prometheus-operator
enabled: true
#monitoring nginx
prometheus:
prometheusSpec:
podMonitorSelectorNilUsesHelmValues: false
serviceMonitorSelectorNilUsesHelmValues: false


@ -1,34 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: loki
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true
destination:
name: in-cluster
namespace: monitoring
sources:
- repoURL: https://github.com/grafana/helm-charts
path: charts/loki-distributed
targetRevision: HEAD
helm:
valueFiles:
- $values/stacks/monitoring/loki/values.yaml
- repoURL: https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder
targetRevision: HEAD
ref: values
## consider using the following version, if it works again
#- repoURL: https://github.com/grafana/loki
# path: production/helm/loki


@ -1,7 +0,0 @@
loki:
commonConfig:
replication_factor: 1
auth_enabled: false
# storageConfig:
# filesystem: null


@ -1,29 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: promtail
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true
destination:
name: in-cluster
namespace: monitoring
sources:
- repoURL: https://github.com/grafana/helm-charts
path: charts/promtail
targetRevision: HEAD
helm:
valueFiles:
- $values/stacks/monitoring/promtail/values.yaml
- repoURL: https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder
targetRevision: HEAD
ref: values


@ -1,45 +0,0 @@
# -- Overrides the chart's name
nameOverride: null
# -- Overrides the chart's computed fullname
fullnameOverride: null
global:
# -- Allow parent charts to override registry hostname
imageRegistry: ""
# -- Allow parent charts to override registry credentials
imagePullSecrets: []
daemonset:
# -- Deploys Promtail as a DaemonSet
enabled: true
autoscaling:
# -- Creates a VerticalPodAutoscaler for the daemonset
enabled: false
deployment:
# -- Deploys Promtail as a Deployment
enabled: false
config:
enabled: true
logLevel: info
logFormat: logfmt
serverPort: 3101
clients:
- url: http://loki-loki-distributed-gateway/loki/api/v1/push
scrape_configs:
- job_name: authlog
static_configs:
- targets:
- authlog
labels:
job: authlog
__path__: /logs/auth.log
- job_name: syslog
static_configs:
- targets:
- syslog
labels:
job: syslog
__path__: /logs/syslog


@ -0,0 +1,29 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: metrics-server
namespace: argocd
labels:
env: dev
spec:
project: default
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true
retry:
limit: -1
destination:
name: in-cluster
namespace: observability
sources:
- chart: metrics-server
repoURL: https://kubernetes-sigs.github.io/metrics-server/
targetRevision: 3.12.2
helm:
valueFiles:
- $values/{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/observability-client/metrics-server/values.yaml
- repoURL: https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}
targetRevision: HEAD
ref: values


@ -0,0 +1,4 @@
metrics:
enabled: true
serviceMonitor:
enabled: true
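
With these values the chart exposes its own metrics and renders a ServiceMonitor for them. A quick post-sync sanity check, as a sketch (assumes kubectl access and that the ServiceMonitor CRD is already installed by the observability stack):

```bash
# metrics-server is healthy when the metrics API answers:
kubectl top nodes

# the ServiceMonitor rendered by the chart should exist in the target namespace
# (the exact object name depends on the Helm release name):
kubectl -n observability get servicemonitors | grep metrics-server
```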


@ -0,0 +1,29 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: vector
namespace: argocd
labels:
env: dev
spec:
project: default
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true
retry:
limit: -1
destination:
name: in-cluster
namespace: observability
sources:
- chart: vector
repoURL: https://helm.vector.dev
targetRevision: 0.43.0
helm:
valueFiles:
- $values/{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/observability-client/vector/values.yaml
- repoURL: https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}
targetRevision: HEAD
ref: values


@ -0,0 +1,68 @@
# -- Role of the Vector deployment (Agent runs it as a node-level log collector)
role: Agent
dataDir: /vector-data-dir
resources: {}
args:
- -w
- --config-dir
- /etc/vector/
env:
- name: VECTOR_USER
valueFrom:
secretKeyRef:
name: simple-user-secret
key: username
- name: VECTOR_PASSWORD
valueFrom:
secretKeyRef:
name: simple-user-secret
key: password
containerPorts:
- name: prom-exporter
containerPort: 9090
protocol: TCP
service:
enabled: false
customConfig:
data_dir: /vector-data-dir
api:
enabled: false
address: 0.0.0.0:8686
playground: true
sources:
k8s:
type: kubernetes_logs
internal_metrics:
type: internal_metrics
transforms:
parser:
type: remap
inputs: [k8s]
source: |
._msg = parse_json(.message) ?? .message
del(.message)
# Add the cluster environment to the log event
.cluster_environment = "{{{ .Env.CLUSTER_ENVIRONMENT }}}"
sinks:
vlogs:
type: elasticsearch
inputs: [parser]
endpoints:
- https://{{{ .Env.DOMAIN_O12Y }}}/insert/elasticsearch/
auth:
strategy: basic
user: ${VECTOR_USER}
password: ${VECTOR_PASSWORD}
mode: bulk
api_version: v8
compression: gzip
healthcheck:
enabled: false
request:
headers:
AccountID: "0"
ProjectID: "0"
query:
_msg_field: _msg
_time_field: _time
_stream_fields: cluster_environment,kubernetes.container_name,kubernetes.namespace
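
The `parser` remap above is the key piece: it promotes JSON-encoded container logs to structured events and stamps each one with the cluster environment. An illustrative before/after for a single event, shown as comments (not an official CLI invocation; assumes `CLUSTER_ENVIRONMENT` renders to `prod`):

```bash
# before the remap (as emitted by the kubernetes_logs source):
#   {"message":"{\"level\":\"info\",\"msg\":\"listening\"}", ...}
# after the remap:
#   {"_msg":{"level":"info","msg":"listening"},"cluster_environment":"prod", ...}
#
# a non-JSON message falls through unchanged thanks to the `??` error-coalescing:
#   {"message":"plain text line"}  ->  {"_msg":"plain text line","cluster_environment":"prod"}
```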


@ -0,0 +1,30 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: vm-client
namespace: argocd
labels:
env: dev
spec:
project: default
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true
destination:
name: in-cluster
namespace: observability
sources:
- chart: victoria-metrics-k8s-stack
repoURL: https://victoriametrics.github.io/helm-charts/
targetRevision: 0.48.1
helm:
valueFiles:
- $values/{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/observability-client/vm-client-stack/values.yaml
- repoURL: https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}
targetRevision: HEAD
ref: values
- repoURL: https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}
targetRevision: HEAD
path: "{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/observability-client/vm-client-stack/manifests"


@ -0,0 +1,9 @@
apiVersion: v1
kind: Secret
metadata:
name: simple-user-secret
namespace: observability
type: Opaque
stringData:
username: simple-user
password: simple-password

File diff suppressed because it is too large


@ -0,0 +1,25 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: grafana-operator
namespace: argocd
labels:
env: dev
spec:
project: default
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true
destination:
name: in-cluster
namespace: observability
sources:
- chart: grafana-operator
repoURL: ghcr.io/grafana/helm-charts
targetRevision: v5.18.0
- repoURL: https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}
targetRevision: HEAD
path: "{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/observability/grafana-operator/manifests"


@ -0,0 +1,9 @@
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
name: argocd
spec:
instanceSelector:
matchLabels:
dashboards: "grafana"
url: "https://raw.githubusercontent.com/argoproj/argo-cd/refs/heads/master/examples/dashboard.json"


@ -0,0 +1,36 @@
apiVersion: grafana.integreatly.org/v1beta1
kind: Grafana
metadata:
name: grafana
labels:
dashboards: "grafana"
spec:
persistentVolumeClaim:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
ingress:
metadata:
annotations:
cert-manager.io/cluster-issuer: main
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
ingressClassName: nginx
rules:
- host: grafana.{{{ .Env.DOMAIN }}}
http:
paths:
- backend:
service:
name: grafana-service
port:
number: 3000
path: /
pathType: Prefix
tls:
- hosts:
- grafana.{{{ .Env.DOMAIN }}}
secretName: grafana-net-tls
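
Once the grafana-operator reconciles this resource, it creates the deployment, the `grafana-service` Service referenced by the ingress, and the TLS certificate via the `main` cluster issuer. A verification sketch (the observability namespace and the example domain are assumptions):

```bash
kubectl -n observability get grafana grafana     # operator status for the instance
kubectl -n observability get ingress             # expect host grafana.<DOMAIN> with secret grafana-net-tls
curl -I https://grafana.example.com/login        # expect HTTP 200 once the certificate is issued
```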


@ -0,0 +1,9 @@
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
name: ingress-nginx
spec:
instanceSelector:
matchLabels:
dashboards: "grafana"
url: "https://raw.githubusercontent.com/adinhodovic/ingress-nginx-mixin/refs/heads/main/dashboards_out/ingress-nginx-overview.json"


@ -0,0 +1,9 @@
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
name: victoria-logs
spec:
instanceSelector:
matchLabels:
dashboards: "grafana"
url: "https://raw.githubusercontent.com/VictoriaMetrics/VictoriaMetrics/refs/heads/master/dashboards/vm/victorialogs.json"


@ -0,0 +1,31 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: o12y
namespace: argocd
labels:
env: dev
spec:
project: default
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true
destination:
name: in-cluster
namespace: observability
sources:
- chart: victoria-metrics-k8s-stack
repoURL: https://victoriametrics.github.io/helm-charts/
targetRevision: 0.48.1
helm:
valueFiles:
- $values/{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/observability/victoria-k8s-stack/values.yaml
- repoURL: https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}
targetRevision: HEAD
ref: values
- repoURL: https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}
targetRevision: HEAD
path: "{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/observability/victoria-k8s-stack/manifests"


@ -0,0 +1,24 @@
apiVersion: operator.victoriametrics.com/v1beta1
kind: VLogs
metadata:
name: victorialogs
namespace: observability
spec:
retentionPeriod: "12"
removePvcAfterDelete: true
storageMetadata:
annotations:
everest.io/crypt-key-id: {{{ .Env.PVC_KMS_KEY_ID }}}
storage:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
resources:
requests:
memory: 500Mi
cpu: 500m
limits:
memory: 10Gi
cpu: 2
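
The `storageMetadata` annotation is what requests an encrypted volume from the storage layer; after sync it should surface on the PVC the operator creates. A sketch, assuming the operator's default `vlogs-<name>` PVC naming:

```bash
kubectl -n observability get vlogs victorialogs
kubectl -n observability get pvc vlogs-victorialogs \
  -o jsonpath='{.metadata.annotations.everest\.io/crypt-key-id}{"\n"}'
```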


@ -0,0 +1,15 @@
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMUser
metadata:
name: simple-user
namespace: observability
spec:
username: simple-user
password: simple-password
targetRefs:
- static:
url: http://vmsingle-o12y:8429
paths: ["/api/v1/write"]
- static:
url: http://vlogs-victorialogs:9428
paths: ["/insert/elasticsearch/.*"]

File diff suppressed because it is too large


@ -0,0 +1,14 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: main
spec:
acme:
email: admin@think-ahead.tech
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: cluster-issuer-account-key
solvers:
- http01:
ingress:
ingressClassName: nginx
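
With http01 solving through the nginx ingress class, any Ingress annotated with `cert-manager.io/cluster-issuer: main` (as the Grafana ingress above is) gets its certificate issued automatically. A verification sketch:

```bash
kubectl get clusterissuer main       # READY should be True once the ACME account is registered
kubectl get certificate -A           # e.g. grafana-net-tls after the ingress is reconciled
kubectl get orders,challenges -A     # ACME flow details, useful if issuance stalls
```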


@ -0,0 +1,4 @@
crds:
enabled: true
replicaCount: 1


@ -0,0 +1,32 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: cert-manager
namespace: argocd
labels:
env: dev
spec:
project: default
syncPolicy:
automated:
selfHeal: true
syncOptions:
- CreateNamespace=true
retry:
limit: -1
destination:
name: in-cluster
namespace: cert-manager
sources:
- chart: cert-manager
repoURL: https://charts.jetstack.io
targetRevision: v1.17.2
helm:
valueFiles:
- $values/{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/otc/cert-manager/values.yaml
- repoURL: https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}
targetRevision: HEAD
ref: values
- repoURL: https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}
targetRevision: HEAD
path: "{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/otc/cert-manager/manifests"


@ -12,16 +12,18 @@ spec:
       selfHeal: true
     syncOptions:
       - CreateNamespace=true
+    retry:
+      limit: -1
   destination:
     name: in-cluster
     namespace: ingress-nginx
   sources:
-    - repoURL: https://github.com/kubernetes/ingress-nginx
+    - repoURL: https://edp.buildth.ing/DevFW-CICD/ingress-nginx-helm.git
       path: charts/ingress-nginx
-      targetRevision: helm-chart-4.11.3
+      targetRevision: helm-chart-4.12.1-depends
       helm:
         valueFiles:
-          - $values/stacks/core/ingress-nginx/values.yaml
+          - $values/{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/otc/ingress-nginx/values.yaml
-    - repoURL: https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder
+    - repoURL: https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}
       targetRevision: HEAD
       ref: values


@ -0,0 +1,31 @@
controller:
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
service:
annotations:
kubernetes.io/elb.class: union
kubernetes.io/elb.port: '80'
kubernetes.io/elb.id: {{{ .Env.LOADBALANCER_ID }}}
kubernetes.io/elb.ip: {{{ .Env.LOADBALANCER_IP }}}
ingressClassResource:
name: nginx
# added for idpbuilder
allowSnippetAnnotations: true
# added for idpbuilder
config:
proxy-buffer-size: 32k
use-forwarded-headers: "true"
# monitoring nginx
metrics:
enabled: true
serviceMonitor:
additionalLabels:
release: "ingress-nginx"
enabled: true
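
The `kubernetes.io/elb.*` annotations bind the controller Service to the pre-provisioned OTC load balancer instead of letting the cloud controller allocate a new one. After sync, the Service status should report exactly that address (a sketch; the Service name depends on the Helm release name):

```bash
kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'
```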


@ -1,25 +1,25 @@
 apiVersion: argoproj.io/v1alpha1
 kind: Application
 metadata:
-  name: fibonacci-app
+  name: storageclass
   namespace: argocd
   labels:
-    env: dev
+    example: otc
   finalizers:
     - resources-finalizer.argocd.argoproj.io
 spec:
-  project: default
-  source:
-    repoURL: https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder
-    targetRevision: HEAD
-    path: "stacks/ref-implementation/fibonacci-app"
   destination:
+    namespace: default
     server: "https://kubernetes.default.svc"
-    namespace: fibonacci-app
+  source:
+    repoURL: https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}
+    targetRevision: HEAD
+    path: "{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/otc/storageclass"
+  project: default
   syncPolicy:
-    syncOptions:
-      - CreateNamespace=true
     automated:
       selfHeal: true
+    syncOptions:
+      - CreateNamespace=true
     retry:
       limit: -1

@ -0,0 +1,18 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
storageclass.beta.kubernetes.io/is-default-class: "true"
labels:
kubernetes.io/cluster-service: "true"
name: default
parameters:
kubernetes.io/description: ""
kubernetes.io/hw:passthrough: "true"
kubernetes.io/storagetype: BS
kubernetes.io/volumetype: SATA
kubernetes.io/zone: eu-de-02
provisioner: flexvolume-huawei.com/fuxivol
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
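
Because of the `is-default-class` annotation, any PVC that omits `storageClassName` lands on this SATA-backed class. A minimal sketch of a claim exercising the default (names illustrative):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-class-check
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc default-class-check   # STORAGECLASS column should read "default"
```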


@ -1,146 +0,0 @@
# Reference implementation
This example creates a local version of the CNOE reference implementation.
## Prerequisites
Ensure you have the following tools installed on your computer.
**Required**
- [idpbuilder](https://github.com/cnoe-io/idpbuilder/releases/latest): version `0.3.0` or later
- [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl): version `1.27` or later
- Your computer should have at least 6 GB RAM allocated to Docker. If you are on Docker Desktop, see [this guide](https://docs.docker.com/desktop/settings/mac/).
**Optional**
- AWS credentials: Access Key and Secret Key, if you want to create AWS resources in one of the examples below.
## Installation
**_NOTE:_**
- If you'd like to run this in your web browser through Codespaces, please follow [the instructions here](./codespaces.md) to install instead.
- _This example assumes that you run the reference implementation with the default port configuration of 8443 for the idpBuilder.
If you happen to configure a different host or port for the idpBuilder, the manifests in the reference example need to be updated
and configured with the new host and port. You can use the [replace.sh](replace.sh) script to change the port as desired prior to applying the manifests as instructed in the command above._
```bash
idpbuilder create --use-path-routing \
--package https://github.com/cnoe-io/stacks//ref-implementation
```
This will take ~6 minutes for everything to come up. To track the progress, you can go to the [ArgoCD UI](https://{{{ .Env.DOMAIN }}}:8443/argocd/applications).
### What was installed?
1. **Argo Workflows** to enable workflow orchestrations.
1. **Backstage** as the UI for software catalog and templating. Source is available [here](https://github.com/cnoe-io/backstage-app).
1. **External Secrets** to generate secrets and coordinate secrets between applications.
1. **Keycloak** as the identity provider for applications.
1. **Spark Operator** to demonstrate an example Spark workload through Backstage.
If you don't want to install a package above, you can remove the ArgoCD Application file corresponding to the package you want to remove.
For example, if you want to remove Spark Operator, you can delete [this file](./spark-operator.yaml).
The only package that cannot be removed this way is Keycloak because other packages rely on it.
#### Accessing UIs
- Argo CD: https://{{{ .Env.DOMAIN }}}:8443/argocd
- Argo Workflows: https://{{{ .Env.DOMAIN }}}:8443/argo-workflows
- Backstage: https://{{{ .Env.DOMAIN }}}:8443/
- Gitea: https://{{{ .Env.DOMAIN }}}:8443/gitea
- Keycloak: https://{{{ .Env.DOMAIN }}}:8443/keycloak/admin/master/console/
# Using it
For this example, we will walk through a few demonstrations. Once applications are ready, go to the [backstage URL](https://{{{ .Env.DOMAIN }}}:8443).
Click on the Sign-In button; you will be asked to log into the Keycloak instance. There are two users set up in this
configuration, and their passwords can be retrieved with the following command:
```bash
idpbuilder get secrets
```
Use the username **`user1`** and the password given by the `USER_PASSWORD` field to log in to the Backstage instance.
`user1` is an admin user who has access to everything in the cluster, while `user2` is a regular user with limited access.
Both users use the same password retrieved above.
If you want to create a new user or change existing users:
1. Go to the [Keycloak UI](https://{{{ .Env.DOMAIN }}}:8443/keycloak/admin/master/console/).
Login with the username `cnoe-admin`. Password is the `KEYCLOAK_ADMIN_PASSWORD` field from the command above.
2. Select `cnoe` from the realms drop down menu.
3. Select users tab.
## Basic Deployment
Let's start by deploying a simple application to the cluster through Backstage.
Click on the `Create...` button on the left, then select the `Create a Basic Deployment` template.
![img.png](images/backstage-templates.png)
In the next screen, type `demo` for the name field, then click Review, then Create.
Once the steps have run, click the Open In Catalog button to go to the entity page.
![img.png](images/basic-template-flow.png)
On the demo entity page, you will notice an ArgoCD overview card associated with this entity.
You can click on the ArgoCD Application name to see more details.
![img.png](images/demo-entity.png)
### What just happened?
1. Backstage created [a git repository](https://{{{ .Env.DOMAIN }}}:8443/gitea/giteaAdmin/demo), then pushed templated contents to it.
2. Backstage created [an ArgoCD Application](https://{{{ .Env.DOMAIN }}}:8443/argocd/applications/argocd/demo?) and pointed it to the git repository.
3. Backstage registered the application as [a component](https://{{{ .Env.DOMAIN }}}:8443/gitea/giteaAdmin/demo/src/branch/main/catalog-info.yaml) in Backstage.
4. ArgoCD deployed the manifests stored in the repo to the cluster.
5. Backstage retrieved application health from ArgoCD API, then displayed it.
![image.png](images/basic-deployment.png)
## Argo Workflows and Spark Operator
In this example, we will deploy a simple Apache Spark job through Argo Workflows.
Click on the `Create...` button on the left, then select the `Basic Argo Workflow with a Spark Job` template.
![img.png](images/backstage-templates-spark.png)
Type `demo2` for the name field, then click Create. You will notice that the Backstage templating steps are very similar to the basic example above.
Click on the Open In Catalog button to go to the entity page.
![img.png](images/demo2-entity.png)
Deployment processes are the same as the first example. Instead of deploying a pod, we deployed a workflow to create a Spark job.
In the entity page, there is a card for Argo Workflows, and it should say running or succeeded.
You can click the name in the card to go to the Argo Workflows UI to view more details about this workflow run.
When prompted to log in, click the login button under single sign-on. Argo Workflows is configured to use SSO with Keycloak, allowing you to log in with the same credentials as the Backstage login.
Note that Argo Workflows is not usually deployed this way. This is just an example to show you how you can integrate workflows, Backstage, and Spark.
Back in the entity page, you can view more details about Spark jobs by navigating to the Spark tab.
## Application with cloud resources.
To deploy cloud resources, you can follow any of the instructions below:
- [Cloud resource deployments via Crossplane](../crossplane-integrations/)
- [Cloud resource deployments via Terraform](../terraform-integrations/)
## Notes
- In these examples, we have used the pattern of creating a new repository for every app, then having ArgoCD deploy it.
This is done for convenience and demonstration purposes only. There are alternative actions that you can use.
For example, you can create a PR to an existing repository, or create a repository without deploying it yet.
- If Backstage's pipelining and templating mechanisms are too simple, you can use more advanced workflow engines like Tekton or Argo Workflows.
You can invoke them in Backstage templates, then track progress similar to how it was described above.


@ -1,25 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: argo-workflows
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
repoURL: https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder
targetRevision: HEAD
path: "stacks/ref-implementation/argo-workflows/manifests/dev"
destination:
server: "https://kubernetes.default.svc"
namespace: argo
syncPolicy:
syncOptions:
- CreateNamespace=true
automated:
selfHeal: true
retry:
limit: -1


@ -1,2 +0,0 @@
resources:
- install.yaml


@ -1,20 +0,0 @@
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: keycloak-oidc
namespace: argo
spec:
secretStoreRef:
name: keycloak
kind: ClusterSecretStore
target:
name: keycloak-oidc
data:
- secretKey: client-id
remoteRef:
key: keycloak-clients
property: ARGO_WORKFLOWS_CLIENT_ID
- secretKey: secret-key
remoteRef:
key: keycloak-clients
property: ARGO_WORKFLOWS_CLIENT_SECRET


@ -1,7 +0,0 @@
resources:
- ../base
- external-secret.yaml
- sa-admin.yaml
patches:
- path: patches/cm-argo-workflows.yaml
- path: patches/deployment-argo-server.yaml


@ -1,26 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: workflow-controller-configmap
namespace: argo
data:
config: |
sso:
insecureSkipVerify: true
issuer: https://{{{ .Env.DOMAIN }}}/keycloak/realms/cnoe
clientId:
name: keycloak-oidc
key: client-id
clientSecret:
name: keycloak-oidc
key: secret-key
redirectUrl: https://{{{ .Env.DOMAIN }}}:443/argo-workflows/oauth2/callback
rbac:
enabled: true
scopes:
- openid
- profile
- email
- groups
nodeEvents:
enabled: false


@ -1,30 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: argo-server
namespace: argo
annotations:
argocd.argoproj.io/sync-wave: "20"
spec:
template:
spec:
containers:
- name: argo-server
readinessProbe:
httpGet:
path: /
port: 2746
scheme: HTTP
env:
- name: BASE_HREF
value: "/argo-workflows/"
args:
- server
- --configmap=workflow-controller-configmap
- --auth-mode=client
- --auth-mode=sso
- "--secure=false"
- "--loglevel"
- "info"
- "--log-format"
- "text"


@ -1,32 +0,0 @@
# Used by users in the admin group
# TODO Need to tighten up permissions.
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin
namespace: argo
annotations:
workflows.argoproj.io/rbac-rule: "'admin' in groups"
workflows.argoproj.io/rbac-rule-precedence: "10"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: argo-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin
namespace: argo
---
apiVersion: v1
kind: Secret
metadata:
name: admin.service-account-token
annotations:
kubernetes.io/service-account.name: admin
namespace: argo
type: kubernetes.io/service-account-token


@ -1,27 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: backstage-templates
namespace: argocd
labels:
env: dev
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
repoURL: https://gitea.{{{ .Env.DOMAIN }}}/giteaAdmin/edfbuilder
targetRevision: HEAD
path: "stacks/ref-implementation/backstage-templates/entities"
directory:
exclude: 'catalog-info.yaml'
destination:
server: "https://kubernetes.default.svc"
namespace: backstage
syncPolicy:
syncOptions:
- CreateNamespace=true
automated:
selfHeal: true
retry:
limit: -1


@ -1,48 +0,0 @@
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: ${{values.name}}-bucket
description: Stores things
annotations:
argocd/app-name: ${{values.name | dump}}
spec:
type: s3-bucket
owner: guests
---
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: ${{values.name | dump}}
description: This is for testing purposes
annotations:
backstage.io/techdocs-ref: dir:.
backstage.io/kubernetes-label-selector: 'entity-id=${{values.name}}'
backstage.io/kubernetes-namespace: default
argocd/app-name: ${{values.name | dump}}
links:
- url: https://gitea.{{{ .Env.DOMAIN }}}:443
title: Repo URL
icon: github
spec:
owner: guests
lifecycle: experimental
type: service
system: ${{values.name | dump}}
dependsOn:
- resource:default/${{values.name}}-bucket
---
apiVersion: backstage.io/v1alpha1
kind: System
metadata:
name: ${{values.name | dump}}
description: An example system for demonstration purposes
annotations:
backstage.io/techdocs-ref: dir:.
links:
- url: https://github.com/cnoe-io/stacks/tree/main/ref-implementation
title: CNOE Repo
icon: github
spec:
owner: guests
lifecycle: experimental
type: service


@ -1,46 +0,0 @@
[![Codespell][codespell-badge]][codespell-link]
[![E2E][e2e-badge]][e2e-link]
[![Go Report Card][report-badge]][report-link]
[![Commit Activity][commit-activity-badge]][commit-activity-link]
# IDP Builder
Internal development platform binary launcher.
> **WORK IN PROGRESS**: This tool is in a pre-release stage and is under active development.
## About
Spin up a complete internal developer platform using industry-standard technologies like Kubernetes, Argo, and Backstage, with only Docker required as a dependency.
This can be useful in several ways:
* Create a single binary which can demonstrate an IDP reference implementation.
* Use within CI to perform integration testing.
* Use as a local development environment for platform engineers.
## Getting Started
Check out our [documentation website](https://cnoe.io/docs/reference-implementation/installations/idpbuilder) for getting started with idpbuilder.
## Community
- If you have questions or concerns about this tool, please feel free to reach out to us on the [CNCF Slack Channel](https://cloud-native.slack.com/archives/C05TN9WFN5S).
- You can also join our community meetings to meet the team and ask any questions. Check out [this calendar](https://calendar.google.com/calendar/embed?src=064a2adfce866ccb02e61663a09f99147f22f06374e7a8994066bdc81e066986%40group.calendar.google.com&ctz=America%2FLos_Angeles) for more information.
## Contribution
Check out the [contribution doc](./CONTRIBUTING.md) for contribution guidelines and more information on how to set up your local environment.
<!-- JUST BADGES & LINKS -->
[codespell-badge]: https://github.com/cnoe-io/idpbuilder/actions/workflows/codespell.yaml/badge.svg
[codespell-link]: https://github.com/cnoe-io/idpbuilder/actions/workflows/codespell.yaml
[e2e-badge]: https://github.com/cnoe-io/idpbuilder/actions/workflows/e2e.yaml/badge.svg
[e2e-link]: https://github.com/cnoe-io/idpbuilder/actions/workflows/e2e.yaml
[report-badge]: https://goreportcard.com/badge/github.com/cnoe-io/idpbuilder
[report-link]: https://goreportcard.com/report/github.com/cnoe-io/idpbuilder
[commit-activity-badge]: https://img.shields.io/github/commit-activity/m/cnoe-io/idpbuilder
[commit-activity-link]: https://github.com/cnoe-io/idpbuilder/pulse


@ -1,16 +0,0 @@
![cnoe logo](./images/cnoe-logo.png)
# Example Basic Application
Thanks for trying out this demo! In this example, we deployed a simple application with an S3 bucket using Crossplane.
### idpbuilder
Check out the idpbuilder website: https://cnoe.io/docs/reference-implementation/installations/idpbuilder
Check out the idpbuilder repository: https://github.com/cnoe-io/idpbuilder
## Crossplane
Check out the Crossplane website: https://www.crossplane.io/


@ -1,3 +0,0 @@
module ${{ values.name }}
go 1.19


@ -1,34 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
targetPort: 80
selector:
app: nginx

Some files were not shown because too many files have changed in this diff