diff --git a/docs/technical-documentation/decisions/0001-pipeline-tools.md b/docs/technical-documentation/decisions/0001-pipeline-tools.md new file mode 100644 index 0000000..311ab33 --- /dev/null +++ b/docs/technical-documentation/decisions/0001-pipeline-tools.md @@ -0,0 +1,126 @@ +# CI/CD pipeline tools for composable pipeline + +## Context and Problem Statement + +In order to build a composable pipeline that provides a golden path and reusable components, we need to define the tools that will be used to execute the pipeline. + +ArgoCD is considered set in stone as the tool to manage the deployment of applications. However, the tools to compose and execute the pipeline are still up for debate. + +> Note: The pipeline will use many other tools to perform certain actions such as testing, building, and deploying. This ADR is focused on the tools that will be used to compose and execute the pipeline itself. + +In general, there are 2 decisions to make: + +- What tools should we use to execute the pipeline? +- What tools should we use to compose the pipeline? + +The following use-cases should be considered for this decision: + +- **User who wants to manage their own runners (???)** +- User who only wants to use our golden path +- User who wants to use our golden path and add custom actions +- User who wants to use their own templates and import some of our actions +- User who wants to import an existing GitHub repository with a pipeline + +## Considered Options + +- Argo Workflows + Events +- Argo Workflows + Events + Additional Composition tool +- Forgejo Actions +- Forgejo Actions + Additional Composition tool +- Dagger (as Engine) +- Shuttle (as Engine) + +## Decision Outcome + +TBD + +## Pros and Cons of the Options + +### Argo Workflows + Events + +**Pro** + +- integration with ArgoCD +- ability to trigger additional workflows based on events. +- level of maturity and community support. + +**Con** + +- Ability to self-host runners? 
- the way composition for pipelines works (based on Kubernetes CRDs)
  - Templates must be available in the cluster where the pipelines are executed, so any imported templates must be applied to the cluster before the pipeline can be executed; a pipeline cannot simply reference a repository
  - This makes it difficult to import existing templates from other repositories when using self-hosted runners
  - This also makes it difficult to use our golden path, or at least we will need to provide a way to import our golden path into the cluster
  - This also makes the "every component has its own repo" split very difficult
- additional UI to manage the pipeline
- additional complexity

### Argo Workflows + Events + Additional Composition tool

**Pro**

- Composability can be offloaded to another tool

**Con**

- All cons of the previous option (except composability)
- Additional complexity by adding another tool

### Forgejo Actions

**Pro**

- tight integration with GitHub Actions, providing a familiar interface for developers and a vast catalog of actions to choose from
- ability to compose pipelines without relying on another tool
- Self-hosting of runners possible
- every component can have its own repository and use different tools (e.g. written in Go, Bash, Python, etc.)
**Con**

- level of maturity - will require additional investment to provide a production-grade system

### Forgejo Actions + Additional Composition tool

**Pro**

- may be possible to use GitHub Actions alongside another tool

**Con**

- additional complexity by adding another tool

### Shuttle

**Pro**

- Possibility to clearly define interfaces for pipeline steps
- Relatively simple

**Con**

- basically backed by only one company
- **centralized templates**, so no mechanism for composing pipelines from multiple repositories

### Dagger

**Pro**

- Pipeline as code
  - if it runs, it should run anywhere and produce the "same" / somewhat stable results
  - build environments are defined within containers / the Dagger config; Dagger is the only dependency one has to install on a machine
- DX is extremely nice, especially if you have to debug (image) builds; there is also type safety due to the ability to code your build in a strongly typed language
- additional tooling, like Trivy, is added to a build pipeline with low effort thanks to containers and existing plugins/wrappers
- you can create complex test environments similar to Testcontainers and Docker Compose

**Con**

- relies heavily on containers, which might not be available in some environments (due to policy etc.); this also affects reproducibility and verifiability
- as a dev you need to properly understand containers
- the Dagger engine has to run privileged locally and/or in the cloud, which might be a blocker or at least a big pain in the ...
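To make the composition question above concrete: with the Forgejo Actions option, importing golden-path components would look like an ordinary workflow referencing actions that live in their own repositories. A sketch only - all organization, action, and script names below are hypothetical:

```yaml
# Hypothetical consumer workflow; repository, action, and script names are placeholders
name: golden-path-build
on: [push]

jobs:
  build:
    runs-on: docker
    steps:
      - uses: actions/checkout@v4
      # Golden-path component imported from its own repository
      - uses: our-org/golden-path-build@v1
      # User-supplied custom step alongside the golden path
      - run: ./scripts/custom-step.sh
```

This is the repository-based composition model that the CRD-based Argo Workflows option cannot offer without first applying templates into the cluster.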
**Suggestion Patrick**

- Dagger is heavyweight and might not be as productive in a dev workflow as it seems (LSP setup etc.)
- it might be too opinionated to force on teams, especially since it is not nearly mainstream enough; the community might be too small
- it feels like Dagger gets you 95% of the way, but the remaining 5% are a real struggle
- if we like it, we should check its popularity in the dev community before considering it further, as it has a direct impact on teams and their preferences
diff --git a/docs/technical-documentation/decisions/README.md b/docs/technical-documentation/decisions/README.md new file mode 100644 index 0000000..ae4dc5a --- /dev/null +++ b/docs/technical-documentation/decisions/README.md @@ -0,0 +1,5 @@
# ADRs

Architecture Decision Records (ADRs) capture the important architectural decisions made during the development of a project. They document the context, the decision, and its consequences, keep track of the architectural decisions made in a project, and communicate them to the team.

The [Markdown Architectural Decision Records](https://adr.github.io/madr/) (MADR) format is a simple and easy-to-use format for writing ADRs in Markdown.
diff --git a/docs/technical-documentation/decisions/_adr-template.md b/docs/technical-documentation/decisions/_adr-template.md new file mode 100644 index 0000000..fa87ccc --- /dev/null +++ b/docs/technical-documentation/decisions/_adr-template.md @@ -0,0 +1,67 @@

# {short title, representative of solved problem and found solution}

## Context and Problem Statement

{Describe the context and problem statement, e.g., in free form using two to three sentences or in the form of an illustrative story.
You may want to articulate the problem in form of a question and add links to collaboration boards or issue management systems.}

## Decision Drivers

* {decision driver 1, e.g., a force, facing concern, …}
* {decision driver 2, e.g., a force, facing concern, …}
* …

## Considered Options

* {title of option 1}
* {title of option 2}
* {title of option 3}
* …

## Decision Outcome

Chosen option: "{title of option 1}", because {justification, e.g., only option which meets k.o. criterion decision driver | which resolves force {force} | … | comes out best (see below)}.

### Consequences

* Good, because {positive consequence, e.g., improvement of one or more desired qualities, …}
* Bad, because {negative consequence, e.g., compromising one or more desired qualities, …}
* …

### Confirmation

{Describe how the implementation of/compliance with the ADR can/will be confirmed. Is the design that was decided for, and its implementation, in line with the decision made? E.g., a design/code review or a test with a library such as ArchUnit can help validate this. Note that although we classify this element as optional, it is included in many ADRs.}

## Pros and Cons of the Options

### {title of option 1}

{example | description | pointer to more information | …}

* Good, because {argument a}
* Good, because {argument b}
* Neutral, because {argument c}
* Bad, because {argument d}
* …

### {title of other option}

{example | description | pointer to more information | …}

* Good, because {argument a}
* Good, because {argument b}
* Neutral, because {argument c}
* Bad, because {argument d}
* …

## More Information

{You might want to provide additional evidence/confidence for the decision outcome here and/or document the team agreement on the decision and/or define when/how the decision should be realized and if/when it should be re-visited.
Links to other decisions and resources might appear here as well.}
diff --git a/docs/technical-documentation/project/_index.md b/docs/technical-documentation/project/_index.md new file mode 100644 index 0000000..97f0dad --- /dev/null +++ b/docs/technical-documentation/project/_index.md @@ -0,0 +1,6 @@
---
title: Project
weight: 5
description: How we organize work and proceed as a team, which decisions we made, what outputs and outcomes we have
---
diff --git a/docs/technical-documentation/project/bootstrapping/_index.md b/docs/technical-documentation/project/bootstrapping/_index.md new file mode 100644 index 0000000..9e07af4 --- /dev/null +++ b/docs/technical-documentation/project/bootstrapping/_index.md @@ -0,0 +1,7 @@
---
title: Bootstrapping Infrastructure
weight: 30
description: The cluster and the installed applications in the bootstrapping cluster
---

In order to be able to do useful work, we need a number of applications right away. We are deploying these manually so that we have the necessary basis for our work. Once the framework has been developed far enough, we will deploy this infrastructure with the framework itself.
diff --git a/docs/technical-documentation/project/bootstrapping/backup/_index.md b/docs/technical-documentation/project/bootstrapping/backup/_index.md new file mode 100644 index 0000000..c9dd005 --- /dev/null +++ b/docs/technical-documentation/project/bootstrapping/backup/_index.md @@ -0,0 +1,84 @@
---
title: Backup of the Bootstrapping Cluster
weight: 30
description: Backup and Restore of the Contents of the Bootstrapping Cluster
---

## Velero

We are using [Velero](https://velero.io/) for backup and restore of the deployed applications.

## Installing Velero Tools

To manage a Velero install in a cluster, you need to have the Velero command-line tools installed locally. Please follow the instructions for [Basic Install](https://velero.io/docs/v1.9/basic-install).
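Once the CLI is installed, a quick sanity check confirms it is on the PATH (a sketch; the version printed will depend on the release you installed):

```shell
# Verify the Velero CLI is available; --client-only avoids contacting a cluster
velero version --client-only
```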
## Setting Up Velero For a Cluster

Installing and configuring Velero for a cluster can be accomplished with the CLI.

1. Create a file with the credentials for the S3-compatible bucket that is storing the backups, for example `credentials.ini`.

```ini
[default]
aws_access_key_id = "Access Key Value"
aws_secret_access_key = "Secret Key Value"
```

2. Install Velero in the cluster:

```shell
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.2.1 \
    --bucket osc-backup \
    --secret-file ./credentials.ini \
    --use-volume-snapshots=false \
    --use-node-agent=true \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=https://obs.eu-de.otc.t-systems.com
```

3. Delete `credentials.ini`; it is no longer needed (a secret has been created in the cluster).
4. Create a schedule to back up the relevant resources in the cluster:

```shell
velero schedule create devfw-bootstrap --schedule="23 */2 * * *" --include-namespaces=forgejo
```

## Working with Velero

You can now use Velero to create backups, restore them, or perform other operations. Please refer to the [Velero Documentation](https://velero.io/docs/main/backup-reference/).

To list all currently available backups:

```shell
velero backup get
```

## Setting up a Service Account for Access to the OTC Object Storage Bucket

We are using the S3-compatible Open Telekom Cloud Object Storage service to provide storage for the backups. We picked OTC specifically because we're not using it for anything else, so no matter what catastrophe we create in Open Sovereign Cloud, the backups should be safe.

### Create an Object Storage Service Bucket

1. Log in to the [OTC Console with the correct tenant](https://auth.otc.t-systems.com/authui/federation/websso?domain_id=81e7dbd7ec9f4b03a58120dfaa61d3db&idp=ADFS_MMS_OTC00000000001000113497&protocol=saml).
1. 
Navigate to [Object Storage Service](https://console.otc.t-systems.com/obs/?agencyId=WEXsFwkkVdGYULIrZT1icF4nmHY1dgX2&region=eu-de&locale=en-us#/obs/manager/buckets).
1. Click Create Bucket in the upper right-hand corner. Give your bucket a name. No further configuration should be necessary.

### Create a Service User to Access the Bucket

1. Log in to the [OTC Console with the correct tenant](https://auth.otc.t-systems.com/authui/federation/websso?domain_id=81e7dbd7ec9f4b03a58120dfaa61d3db&idp=ADFS_MMS_OTC00000000001000113497&protocol=saml).
1. Navigate to [Identity and Access Management](https://console.otc.t-systems.com/iam/?agencyId=WEXsFwkkVdGYULIrZT1icF4nmHY1dgX2&region=eu-de&locale=en-us#/iam/users).
1. Navigate to User Groups, and click Create User Group in the upper right-hand corner.
1. Enter a suitable name ("OSC Cloud Backup") and click OK.
1. In the group list, locate the group just created and click its name.
1. Click Authorize to add the necessary roles. Enter "OBS" in the search box to filter for Object Storage roles.
1. Select "OBS OperateAccess"; if there are two roles, select them both.
1. **2024-10-15** Also select the "OBS Administrator" role. It is unclear why the "OBS OperateAccess" role is not sufficient, but without the admin role, the service user will not have write access to the bucket.
1. Click Next to save the roles, then click OK to confirm, then click Finish.
1. Navigate to Users, and click Create User in the upper right-hand corner.
1. Give the user a sensible name ("ipcei-cis-devfw-osc-backups").
1. Disable Management console access.
1. Enable Access key; disable Password and Login protection.
1. Click Next.
1. Pick the group created earlier.
1. Download the access key when prompted.

The access key is a CSV file with the Access Key and the Secret Key listed in the second line.
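With the schedule in place, restoring is also driven from the CLI. A hedged sketch - the backup name below is an example; list the backups first to find a real one:

```shell
# List backups produced by the schedule, then restore one into the cluster
velero backup get
# The backup name is an example; use a name from the listing above
velero restore create --from-backup devfw-bootstrap-20241015020023
# Watch the restore's progress
velero restore get
```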
diff --git a/docs/technical-documentation/project/conceptual-onboarding/1_intro/_index.md b/docs/technical-documentation/project/conceptual-onboarding/1_intro/_index.md new file mode 100644 index 0000000..9fa9723 --- /dev/null +++ b/docs/technical-documentation/project/conceptual-onboarding/1_intro/_index.md @@ -0,0 +1,49 @@
---
title: Introduction
weight: 1
description: The 5-step storyflow of this Onboarding chapter
---

{{% pageinfo color="info" %}}
## Summary

This onboarding section is for you when you are new to the IPCEI-CIS subproject 'Edge Developer Framework (EDF)' and you want to know about
* its context to 'Platform Engineering'
* and why we think it's the stuff we need to care about in the EDF
{{% /pageinfo %}}

## Storyline of our current project plan (2024)

1. We have the ['Edge Developer Framework'](./edgel-developer-framework/)
2. We think the solution for the EDF is related to ['Platforming' (Digital Platforms)](./platforming/)
   1. The next evolution after DevOps
   2. Gartner predicts that 80% of software engineering companies will have platforms by 2026
   3. Platforms have a history since around 2019
   4. CNCF has a working group which created capabilities and a maturity model
3. Platforms evolve - nowadays there are [Platform Orchestrators](./orchestrators/)
   1. Humanitec set up a Reference Architecture
   2. There is this 'Orchestrator' thing - declaratively describe, customize and change platforms!
4. Mapping our assumptions to the [CNOE solution](./cnoe/)
   1. CNOE is a hot candidate to help and fulfill our platform building
   2. CNOE aims to embrace change and customization!
5. [Showtime CNOE](./cnoe-showtime/)

## Please challenge this story!

Please do not think this story and the underlying assumptions are carved in stone!

1. Don't forget to further investigate and truly understand [**EDF specification needs**](./edgel-developer-framework/)
2. 
Don't forget to further investigate and truly understand [**Platform capabilities on top of DevOps**](./platforming/)
3. Don't forget to further investigate and truly understand [**Platform orchestration**](./orchestrators/)
4. Don't forget to further investigate and truly understand [**specific orchestrating solutions like CNOE**](./cnoe/)

## Your role as 'Framework Engineer' in the Domain Architecture

Please be aware of the following domain and task structure of our mission:

![](./conclusio/images/modern.png)

diff --git a/docs/technical-documentation/project/conceptual-onboarding/2_edge-developer-framework/_index.md b/docs/technical-documentation/project/conceptual-onboarding/2_edge-developer-framework/_index.md new file mode 100644 index 0000000..8da5935 --- /dev/null +++ b/docs/technical-documentation/project/conceptual-onboarding/2_edge-developer-framework/_index.md @@ -0,0 +1,56 @@
---
title: Edge Developer Framework
weight: 2
description: Driving requirements for a platform
---

{{% pageinfo color="info" %}}
## Summary

The 'Edge Developer Framework' is both the project and the product we are working on. From the leading 'Portfolio Document' we derive requirements which ought to be fulfilled by Platform Engineering.

**This is our claim!**

{{% /pageinfo %}}

## What are the specifications we know from the IPCEI-CIS Project Portfolio document

> Reference: IPCEI-CIS Project Portfolio
> Version 5.9, November 17, 2023

### DTAG's IPCEI-CIS Project Portfolio (p.12)

e. Development of DTAG/TSI Edge Developer Framework

* Goal: All developed innovations must be accessible to developer communities in a **highly user-friendly and easy way**

### Development of DTAG/TSI Edge Developer Framework (p.14)

| capability | major novelties |||
| -- | -- | -- | -- |
| e.1. 
Edge Developer full service framework (SDK + day1 +day2 support for edge installations) | Adaptive CI/CD pipelines for heterogeneous edge environments | Decentralized and self healing deployment and management | edge-driven monitoring and analytics | +| e.2. Built-in sustainability optimization in Edge developer framework | sustainability optimized edge-aware CI/CD tooling | sustainability-optimized configuration management | sustainability-optimized efficient deployment strategies | +| e.3. Sustainable-edge management-optimized user interface for edge developers | sustainability optimized User Interface | Ai-Enabled intelligent experience | AI/ML-based automated user experience testing and optimization | + +### DTAG objectives & contributions (p.27) + +DTAG will also focus on developing an easy-to-use **Edge Developer framework for software +developers** to **manage the whole lifecycle of edge applications**, i.e. for **day-0-, day-1- and up to day-2- +operations**. With this DTAG will strongly enable the ecosystem building for the entire IPCEI-CIS edge to +cloud continuum and ensure openness and accessibility for anyone or any company to make use and +further build on the edge to cloud continuum. Providing the use of the tool framework via an open-source approach will further reduce entry barriers and enhance the openness and accessibility for anyone or +any organization (see innovations e.). + +### WP Deliverables (p.170) + +e.1 Edge developer full-service framework + +This tool set and related best practices and guidelines will **adapt, enhance and further innovate DevOps principles** and +their related, necessary supporting technologies according to the specific requirements and constraints associated with edge or edge cloud development, in order to keep the healthy and balanced innovation path on both sides, +the (software) development side and the operations side in the field of DevOps. + +{{% pageinfo color="info" %}} +### What comes next? 
[Next](../platforming/) we'll see how these requirements seem to be fulfilled by platforms!
{{% /pageinfo %}}
diff --git a/docs/technical-documentation/project/conceptual-onboarding/3_platforming/_index.md b/docs/technical-documentation/project/conceptual-onboarding/3_platforming/_index.md new file mode 100644 index 0000000..6a41b34 --- /dev/null +++ b/docs/technical-documentation/project/conceptual-onboarding/3_platforming/_index.md @@ -0,0 +1,112 @@
---
title: Platform Engineering aka Platforming
linktitle: Platforming
weight: 3
description: DevOps is dead - long live next-level DevOps in platforms
---

{{% pageinfo color="info" %}}
## Summary

Since around 2010 we have had DevOps, which brings increasing delivery speed and efficiency at scale.
But with it came high 'cognitive load' for developers and production congestion due to engineering lifecycle complexity.
So on top of DevOps we need instrumentation to ensure and enforce speed, quality, and security in modern, cloud-native software development. This instrumentation is called 'golden paths' in internal developer platforms (IDPs).
{{% /pageinfo %}}

## History of Platform Engineering

Let's start with a look into the history of platform engineering. A good starting point is [Humanitec](https://humanitec.com/), as they nowadays are one of the biggest players (['the market leader in IDPs.'](https://internaldeveloperplatform.org/#how-we-curate-this-site)) in platform engineering.

They create lots of [beautiful articles and insights](https://humanitec.com/blog), their own [platform products](https://humanitec.com/products/) and [basic concepts for the platform architecture](https://humanitec.com/platform-engineering) (we'll meet this later on!).
https://platformengineering.org/blog/the-story-of-platform-engineering

### Further nice references on the rise of platforms

* [What we **call** a Platform](https://martinfowler.com/articles/talk-about-platforms.html)
* [Platforms and the **cloud native** connection](https://developers.redhat.com/articles/2024/05/06/what-platform-engineering-and-why-do-we-need-it#why_we_need_platform_engineering)
* [Platforms and **microservices**](https://orkohunter.net/blog/a-brief-history-of-platform-engineering)
* [Platforms in the **product** perspective](https://softwareengineeringdaily.com/2020/02/13/setting-the-stage-for-platform-engineering/)

## Benefit of Platform Engineering, Capabilities

In [The Evolution of Platform Engineering](https://www.linkedin.com/pulse/evolution-platform-engineering-gaurav-goel) the interconnection between DevOps, Cloud Native, and the rise of Platform Engineering is nicely presented and summarized:

> Platform engineering can be broadly defined as the discipline of designing and building toolchains and workflows that enable self-service capabilities for software engineering organizations

When looking at these 'capabilities', we have the CNCF itself:

### CNCF Working group / White paper defines layered capabilities

There is a CNCF working group which provides the definition of [Capabilities of platforms](https://tag-app-delivery.cncf.io/whitepapers/platforms/#capabilities-of-platforms) and shows a first idea of the layered architecture of platforms as a **service layer for developers** ("product and application teams"):

> Important: As a platform engineer, also note the [platform-eng-maturity-model](https://tag-app-delivery.cncf.io/whitepapers/platform-eng-maturity-model/)

### Platform Engineering Team

Or, in another illustration of the platform as a developer service interface, which also defines the **'Platform Engineering Team'** in between:
https://medium.com/@bijit211987/what-is-platform-engineering-and-how-it-reduce-cognitive-load-on-developers-ac7805603925

## How to set up Platforms

As we now have evidence about the necessity of platforms, their capabilities and their benefit as a self-service layer for developers, we want to think about how to build them.

First of all, some important wording to motivate the important term 'internal developer platforms' (we will go into this deeper in the next section [with the general architecture](../orchestrators/)), which is clear today, but took years to evolve and [is still some amount of effort to jump into](https://humanitec.com/blog/wtf-internal-developer-platform-vs-internal-developer-portal-vs-paas):

* Platform: As defined above: a product which serves software engineering teams
* Platform Engineering: Creating such a product
* Internal Developer Platform (IDP): A platform for developers :-)
* Internal Developer Portal: The entry point of a developer to such an IDP

### CNCF mapping from capabilities to (CNCF) projects/tools

[Capabilities of platforms](https://tag-app-delivery.cncf.io/whitepapers/platforms/#capabilities-of-platforms)

### Ecosystems in InternalDeveloperPlatform

Build or buy - in platform engineering, too, this is a much-debated discussion, which one of the oldest players answers like this, with some opinionated internal capability structuring:

[internaldeveloperplatform.org](https://internaldeveloperplatform.org/platform-tooling/)

{{% pageinfo color="info" %}}
### What comes next?

[Next](../orchestrators/) we'll see how these concepts got structured!
{{% /pageinfo %}}

## Addendum

### Digital Platform definition from [What we **call** a Platform](https://martinfowler.com/articles/talk-about-platforms.html)

> Words are hard, it seems. ‘Platform’ is just about the most ambiguous term we could use for an approach that is super-important for increasing delivery speed and efficiency at scale.
Hence the title of this article, here is what I’ve been talking about most recently. +\ +Definitions for software and hardware platforms abound, generally describing an operating environment upon which applications can execute and which provides reusable capabilities such as file systems and security. +\ +Zooming out, at an organisational level a ‘digital platform’ has similar characteristics - an operating environment which teams can build upon to deliver product features to customers more quickly, supported by reusable capabilities. +\ +A digital platform is a foundation of self-service APIs, tools, services, knowledge and support which are arranged as a compelling internal product. Autonomous delivery teams can make use of the platform to deliver product features at a higher pace, with reduced co-ordination. + +### Myths :-) - What are platforms _not_ + +[common-myths-about-platform-engineering](https://cloud.google.com/blog/products/application-development/common-myths-about-platform-engineering?hl=en) + +### Platform Teams + +This is about you :-), the platform engineering team: + +[how-to-build-your-platform-engineering-team](https://platformengineering.org/blog/how-to-build-your-platform-engineering-team) + +#### in comparison: devops vs sre vs platform + +https://www.qovery.com/blog/devops-vs-platform-engineering-is-there-a-difference/ + +![teams-in-comparison](teams.png) diff --git a/docs/technical-documentation/project/conceptual-onboarding/3_platforming/humanitec-history.png b/docs/technical-documentation/project/conceptual-onboarding/3_platforming/humanitec-history.png new file mode 100644 index 0000000..b20f277 Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/3_platforming/humanitec-history.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/3_platforming/platform-self-services.webp b/docs/technical-documentation/project/conceptual-onboarding/3_platforming/platform-self-services.webp 
new file mode 100644 index 0000000..2645bad Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/3_platforming/platform-self-services.webp differ
diff --git a/docs/technical-documentation/project/conceptual-onboarding/3_platforming/platforms-def.drawio.png b/docs/technical-documentation/project/conceptual-onboarding/3_platforming/platforms-def.drawio.png new file mode 100644 index 0000000..09461d3 Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/3_platforming/platforms-def.drawio.png differ
diff --git a/docs/technical-documentation/project/conceptual-onboarding/3_platforming/teams.png b/docs/technical-documentation/project/conceptual-onboarding/3_platforming/teams.png new file mode 100644 index 0000000..6824c0a Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/3_platforming/teams.png differ
diff --git a/docs/technical-documentation/project/conceptual-onboarding/4_orchestrators/_index.md b/docs/technical-documentation/project/conceptual-onboarding/4_orchestrators/_index.md new file mode 100644 index 0000000..11f4446 --- /dev/null +++ b/docs/technical-documentation/project/conceptual-onboarding/4_orchestrators/_index.md @@ -0,0 +1,53 @@
---
title: Orchestrators
weight: 4
description: Next-level platforming is orchestrating platforms
---

{{% pageinfo color="info" %}}
## Summary

When defining and setting up platforms, two intrinsic problems arise next:
1. the setup is not declarative and automated
2. it is not, or at least not easily, changeable

Thus the technology of 'Platform Orchestration' emerged recently, in late 2023.

{{% /pageinfo %}}

## Platform reference architecture

An interesting difference between the CNCF white paper building blocks and those from internaldeveloperplatform.org is the [**orchestrators**](https://internaldeveloperplatform.org/platform-orchestrators/) part.
This is something extremely new since late 2023 - the rise of the automation of platform engineering!

It was Humanitec who created a definition of the platform architecture, as they needed to define what they are building with their 'orchestrator':

https://developer.humanitec.com/introduction/overview/

## Declarative Platform Orchestration

Based on the reference architecture you can next build - or let's say 'orchestrate' - different platform implementations, based on declarative definitions of the platform design.

https://humanitec.com/reference-architectures

https://humanitec.com/blog/aws-azure-and-gcp-open-source-reference-architectures-to-start-your-mvp

> Hint: There is a [slides tool provided by McKinsey](https://platformengineering.org/blog/create-your-own-platform-engineering-reference-architectures) to set up your own platform design based on the reference architecture

{{% pageinfo color="info" %}}
### What comes next?

[Next](../cnoe/) we'll see how we are going to do platform orchestration with CNOE!
{{% /pageinfo %}}

## Addendum

## Building blocks from Humanitec's 'state-of-platform-engineering-report-volume-2'

You remember the [capability mappings from the time before orchestration](../platforming)? 
Here we have a [similar setup based on Humanitec's platform engineering status white paper](https://humanitec.com/whitepapers/state-of-platform-engineering-report-volume-2):

https://humanitec.com/whitepapers/state-of-platform-engineering-report-volume-2

diff --git a/docs/technical-documentation/project/conceptual-onboarding/4_orchestrators/platform-architectures.webp b/docs/technical-documentation/project/conceptual-onboarding/4_orchestrators/platform-architectures.webp new file mode 100644 index 0000000..a53ed68 Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/4_orchestrators/platform-architectures.webp differ
diff --git a/docs/technical-documentation/project/conceptual-onboarding/4_orchestrators/platform-tooling-humanitec-platform-report-2024.PNG b/docs/technical-documentation/project/conceptual-onboarding/4_orchestrators/platform-tooling-humanitec-platform-report-2024.PNG new file mode 100644 index 0000000..9e240de Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/4_orchestrators/platform-tooling-humanitec-platform-report-2024.PNG differ
diff --git a/docs/technical-documentation/project/conceptual-onboarding/4_orchestrators/vendor-neutral-idp-final.gif b/docs/technical-documentation/project/conceptual-onboarding/4_orchestrators/vendor-neutral-idp-final.gif new file mode 100644 index 0000000..4306f37 Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/4_orchestrators/vendor-neutral-idp-final.gif differ
diff --git a/docs/technical-documentation/project/conceptual-onboarding/5_cnoe/_index.md b/docs/technical-documentation/project/conceptual-onboarding/5_cnoe/_index.md new file mode 100644 index 0000000..3788735 --- /dev/null +++ b/docs/technical-documentation/project/conceptual-onboarding/5_cnoe/_index.md @@ -0,0 +1,61 @@
---
title: CNOE
weight: 5
description: Our top candidate for a
platform orchestrator
+---
+
+{{% pageinfo color="info" %}}
+## Summary
+
+In late 2023 platform orchestration arose - the discipline of declaratively defining, building, orchestrating and reconciling building blocks of (digital) platforms.
+
+The most famous one is the platform orchestrator from Humanitec. They provide lots of concepts and assets, including open-sourced tools and schemas. But they have not open sourced the orchestrator itself.
+
+Thus we were looking for open source means for platform orchestration and found [CNOE](https://cnoe.io).
+{{% /pageinfo %}}
+
+## Requirements for an Orchestrator
+
+When we want to set up a [complete platform](../platforming/platforms-def.drawio.png) we expect to have:
+* a **schema** which defines the platform, its resources and internal behaviour
+* a **dynamic configuration or templating mechanism** to provide a concrete specification of a platform
+* a **deployment mechanism** to deploy and reconcile the platform
+
+This is what [CNOE delivers](https://cnoe.io/docs/intro/approach):
+
+> For seamless transition into a CNOE-compliant delivery pipeline, CNOE will aim at delivering **"packaging specifications"**, **"templating mechanisms"**, as well as **"deployer technologies"**, an example of which is enabled via the idpBuilder tool we have released. The combination of templates, specifications, and deployers allow for bundling and then unpacking of CNOE recommended tools into **a user's DevOps environment**. This enables teams to share and deliver components that are deemed to be the best tools for the job.
+
+## CNOE (capabilities) architecture
+
+### Architecture
+
+The CNOE architecture looks a bit different from the reference architecture from Humanitec, but this is just a matter of details and arrangement:
+
+> Hint: **This has to be further investigated!** This is subject to an Epic. 
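+
+Coming back to the three requirements above, a declarative platform definition could look roughly like the following sketch (purely illustrative - this is neither a real Humanitec nor a real CNOE schema, and all keys and names are made up):
+
+```yaml
+# Hypothetical platform definition - illustrates schema, templating and deployment
+apiVersion: example.io/v1alpha1   # made-up API group
+kind: PlatformDefinition
+metadata:
+  name: developer-platform
+spec:
+  # 1. schema: the building blocks the platform consists of
+  components:
+    - name: gitops
+      package: argo-cd
+    - name: portal
+      package: backstage
+  # 2. dynamic configuration: values templated per environment
+  values:
+    domain: "{{ .Values.baseDomain }}"
+    environment: dev
+  # 3. deployment mechanism: how the platform is reconciled
+  delivery:
+    reconciler: gitops
+    targetCluster: local-kind
+```
+
+An orchestrator would take such a definition, render the templates, and reconcile the result against the target cluster.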
+
+https://cnoe.io/
+
+### Capabilities
+
+You have a [definition of all the capabilities here](https://cnoe.io/docs/category/technology-capabilities):
+
+> Hint: **This has to be further investigated!** This is subject to an Epic.
+
+https://cnoe.io/docs/category/technology-capabilities
+
+## Stacks
+
+CNOE calls the [schema and templating mechanism 'stacks'](https://github.com/cnoe-io/stacks):
+
+> Hint: **This has to be further investigated!** This is subject to an Epic.
+
+There are already some example stacks:
+
+
+
+{{% pageinfo color="info" %}}
+### What comes next?
+
+[Next](../cnoe-showtime/) we'll see how a CNOE-stacked Internal Developer Platform is deployed on your local laptop!
+{{% /pageinfo %}}
diff --git a/docs/technical-documentation/project/conceptual-onboarding/5_cnoe/cnoe-architecture.png b/docs/technical-documentation/project/conceptual-onboarding/5_cnoe/cnoe-architecture.png
new file mode 100644
index 0000000..444c3b7
Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/5_cnoe/cnoe-architecture.png differ
diff --git a/docs/technical-documentation/project/conceptual-onboarding/5_cnoe/cnoe-capabilities.png b/docs/technical-documentation/project/conceptual-onboarding/5_cnoe/cnoe-capabilities.png
new file mode 100644
index 0000000..573d9b4
Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/5_cnoe/cnoe-capabilities.png differ
diff --git a/docs/technical-documentation/project/conceptual-onboarding/5_cnoe/cnoe-stacks.png b/docs/technical-documentation/project/conceptual-onboarding/5_cnoe/cnoe-stacks.png
new file mode 100644
index 0000000..0bf3871
Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/5_cnoe/cnoe-stacks.png differ
diff --git a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/_index.md b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/_index.md
new file mode 100644
index 
0000000..ab7be8e
--- /dev/null
+++ b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/_index.md
@@ -0,0 +1,580 @@
+---
+title: CNOE Showtime
+weight: 6
+description: CNOE hands on
+---
+
+{{% pageinfo color="info" %}}
+## Summary
+
+CNOE is a 'Platform Engineering Framework' (Danger: Our wording!) - it is open source and locally runnable.
+
+It consists of the orchestrator 'idpbuilder', some predefined building blocks, and some predefined platform configurations.
+
+{{% /pageinfo %}}
+
+
+## Orchestrator 'idpbuilder', initial run
+
+The orchestrator in CNOE is called 'idpbuilder'. It is a [locally installable binary](https://cnoe.io/docs/reference-implementation/installations/idpbuilder/quick-start).
+
+A typical first setup is described here: https://cnoe.io/docs/reference-implementation/technology
+
+```bash
+# this is a local linux shell
+
+# check local installation
+type idpbuilder
+idpbuilder is /usr/local/bin/idpbuilder
+
+# check version
+idpbuilder version
+idpbuilder 0.8.0-nightly.20240914 go1.22.7 linux/amd64
+
+# do some completion and aliasing
+source <(idpbuilder completion bash)
+alias ib=idpbuilder
+complete -F __start_idpbuilder ib
+
+# check and remove all existing kind clusters
+kind delete clusters --all
+kind get clusters
+# prints sth. like 'No kind clusters found.' 
+
+# run
+ib create --use-path-routing --log-level debug --package-dir https://github.com/cnoe-io/stacks//ref-implementation
+```
+
+You get output like:
+
+```bash
+stl@ubuntu-vpn:~/git/mms/ipceicis-developerframework$ idpbuilder create
+Oct 1 10:09:18 INFO Creating kind cluster logger=setup
+Oct 1 10:09:18 INFO Runtime detected logger=setup provider=docker
+########################### Our kind config ############################
+# Kind kubernetes release images https://github.com/kubernetes-sigs/kind/releases
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+nodes:
+- role: control-plane
+  image: "kindest/node:v1.30.0"
+  labels:
+    ingress-ready: "true"
+  extraPortMappings:
+  - containerPort: 443
+    hostPort: 8443
+    protocol: TCP
+
+containerdConfigPatches:
+- |-
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gitea.cnoe.localtest.me:8443"]
+    endpoint = ["https://gitea.cnoe.localtest.me"]
+  [plugins."io.containerd.grpc.v1.cri".registry.configs."gitea.cnoe.localtest.me".tls]
+    insecure_skip_verify = true
+
+######################### config end ############################
+```
+
+## Show time steps
+
+> Go to https://cnoe.io/docs/reference-implementation/installations/idpbuilder/usage, and follow the flow
+
+### Prepare a k8s cluster with kind
+
+You may have seen: when starting `idpbuilder` without an existing cluster named `localdev`, it first creates this cluster by calling `kind` with an internally defined config.
+
+An important feature of idpbuilder is that it builds upon the existing cluster `localdev` when called several times in a row, e.g. to append new packages to the cluster. 
+
+That's why we first create the kind cluster `localdev` ourselves:
+
+```bash
+cat << EOF | kind create cluster --name localdev --config=-
+# Kind kubernetes release images https://github.com/kubernetes-sigs/kind/releases
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+nodes:
+- role: control-plane
+  image: "kindest/node:v1.30.0"
+  labels:
+    ingress-ready: "true"
+  extraPortMappings:
+  - containerPort: 443
+    hostPort: 8443
+    protocol: TCP
+
+containerdConfigPatches:
+- |-
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gitea.cnoe.localtest.me:8443"]
+    endpoint = ["https://gitea.cnoe.localtest.me"]
+  [plugins."io.containerd.grpc.v1.cri".registry.configs."gitea.cnoe.localtest.me".tls]
+    insecure_skip_verify = true
+EOF
+```
+
+```bash
+# alternatively, if you have the kind config as a file:
+kind create cluster --name localdev --config kind-config.yaml
+```
+
+#### Output
+
+A freshly set up raw kind Kubernetes cluster would look like this with respect to running pods:
+
+> be sure all pods are in status 'Running'
+
+```bash
+stl@ubuntu-vpn:~/git/mms/idpbuilder$ k get pods -A
+NAMESPACE            NAME                                             READY   STATUS    RESTARTS   AGE
+kube-system          coredns-76f75df574-lb7jz                         1/1     Running   0          15s
+kube-system          coredns-76f75df574-zm2wp                         1/1     Running   0          15s
+kube-system          etcd-localdev-control-plane                      1/1     Running   0          27s
+kube-system          kindnet-8qhd5                                    1/1     Running   0          13s
+kube-system          kindnet-r4d6n                                    1/1     Running   0          15s
+kube-system          kube-apiserver-localdev-control-plane            1/1     Running   0          27s
+kube-system          kube-controller-manager-localdev-control-plane   1/1     Running   0          27s
+kube-system          kube-proxy-vrh64                                 1/1     Running   0          15s
+kube-system          kube-proxy-w8ks9                                 1/1     Running   0          13s
+kube-system          kube-scheduler-localdev-control-plane            1/1     Running   0          27s
+local-path-storage   local-path-provisioner-6f8956fb48-6fvt2          1/1     Running   0          15s
+```
+
+### First run: Start with core applications, 'core package'
+
+Now we run idpbuilder for the first time:
+
+```bash
+# now idpbuilder reuses the already existing cluster
+# please use '--use-path-routing' 
otherwise, as we discovered, service resolution inside the cluster currently won't work
+ib create --use-path-routing
+```
+
+#### Output
+
+##### idpbuilder log
+
+```bash
+stl@ubuntu-vpn:~/git/mms/idpbuilder$ ib create --use-path-routing
+Oct 1 10:32:50 INFO Creating kind cluster logger=setup
+Oct 1 10:32:50 INFO Runtime detected logger=setup provider=docker
+Oct 1 10:32:50 INFO Cluster already exists logger=setup cluster=localdev
+Oct 1 10:32:50 INFO Adding CRDs to the cluster logger=setup
+Oct 1 10:32:51 INFO Setting up CoreDNS logger=setup
+Oct 1 10:32:51 INFO Setting up TLS certificate logger=setup
+Oct 1 10:32:51 INFO Creating localbuild resource logger=setup
+Oct 1 10:32:51 INFO Starting EventSource controller=gitrepository controllerGroup=idpbuilder.cnoe.io controllerKind=GitRepository source=kind source: *v1alpha1.GitRepository
+Oct 1 10:32:51 INFO Starting Controller controller=gitrepository controllerGroup=idpbuilder.cnoe.io controllerKind=GitRepository
+Oct 1 10:32:51 INFO Starting EventSource controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild source=kind source: *v1alpha1.Localbuild
+Oct 1 10:32:51 INFO Starting Controller controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild
+Oct 1 10:32:51 INFO Starting EventSource controller=custompackage controllerGroup=idpbuilder.cnoe.io controllerKind=CustomPackage source=kind source: *v1alpha1.CustomPackage
+Oct 1 10:32:51 INFO Starting Controller controller=custompackage controllerGroup=idpbuilder.cnoe.io controllerKind=CustomPackage
+Oct 1 10:32:51 INFO Starting workers controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild worker count=1
+Oct 1 10:32:51 INFO Starting workers controller=custompackage controllerGroup=idpbuilder.cnoe.io controllerKind=CustomPackage worker count=1
+Oct 1 10:32:51 INFO Starting workers controller=gitrepository controllerGroup=idpbuilder.cnoe.io controllerKind=GitRepository worker count=1 
+Oct 1 10:32:54 INFO Waiting for Deployment my-gitea to become ready controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=6fc396d4-e0d5-4c80-aaee-20c1bbffea54 +Oct 1 10:32:54 INFO Waiting for Deployment ingress-nginx-controller to become ready controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=6fc396d4-e0d5-4c80-aaee-20c1bbffea54 +Oct 1 10:33:24 INFO Waiting for Deployment my-gitea to become ready controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=6fc396d4-e0d5-4c80-aaee-20c1bbffea54 +Oct 1 10:33:24 INFO Waiting for Deployment ingress-nginx-controller to become ready controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=6fc396d4-e0d5-4c80-aaee-20c1bbffea54 +Oct 1 10:33:54 INFO Waiting for Deployment my-gitea to become ready controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=6fc396d4-e0d5-4c80-aaee-20c1bbffea54 +Oct 1 10:34:24 INFO installing bootstrap apps to ArgoCD controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=6fc396d4-e0d5-4c80-aaee-20c1bbffea54 +Oct 1 10:34:24 INFO expected annotation, cnoe.io/last-observed-cli-start-time, not found controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=6fc396d4-e0d5-4c80-aaee-20c1bbffea54 +Oct 1 10:34:24 INFO Checking if we should shutdown controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=6fc396d4-e0d5-4c80-aaee-20c1bbffea54 +Oct 1 10:34:25 INFO installing bootstrap apps to ArgoCD controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild 
name=localdev name=localdev reconcileID=0667fa85-af1c-403f-bcd9-16ff8f2fad7e +Oct 1 10:34:25 INFO expected annotation, cnoe.io/last-observed-cli-start-time, not found controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=0667fa85-af1c-403f-bcd9-16ff8f2fad7e +Oct 1 10:34:25 INFO Checking if we should shutdown controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=0667fa85-af1c-403f-bcd9-16ff8f2fad7e +Oct 1 10:34:40 INFO installing bootstrap apps to ArgoCD controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=ec720aeb-02cd-4974-a991-cf2f19ce8536 +Oct 1 10:34:40 INFO Checking if we should shutdown controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=ec720aeb-02cd-4974-a991-cf2f19ce8536 +Oct 1 10:34:40 INFO Shutting Down controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=ec720aeb-02cd-4974-a991-cf2f19ce8536 +Oct 1 10:34:40 INFO Stopping and waiting for non leader election runnables +Oct 1 10:34:40 INFO Stopping and waiting for leader election runnables +Oct 1 10:34:40 INFO Shutdown signal received, waiting for all workers to finish controller=gitrepository controllerGroup=idpbuilder.cnoe.io controllerKind=GitRepository +Oct 1 10:34:40 INFO Shutdown signal received, waiting for all workers to finish controller=custompackage controllerGroup=idpbuilder.cnoe.io controllerKind=CustomPackage +Oct 1 10:34:40 INFO All workers finished controller=custompackage controllerGroup=idpbuilder.cnoe.io controllerKind=CustomPackage +Oct 1 10:34:40 INFO Shutdown signal received, waiting for all workers to finish controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild +Oct 1 10:34:40 INFO All workers finished 
controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild
+Oct 1 10:34:40 INFO All workers finished controller=gitrepository controllerGroup=idpbuilder.cnoe.io controllerKind=GitRepository
+Oct 1 10:34:40 INFO Stopping and waiting for caches
+Oct 1 10:34:40 INFO Stopping and waiting for webhooks
+Oct 1 10:34:40 INFO Stopping and waiting for HTTP servers
+Oct 1 10:34:40 INFO Wait completed, proceeding to shutdown the manager
+
+
+########################### Finished Creating IDP Successfully! ############################
+
+
+Can Access ArgoCD at https://cnoe.localtest.me:8443/argocd
+Username: admin
+Password can be retrieved by running: idpbuilder get secrets -p argocd
+```
+
+##### ArgoCD applications
+
+When running idpbuilder 'bare' (without a package option) you get the 'core applications' deployed in your cluster:
+
+```bash
+stl@ubuntu-vpn:~/git/mms/ipceicis-developerframework$ k get applications -A
+NAMESPACE   NAME     SYNC STATUS   HEALTH STATUS
+argocd      argocd   Synced        Healthy
+argocd      gitea    Synced        Healthy
+argocd      nginx    Synced        Healthy
+```
+
+##### ArgoCD UI
+
+Open ArgoCD locally:
+
+https://cnoe.localtest.me:8443/argocd
+
+![alt text](image.png)
+
+Next find the provided credentials for ArgoCD (here: argocd-initial-admin-secret):
+
+```bash
+stl@ubuntu-vpn:~/git/mms/idpbuilder$ ib get secrets
+---------------------------
+Name: argocd-initial-admin-secret
+Namespace: argocd
+Data:
+  password : 2MoMeW30wSC9EraF
+  username : admin
+---------------------------
+Name: gitea-credential
+Namespace: gitea
+Data:
+  password : LI$T?o>N{-<|{^dm$eTg*gni1(2:Y0@q344yqQIS
+  username : giteaAdmin
+```
+
+In ArgoCD you will see the three deployed applications of the core package:
+
+![alt text](image-1.png)
+
+### Second run: Append 'package1' from the CNOE-stacks repo
+
+CNOE provides example packages in `https://github.com/cnoe-io/stacks.git`. 
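+
+Such a package is basically just a directory containing an Argo CD `Application` manifest plus the Kubernetes manifests it deploys. As a sketch - all names are illustrative, and the `cnoe://` prefix (which, as far as we understand, makes idpbuilder push the local `manifests/` directory into its internal Gitea and rewrite the repoURL accordingly) should be verified against the stacks repo - a minimal `app.yaml` could look like this:
+
+```yaml
+# Hypothetical custom package: app.yaml (illustrative names)
+apiVersion: argoproj.io/v1alpha1
+kind: Application
+metadata:
+  name: my-app
+  namespace: argocd
+spec:
+  project: default
+  source:
+    # assumption: 'cnoe://' points idpbuilder at the local ./manifests directory
+    repoURL: cnoe://manifests
+    targetRevision: HEAD
+    path: "."
+  destination:
+    server: https://kubernetes.default.svc
+    namespace: my-app
+  syncPolicy:
+    automated:
+      selfHeal: true
+```
+
+This matches the layout of `basic/package1` shown below: an `app.yaml` next to a `manifests/` folder.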
Having cloned this repo you can locally refer to these packages:
+
+```bash
+stl@ubuntu-vpn:~/git/mms/cnoe-stacks$ git remote -v
+origin https://github.com/cnoe-io/stacks.git (fetch)
+origin https://github.com/cnoe-io/stacks.git (push)
+```
+
+```bash
+stl@ubuntu-vpn:~/git/mms/cnoe-stacks$ ls -al
+total 64
+drwxr-xr-x 12 stl stl 4096 Sep 28 13:55 .
+drwxr-xr-x 26 stl stl 4096 Sep 30 11:53 ..
+drwxr-xr-x 8 stl stl 4096 Sep 28 13:56 .git
+drwxr-xr-x 4 stl stl 4096 Jul 29 10:57 .github
+-rw-r--r-- 1 stl stl 11341 Sep 28 09:12 LICENSE
+-rw-r--r-- 1 stl stl 1079 Sep 28 13:55 README.md
+drwxr-xr-x 4 stl stl 4096 Jul 29 10:57 basic
+drwxr-xr-x 4 stl stl 4096 Sep 14 15:54 crossplane-integrations
+drwxr-xr-x 3 stl stl 4096 Aug 13 14:52 dapr-integration
+drwxr-xr-x 3 stl stl 4096 Sep 14 15:54 jupyterhub
+drwxr-xr-x 6 stl stl 4096 Aug 16 14:36 local-backup
+drwxr-xr-x 3 stl stl 4096 Aug 16 14:36 localstack-integration
+drwxr-xr-x 8 stl stl 4096 Sep 28 13:02 ref-implementation
+drwxr-xr-x 2 stl stl 4096 Aug 16 14:36 terraform-integrations
+
+stl@ubuntu-vpn:~/git/mms/cnoe-stacks$ ls -al basic/
+total 20
+drwxr-xr-x 4 stl stl 4096 Jul 29 10:57 .
+drwxr-xr-x 12 stl stl 4096 Sep 28 13:55 ..
+-rw-r--r-- 1 stl stl 632 Jul 29 10:57 README.md
+drwxr-xr-x 3 stl stl 4096 Jul 29 10:57 package1
+drwxr-xr-x 2 stl stl 4096 Jul 29 10:57 package2
+
+stl@ubuntu-vpn:~/git/mms/cnoe-stacks$ ls -al basic/package1
+total 16
+drwxr-xr-x 3 stl stl 4096 Jul 29 10:57 .
+drwxr-xr-x 4 stl stl 4096 Jul 29 10:57 ..
+-rw-r--r-- 1 stl stl 655 Jul 29 10:57 app.yaml
+drwxr-xr-x 2 stl stl 4096 Jul 29 10:57 manifests
+
+stl@ubuntu-vpn:~/git/mms/cnoe-stacks$ ls -al basic/package2
+total 16
+drwxr-xr-x 2 stl stl 4096 Jul 29 10:57 .
+drwxr-xr-x 4 stl stl 4096 Jul 29 10:57 .. 
+-rw-r--r-- 1 stl stl 498 Jul 29 10:57 app.yaml +-rw-r--r-- 1 stl stl 500 Jul 29 10:57 app2.yaml +``` + +#### Output + +Now we run idpbuilder the second time with `-p basic/package1` + +##### idpbuilder log + +```bash +stl@ubuntu-vpn:~/git/mms/cnoe-stacks$ ib create --use-path-routing -p basic/package1 +Oct 1 12:09:27 INFO Creating kind cluster logger=setup +Oct 1 12:09:27 INFO Runtime detected logger=setup provider=docker +Oct 1 12:09:27 INFO Cluster already exists logger=setup cluster=localdev +Oct 1 12:09:28 INFO Adding CRDs to the cluster logger=setup +Oct 1 12:09:28 INFO Setting up CoreDNS logger=setup +Oct 1 12:09:28 INFO Setting up TLS certificate logger=setup +Oct 1 12:09:28 INFO Creating localbuild resource logger=setup +Oct 1 12:09:28 INFO Starting EventSource controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild source=kind source: *v1alpha1.Localbuild +Oct 1 12:09:28 INFO Starting Controller controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild +Oct 1 12:09:28 INFO Starting EventSource controller=custompackage controllerGroup=idpbuilder.cnoe.io controllerKind=CustomPackage source=kind source: *v1alpha1.CustomPackage +Oct 1 12:09:28 INFO Starting Controller controller=custompackage controllerGroup=idpbuilder.cnoe.io controllerKind=CustomPackage +Oct 1 12:09:28 INFO Starting EventSource controller=gitrepository controllerGroup=idpbuilder.cnoe.io controllerKind=GitRepository source=kind source: *v1alpha1.GitRepository +Oct 1 12:09:28 INFO Starting Controller controller=gitrepository controllerGroup=idpbuilder.cnoe.io controllerKind=GitRepository +Oct 1 12:09:28 INFO Starting workers controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild worker count=1 +Oct 1 12:09:28 INFO Starting workers controller=gitrepository controllerGroup=idpbuilder.cnoe.io controllerKind=GitRepository worker count=1 +Oct 1 12:09:28 INFO Starting workers controller=custompackage 
controllerGroup=idpbuilder.cnoe.io controllerKind=CustomPackage worker count=1 +Oct 1 12:09:29 INFO installing bootstrap apps to ArgoCD controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=0ed7ccc2-dec7-4ab8-909c-791a7d1b67a8 +Oct 1 12:09:29 INFO unknown field "status.history[0].initiatedBy" logger=KubeAPIWarningLogger +Oct 1 12:09:29 INFO Checking if we should shutdown controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=0ed7ccc2-dec7-4ab8-909c-791a7d1b67a8 +Oct 1 12:09:29 ERROR failed updating repo status controller=custompackage controllerGroup=idpbuilder.cnoe.io controllerKind=CustomPackage name=app-my-app namespace=idpbuilder-localdev namespace=idpbuilder-localdev name=app-my-app reconcileID=f9873560-5dcd-4e59-b6f7-ce5d1029ef3d err=Operation cannot be fulfilled on custompackages.idpbuilder.cnoe.io "app-my-app": the object has been modified; please apply your changes to the latest version and try again +Oct 1 12:09:29 ERROR Reconciler error controller=custompackage controllerGroup=idpbuilder.cnoe.io controllerKind=CustomPackage name=app-my-app namespace=idpbuilder-localdev namespace=idpbuilder-localdev name=app-my-app reconcileID=f9873560-5dcd-4e59-b6f7-ce5d1029ef3d err=updating argocd application object my-app: Operation cannot be fulfilled on applications.argoproj.io "my-app": the object has been modified; please apply your changes to the latest version and try again +Oct 1 12:09:31 INFO installing bootstrap apps to ArgoCD controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=531cc2d4-6250-493a-aca8-fecf048a608d +Oct 1 12:09:31 INFO Checking if we should shutdown controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=531cc2d4-6250-493a-aca8-fecf048a608d +Oct 1 12:09:44 INFO installing 
bootstrap apps to ArgoCD controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=022b9813-8708-4f4e-90d7-38f1e114c46f +Oct 1 12:09:44 INFO Checking if we should shutdown controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=022b9813-8708-4f4e-90d7-38f1e114c46f +Oct 1 12:10:00 INFO installing bootstrap apps to ArgoCD controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=79a85c21-42c1-41ec-ad03-2bb77aeae027 +Oct 1 12:10:00 INFO Checking if we should shutdown controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=79a85c21-42c1-41ec-ad03-2bb77aeae027 +Oct 1 12:10:00 INFO Shutting Down controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild name=localdev name=localdev reconcileID=79a85c21-42c1-41ec-ad03-2bb77aeae027 +Oct 1 12:10:00 INFO Stopping and waiting for non leader election runnables +Oct 1 12:10:00 INFO Stopping and waiting for leader election runnables +Oct 1 12:10:00 INFO Shutdown signal received, waiting for all workers to finish controller=custompackage controllerGroup=idpbuilder.cnoe.io controllerKind=CustomPackage +Oct 1 12:10:00 INFO Shutdown signal received, waiting for all workers to finish controller=gitrepository controllerGroup=idpbuilder.cnoe.io controllerKind=GitRepository +Oct 1 12:10:00 INFO All workers finished controller=custompackage controllerGroup=idpbuilder.cnoe.io controllerKind=CustomPackage +Oct 1 12:10:00 INFO Shutdown signal received, waiting for all workers to finish controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild +Oct 1 12:10:00 INFO All workers finished controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild +Oct 1 12:10:00 INFO All workers finished controller=gitrepository 
controllerGroup=idpbuilder.cnoe.io controllerKind=GitRepository
+Oct 1 12:10:00 INFO Stopping and waiting for caches
+Oct 1 12:10:00 INFO Stopping and waiting for webhooks
+Oct 1 12:10:00 INFO Stopping and waiting for HTTP servers
+Oct 1 12:10:00 INFO Wait completed, proceeding to shutdown the manager
+
+
+########################### Finished Creating IDP Successfully! ############################
+
+
+Can Access ArgoCD at https://cnoe.localtest.me:8443/argocd
+Username: admin
+Password can be retrieved by running: idpbuilder get secrets -p argocd
+```
+
+##### ArgoCD applications
+
+Now we additionally have 'my-app' deployed in the cluster:
+
+```bash
+stl@ubuntu-vpn:~$ k get applications -A
+NAMESPACE   NAME     SYNC STATUS   HEALTH STATUS
+argocd      argocd   Synced        Healthy
+argocd      gitea    Synced        Healthy
+argocd      my-app   Synced        Healthy
+argocd      nginx    Synced        Healthy
+```
+
+##### ArgoCD UI
+
+![alt text](image-2.png)
+
+### Third run: Finally we append 'ref-implementation' from the CNOE-stacks repo
+
+We finally append the so-called ['reference-implementation'](https://cnoe.io/docs/reference-implementation/integrations/reference-impl), which provides a real, basic IDP:
+
+```bash
+stl@ubuntu-vpn:~/git/mms/cnoe-stacks$ ib create --use-path-routing -p ref-implementation
+```
+
+##### ArgoCD applications
+
+```bash
+stl@ubuntu-vpn:~$ k get applications -A
+NAMESPACE   NAME                           SYNC STATUS   HEALTH STATUS
+argocd      argo-workflows                 Synced        Healthy
+argocd      argocd                         Synced        Healthy
+argocd      backstage                      Synced        Healthy
+argocd      included-backstage-templates   Synced        Healthy
+argocd      external-secrets               Synced        Healthy
+argocd      gitea                          Synced        Healthy
+argocd      keycloak                       Synced        Healthy
+argocd      metric-server                  Synced        Healthy
+argocd      my-app                         Synced        Healthy
+argocd      nginx                          Synced        Healthy
+argocd      spark-operator                 Synced        Healthy
+```
+
+##### ArgoCD UI
+
+ArgoCD shows all provisioned applications:
+
+![alt text](image-3.png)
+
+##### Keycloak UI
+
+In our cluster, Keycloak is also provisioned as the IAM solution. 
+
+Log in to Keycloak with 'cnoe-admin' and the KEYCLOAK_ADMIN_PASSWORD.
+
+These credentials are defined in the package:
+
+```bash
+stl@ubuntu-vpn:~/git/mms/cnoe-stacks$ cat ref-implementation/keycloak/manifests/keycloak-config.yaml | grep -i admin
+  group-admin-payload.json: |
+    {"name":"admin"}
+        "/admin"
+        ADMIN_PASSWORD=$(cat /var/secrets/KEYCLOAK_ADMIN_PASSWORD)
+          --data-urlencode "username=cnoe-admin" \
+          --data-urlencode "password=${ADMIN_PASSWORD}" \
+```
+
+```bash
+stl@ubuntu-vpn:~/git/mms/cnoe-stacks$ ib get secrets
+---------------------------
+Name: argocd-initial-admin-secret
+Namespace: argocd
+Data:
+  password : 2MoMeW30wSC9EraF
+  username : admin
+---------------------------
+Name: gitea-credential
+Namespace: gitea
+Data:
+  password : LI$T?o>N{-<|{^dm$eTg*gni1(2:Y0@q344yqQIS
+  username : giteaAdmin
+---------------------------
+Name: keycloak-config
+Namespace: keycloak
+Data:
+  KC_DB_PASSWORD : k3-1kgxxd/X2Cw//pX-uKMsmgWogEz5YGnb5
+  KC_DB_USERNAME : keycloak
+  KEYCLOAK_ADMIN_PASSWORD : zMSjv5eA0l/+0-MDAaaNe+rHRMrB2q0NssP-
+  POSTGRES_DB : keycloak
+  POSTGRES_PASSWORD : k3-1kgxxd/X2Cw//pX-uKMsmgWogEz5YGnb5
+  POSTGRES_USER : keycloak
+  USER_PASSWORD : Kd+0+/BqPRAvnLPZO-L2o/6DoBrzUeMsr29U
+```
+
+![alt text](image-4.png)
+
+
+##### Backstage UI
+
+For the Backstage login you can either use 'user1' with `USER_PASSWORD : Kd+0+/BqPRAvnLPZO-L2o/6DoBrzUeMsr29U` or create a new user in Keycloak:
+
+![](image-6.png)
+
+We create user 'ipcei' and also set a password (in tab 'Credentials'):
+
+![alt text](image-7.png)
+
+Now we can log into Backstage (remember: you could also use the already existing user 'user1'):
+
+![alt text](image-8.png)
+
+and see the basic setup of the Backstage portal:
+
+![alt text](image-9.png)
+
+### Use a Golden Path: 'Basic Deployment'
+
+Now we want to use the Backstage portal as a developer. 
In Backstage we create our own platform-based activity by using the golden path template 'Basic Deployment':
+
+![alt text](image-10.png)
+
+When we run it, we see 'golden path activities'
+
+![alt text](image-11.png)
+
+which finally result in a new catalogue entry:
+
+![alt text](image-12.png)
+
+#### Software development lifecycle
+
+When we follow the 'view source' link, we are taken directly to the git repo of our newly created application:
+
+![alt text](image-13.png)
+
+Check it out by cloning into a local git repo (note the GIT_SSL_NO_VERIFY=true env setting):
+
+```bash
+stl@ubuntu-vpn:~/git/mms/idp-temporary$ GIT_SSL_NO_VERIFY=true git clone https://cnoe.localtest.me:8443/gitea/giteaAdmin/basicdeployment.git
+Cloning into 'basicdeployment'...
+remote: Enumerating objects: 10, done.
+remote: Counting objects: 100% (10/10), done.
+remote: Compressing objects: 100% (8/8), done.
+remote: Total 10 (delta 0), reused 0 (delta 0), pack-reused 0 (from 0)
+Receiving objects: 100% (10/10), 47.62 KiB | 23.81 MiB/s, done. 
+
+stl@ubuntu-vpn:~/git/mms/idp-temporary$ cd basicdeployment/
+
+stl@ubuntu-vpn:~/git/mms/idp-temporary/basicdeployment$ ll
+total 24
+drwxr-xr-x 5 stl stl 4096 Oct 1 13:00 ./
+drwxr-xr-x 4 stl stl 4096 Oct 1 13:00 ../
+drwxr-xr-x 8 stl stl 4096 Oct 1 13:00 .git/
+-rw-r--r-- 1 stl stl 928 Oct 1 13:00 catalog-info.yaml
+drwxr-xr-x 3 stl stl 4096 Oct 1 13:00 docs/
+drwxr-xr-x 2 stl stl 4096 Oct 1 13:00 manifests/
+```
+
+#### Edit and change
+
+Change some things, like the description and the replicas:
+
+![alt text](image-16.png)
+
+#### Push
+
+Push your changes, using the giteaAdmin user to authenticate:
+
+```bash
+stl@ubuntu-vpn:~/git/mms/idp-temporary/basicdeployment$ ib get secrets
+---------------------------
+Name: argocd-initial-admin-secret
+Namespace: argocd
+Data:
+  password : 2MoMeW30wSC9EraF
+  username : admin
+---------------------------
+Name: gitea-credential
+Namespace: gitea
+Data:
+  password : LI$T?o>N{-<|{^dm$eTg*gni1(2:Y0@q344yqQIS
+  username : giteaAdmin
+---------------------------
+Name: keycloak-config
+Namespace: keycloak
+Data:
+  KC_DB_PASSWORD : k3-1kgxxd/X2Cw//pX-uKMsmgWogEz5YGnb5
+  KC_DB_USERNAME : keycloak
+  KEYCLOAK_ADMIN_PASSWORD : zMSjv5eA0l/+0-MDAaaNe+rHRMrB2q0NssP-
+  POSTGRES_DB : keycloak
+  POSTGRES_PASSWORD : k3-1kgxxd/X2Cw//pX-uKMsmgWogEz5YGnb5
+  POSTGRES_USER : keycloak
+  USER_PASSWORD : Kd+0+/BqPRAvnLPZO-L2o/6DoBrzUeMsr29U
+stl@ubuntu-vpn:~/git/mms/idp-temporary/basicdeployment$ GIT_SSL_NO_VERIFY=true git push
+Username for 'https://cnoe.localtest.me:8443': giteaAdmin
+Password for 'https://giteaAdmin@cnoe.localtest.me:8443':
+Enumerating objects: 5, done.
+Counting objects: 100% (5/5), done.
+Delta compression using up to 8 threads
+Compressing objects: 100% (3/3), done.
+Writing objects: 100% (3/3), 382 bytes | 382.00 KiB/s, done.
+Total 3 (delta 1), reused 0 (delta 0), pack-reused 0
+remote: . 
Processing 1 references
+remote: Processed 1 references in total
+To https://cnoe.localtest.me:8443/gitea/giteaAdmin/basicdeployment.git
+   69244d6..1269617  main -> main
+```
+
+#### Wait for GitOps magic: deployment into the 'production' cluster
+
+Next, wait a bit until GitOps does its magic and the desired state in the repo gets automatically deployed to the 'production' cluster:
+
+![alt text](image-14.png)
+
+![alt text](image-15.png)
+
+{{% pageinfo color="info" %}}
+### What comes next?
+
+The showtime of CNOE's high-level behaviour and usage scenarios is now finished. We set up an initial IDP and used a Backstage golden path to initialize and deploy a simple application.
+
+[Last but not least](../conclusio/) we want to sum up the whole way from DevOps to 'Frameworking' (is this the correct wording???)
+{{% /pageinfo %}}
diff --git a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-1.png b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-1.png
new file mode 100644
index 0000000..eba944a
Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-1.png differ
diff --git a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-10.png b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-10.png
new file mode 100644
index 0000000..ab4c37b
Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-10.png differ
diff --git a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-11.png b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-11.png
new file mode 100644
index 0000000..cae5bcf
Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-11.png differ
diff --git 
a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-12.png b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-12.png new file mode 100644 index 0000000..72ec49b Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-12.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-13.png b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-13.png new file mode 100644 index 0000000..8e79e48 Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-13.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-14.png b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-14.png new file mode 100644 index 0000000..9cb9b8d Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-14.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-15.png b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-15.png new file mode 100644 index 0000000..b2cff61 Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-15.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-16.png b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-16.png new file mode 100644 index 0000000..4d187bb Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-16.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-2.png b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-2.png new file mode 100644 index 
0000000..e27fbb6 Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-2.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-3.png b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-3.png new file mode 100644 index 0000000..e01f4f9 Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-3.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-4.png b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-4.png new file mode 100644 index 0000000..7404b75 Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-4.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-5.png b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-5.png new file mode 100644 index 0000000..259d6b5 Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-5.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-6.png b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-6.png new file mode 100644 index 0000000..259d6b5 Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-6.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-7.png b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-7.png new file mode 100644 index 0000000..f2b1f45 Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-7.png differ diff --git 
a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-8.png b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-8.png new file mode 100644 index 0000000..29caa47 Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-8.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-9.png b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-9.png new file mode 100644 index 0000000..31e1cbc Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image-9.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image.png b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image.png new file mode 100644 index 0000000..27f1fb4 Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/6_cnoe-showtime/image.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/README.md b/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/README.md new file mode 100644 index 0000000..769478d --- /dev/null +++ b/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/README.md @@ -0,0 +1,18 @@ +// how to create/export c4 images: +// see also https://likec4.dev/tooling/cli/ + +docker run -it --rm --name likec4 --user node -v $PWD:/app node bash +npm install likec4 +exit + +docker commit likec4 likec4 +docker run -it --rm --user node -v $PWD:/app -p 5173:5173 likec4 bash + +// as root +npx playwright install-deps +npx playwright install + +npm install likec4 + +// render +node@e20899c8046f:/app/content/en/docs/project/onboarding$ ./node_modules/.bin/likec4 export png -o ./images . 
\ No newline at end of file diff --git a/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/_index.md b/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/_index.md new file mode 100644 index 0000000..da262e3 --- /dev/null +++ b/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/_index.md @@ -0,0 +1,33 @@ +--- +title: Conclusio +weight: 7 +description: 'Summary and final thoughts: Always challenge these concepts, assumptions and claims!' +--- + +{{% pageinfo color="info" %}} +## Summary + +In the project 'Edge Developer Framework' we start with DevOps, set platforms on top to automate golden paths, and finally set 'frameworks' (aka 'Orchestrators') on top to get declarative, automated and reconcilable platforms. + +{{% /pageinfo %}} + + +## From DevOps via Platform to Framework Engineering + +We come from a well-known but already complex discipline called 'Platform Engineering', which is the next evolution of DevOps. +On top of these two domains we now have 'Framework Engineering', i.e. building dynamic, orchestrated and reconciling platforms: + +| Classic Platform engineering | New: Framework Orchestration on top of Platforms | Your job: Framework Engineer | +| -- | -- | -- | +| | | | + +## The whole picture of engineering + +So always keep in mind that as a 'Framework Engineer' you + * combine the skills of a platform and a DevOps engineer, + * do Framework, Platform and DevOps Engineering at the same time + * and produce results that have an impact on Framework, Platform and DevOps tools, layers and processes.
+ + The following diamond illustrates this: you are at the top, our baseline 'DevOps' is at the bottom + + \ No newline at end of file diff --git a/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/domain-architecture.c4 b/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/domain-architecture.c4 new file mode 100644 index 0000000..d4fd11a --- /dev/null +++ b/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/domain-architecture.c4 @@ -0,0 +1,102 @@ +specification { + tag engineering + element domain + element engineer { + style { + shape person + } + } +} + +model { + + engineer framework-engineer 'Framework Engineer' 'Build and maintain one platform orchestrating framework'{ + style { + color: sky + } + -> framework-engineering + -> platform-engineer + } + + domain framework-engineering 'Framework Engineering' 'Building and maintaining frameworks'{ + #engineering + style { + color: sky + } + -> framework + -> platform-engineering + } + + domain framework '"Framework" (IPCEI wording!)' 'A platform defining system' { + style { + color: sky + } + -> platform + } + + engineer platform-engineer 'Platform Engineer' { + style { + color: indigo + } + -> platform-engineering + -> devops-engineer + } + + domain platform-engineering 'Platform Engineering' 'Building and maintaining platforms' { + #engineering + style { + color: indigo + } + -> platform + -> devops-engineering + } + + domain platform 'Platform' 'A Devops defining system' { + style { + color: indigo + } + -> devops + } + + engineer devops-engineer 'Devops Engineer' { + style { + color: amber + } + -> devops-engineering + } + + domain devops-engineering 'Devops Engineering' 'Building and maintaining devops means' { + #engineering + style { + color: amber + } + -> devops + } + domain devops 'Devops' 'A software lifecycle enabling tool and process setup' { + style { + color: amber + } + } + +} + +views { + view modern { + title 'Modern Devops' + 
description 'Devops is abstracted by Platforms, Platforms are abstracted by Frameworks (IPCEI wording!)' + include element.kind==domain, element.kind==engineer + + } + + view layers { + include devops, platform, framework + } + + view layers-and-framework-engineer { + include devops, platform, framework, framework-engineering, framework-engineer + } + + view layers-and-platform-engineer { + include devops, platform, platform-engineering, platform-engineer + } +} \ No newline at end of file diff --git a/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/images/layers-and-framework-engineer.png b/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/images/layers-and-framework-engineer.png new file mode 100644 index 0000000..8e9aad6 Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/images/layers-and-framework-engineer.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/images/layers-and-platform-engineer.png b/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/images/layers-and-platform-engineer.png new file mode 100644 index 0000000..09d4e03 Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/images/layers-and-platform-engineer.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/images/layers.png b/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/images/layers.png new file mode 100644 index 0000000..399a4d0 Binary files /dev/null and b/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/images/layers.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/images/modern.png b/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/images/modern.png new file mode 100644 index 0000000..5cbaa69 Binary files /dev/null and
b/docs/technical-documentation/project/conceptual-onboarding/7_conclusio/images/modern.png differ diff --git a/docs/technical-documentation/project/conceptual-onboarding/_index.md b/docs/technical-documentation/project/conceptual-onboarding/_index.md new file mode 100644 index 0000000..52cb204 --- /dev/null +++ b/docs/technical-documentation/project/conceptual-onboarding/_index.md @@ -0,0 +1,9 @@ +--- +title: 'Platform 101: Conceptual Onboarding' +linktitle: Conceptual Onboarding +weight: 20 +description: How to conceptually onboard onto the Edge Developer Framework (EDF) requirements and the designed solution +--- + + + diff --git a/docs/technical-documentation/project/conceptual-onboarding/storyline.md b/docs/technical-documentation/project/conceptual-onboarding/storyline.md new file mode 100644 index 0000000..11d2997 --- /dev/null +++ b/docs/technical-documentation/project/conceptual-onboarding/storyline.md @@ -0,0 +1,28 @@ + +## Storyline + +1. We have the 'Developer Framework' +2. We think the solution for DF is 'Platforming' (Digital Platforms) + 1. The next evolution after DevOps + 2. Gartner predicts that 80% of SWE companies will have platforms by 2026 + 3. Platforms have a history since around 2019 + 4. CNCF has a working group which created capabilities and a maturity model +3. Platforms evolve - nowadays there are Platform Orchestrators + 1. Humanitec set up a Reference Architecture + 2. There is this 'Orchestrator' thing - declaratively describe, customize and change platforms! +4. Mapping our assumptions to solutions + 1. CNOE is a hot candidate to help and fulfill our platform building + 2. CNOE aims to embrace change and customization! +5. Showtime CNOE + + +## Challenges + +1. Don't forget to further investigate and truly understand **DF needs** +2. Don't forget to further investigate and truly understand **Platform capabilities** +3. Don't forget to further investigate and truly understand **Platform orchestration** +4. Don't forget to further investigate and truly understand the **CNOE solution** + +## Architecture + + diff --git a/docs/technical-documentation/project/intro-stakeholder-workshop/DevOps-Lifecycle-1.jpg b/docs/technical-documentation/project/intro-stakeholder-workshop/DevOps-Lifecycle-1.jpg new file mode 100644 index 0000000..c40430a Binary files /dev/null and b/docs/technical-documentation/project/intro-stakeholder-workshop/DevOps-Lifecycle-1.jpg differ diff --git a/docs/technical-documentation/project/intro-stakeholder-workshop/DevOps-Lifecycle.jpg b/docs/technical-documentation/project/intro-stakeholder-workshop/DevOps-Lifecycle.jpg new file mode 100644 index 0000000..c40430a Binary files /dev/null and b/docs/technical-documentation/project/intro-stakeholder-workshop/DevOps-Lifecycle.jpg differ diff --git a/docs/technical-documentation/project/intro-stakeholder-workshop/_index.md b/docs/technical-documentation/project/intro-stakeholder-workshop/_index.md new file mode 100644 index 0000000..63b29e9 --- /dev/null +++ b/docs/technical-documentation/project/intro-stakeholder-workshop/_index.md @@ -0,0 +1,95 @@ +--- +title: Stakeholder Workshop Intro +weight: 50 +description: An overall eDF introduction for stakeholders +linktitle: Stakeholder Workshops +--- + + +## Edge Developer Framework Solution Overview + +> This section is derived from [conceptual-onboarding-intro](../conceptual-onboarding/1_intro/) + +1. As presented in the introduction: We have the ['Edge Developer Framework'](./edgel-developer-framework/). \ + In short, the mission is: + * Build a European edge cloud, the IPCEI-CIS, + * which contains the typical layers: infrastructure, platform, application, + * and on top has a new layer, the 'developer platform', + * which delivers a **cutting-edge developer experience** and enables **easy deployment** of applications onto the IPCEI-CIS +2. We think the solution for EDF is related to ['Platforming' (Digital Platforms)](../conceptual-onboarding/3_platforming/) + 1. 
The next evolution after DevOps + 2. Gartner predicts that 80% of SWE companies will have platforms by 2026 + 3. Platforms have a history since around 2019 + 4. CNCF has a working group which created capabilities and a maturity model +3. Platforms evolve - nowadays there are [Platform Orchestrators](../conceptual-onboarding/4_orchestrators/) + 1. Humanitec set up a Reference Architecture + 2. There is this 'Orchestrator' thing - declaratively describe, customize and change platforms! +4. Mapping our assumptions to the [CNOE solution](../conceptual-onboarding/5_cnoe/) + 1. CNOE is a hot candidate to help and fulfill our platform building + 2. CNOE aims to embrace change and customization! + + +## 2. Platforming as the result of DevOps + +### DevOps since 2010 + +![alt text](DevOps-Lifecycle.jpg) + +* from 'left' to 'right' - plan to monitor +* 'shift left' +* --> turns out to be a shift right for developers, with cognitive overload +* 'DevOps is dead' -> we need Platforms + +### Platforming to provide 'golden paths' + +> don't mix up 'golden paths' with pipelines or CI/CD + +![alt text](../conceptual-onboarding/3_platforming/humanitec-history.png) + +#### Short list of platform-using companies + +As [Gartner states](https://www.gartner.com/en/newsroom/press-releases/2023-11-28-gartner-hype-cycle-shows-ai-practices-and-platform-engineering-will-reach-mainstream-adoption-in-software-engineering-in-two-to-five-years): "By 2026, 80% of software engineering organizations will establish platform teams as internal providers of reusable services, components and tools for application delivery." + +Here is a small list of companies already using IDPs: + +* Spotify +* Airbnb +* Zalando +* Uber +* Netflix +* Salesforce +* Google +* Booking.com +* Amazon +* Autodesk +* Adobe +* Cisco +* ... 
+ +## 3 Platform building by 'Orchestrating' + +So the goal of platforming is to build a 'digital platform' which fits [this architecture](https://www.gartner.com/en/infrastructure-and-it-operations-leaders/topics/platform-engineering) ([Ref. in German](https://www.gartner.de/de/artikel/was-ist-platform-engineering)): + +![alt text](image.png) + +### Digital Platform blueprint: Reference Architecture + +The blueprint for such a platform is given by the reference architecture from Humanitec: + +[Platform Orchestrators](../conceptual-onboarding/4_orchestrators/) + +### Digital Platform builder: CNOE + +Since 2023 this is done by 'orchestrating' such platforms. One orchestrator is the [CNOE solution](../conceptual-onboarding/5_cnoe/), which highly inspired our approach. + +In our orchestration engine we think in 'stacks' of 'packages' containing platform components. + + +## 4 Putting it all together: Our current platform, generated by platform orchestration + +Combining the platform orchestration concept, the reference architecture and the CNOE stack solution, [this is our current running platform minimum viable product](../plan-in-2024/image-2024-8-14_10-50-27.png). + +This will now be presented! Enjoy!
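
The 'GitOps magic' shown in the walkthrough follows one simple rule: a pipeline's job ends at committing the desired state to the control Git repository, and only the reconciler (ArgoCD) pulls that state onto the cluster. A minimal, purely illustrative shell sketch of this pull model (directories stand in for the control repo and the cluster; all file and variable names are invented):

```shell
#!/bin/sh
# Illustrative sketch of the GitOps pull model (not a real ArgoCD setup):
# a 'control repo' holds the desired state, and a reconciler pulls it
# into the 'live' environment until both match.
set -eu

WORK=$(mktemp -d)
CONTROL="$WORK/control-repo"   # stands in for the control Git repo
LIVE="$WORK/cluster"           # stands in for the target cluster
mkdir -p "$CONTROL" "$LIVE"

# A developer (or pipeline) changes the desired state in the control repo...
cat > "$CONTROL/deployment.yaml" <<'EOF'
replicas: 2
image: basicdeployment:1.0.1
EOF

# ...and the reconciler *pulls* it into the live environment;
# nothing ever pushes directly to the cluster.
reconcile() {
  cp "$CONTROL"/*.yaml "$LIVE"/
}
reconcile

# Live state now equals desired state.
diff "$CONTROL/deployment.yaml" "$LIVE/deployment.yaml" && echo "in sync"
```

The point of the sketch: the only write path into the 'cluster' is `reconcile`, which is exactly the property that makes the control repo the single source of truth.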
+ + + diff --git a/docs/technical-documentation/project/intro-stakeholder-workshop/devops.png b/docs/technical-documentation/project/intro-stakeholder-workshop/devops.png new file mode 100644 index 0000000..efa787a Binary files /dev/null and b/docs/technical-documentation/project/intro-stakeholder-workshop/devops.png differ diff --git a/docs/technical-documentation/project/intro-stakeholder-workshop/image.png b/docs/technical-documentation/project/intro-stakeholder-workshop/image.png new file mode 100644 index 0000000..ed09018 Binary files /dev/null and b/docs/technical-documentation/project/intro-stakeholder-workshop/image.png differ diff --git a/docs/technical-documentation/project/plan-in-2024/_index.md b/docs/technical-documentation/project/plan-in-2024/_index.md new file mode 100644 index 0000000..42c9e7f --- /dev/null +++ b/docs/technical-documentation/project/plan-in-2024/_index.md @@ -0,0 +1,45 @@ +--- +title: Plan in 2024 +weight: 30 +description: The planned project workload in 2024 +--- + + +## First Blueprint in 2024 + +Our first architectural blueprint for the IPCEI-CIS Developer Framework derives from Humanitec's Reference Architecture, see links in [Blog](../../blog/240823-archsession.md) + +![alt text](image-2024-8-14_10-50-27.png) + +## C4 Model + +> (for sources see ./ressources/architecture-c4) + +> How to use: install the C4lite VSC extension and/or the C4lite CLI - then open *.c4 files in ./ressources/architecture-c4 + +First system landscape C4 model: + +![c4-model](./planes.png) + +## In Confluence + +https://confluence.telekom-mms.com/display/IPCEICIS/Architecture + + +## Cloud sizing for the initial DevFramework + +### 28.08.24, Stefan Bethke, Florian Fürstenberg, Stephan Lo + +1) at first, many DevFramework platform engineers work locally, with central deployment to OTC into **one/at most two** control clusters +2) we initially assume approx. 5 clusters +3) each cluster with 3 nodes/VMs (in three availability zones) +4) per VM: 4 CPUs, 16 GB RAM, 50 GB storage read/write once, PVCs 'without limit' +5) public IPs, plus load balancer +6) Keycloak available +7) wildcard domain?? --> probably yes + +Next steps (proposal: within the next 2 weeks): +1. Florian specifies the requirements to Tobias +2. Tobias provisions; the kubeconfig comes to us +3. we deploy + diff --git a/docs/technical-documentation/project/plan-in-2024/image-2024-8-14_10-50-27.png b/docs/technical-documentation/project/plan-in-2024/image-2024-8-14_10-50-27.png new file mode 100644 index 0000000..5383236 Binary files /dev/null and b/docs/technical-documentation/project/plan-in-2024/image-2024-8-14_10-50-27.png differ diff --git a/docs/technical-documentation/project/plan-in-2024/planes.png b/docs/technical-documentation/project/plan-in-2024/planes.png new file mode 100755 index 0000000..cbd844b Binary files /dev/null and b/docs/technical-documentation/project/plan-in-2024/planes.png differ diff --git a/docs/technical-documentation/project/plan-in-2024/poc/_assets/image-1.png b/docs/technical-documentation/project/plan-in-2024/poc/_assets/image-1.png new file mode 100644 index 0000000..cd3bfdf Binary files /dev/null and b/docs/technical-documentation/project/plan-in-2024/poc/_assets/image-1.png differ diff --git a/docs/technical-documentation/project/plan-in-2024/poc/_assets/image.png b/docs/technical-documentation/project/plan-in-2024/poc/_assets/image.png new file mode 100644 index 0000000..f5c6665 Binary files /dev/null and b/docs/technical-documentation/project/plan-in-2024/poc/_assets/image.png differ diff --git a/docs/technical-documentation/project/plan-in-2024/poc/_index.md b/docs/technical-documentation/project/plan-in-2024/poc/_index.md new file mode 100644 index 0000000..b49e260 --- /dev/null +++ b/docs/technical-documentation/project/plan-in-2024/poc/_index.md @@ -0,0 +1,15 @@ +--- +title: PoC Structure +weight: 5 +description: Building plan of
the PoC milestone (end 2024) output +--- + +Presented and approved on Tuesday, 26.11.2024 within the team: + +![alt text](./_assets/image.png) + + +The use cases/application lifecycle and deployment flow is drawn here: https://confluence.telekom-mms.com/display/IPCEICIS/Proof+of+Concept+2024 + + +![alt text](./_assets/image-1.png) \ No newline at end of file diff --git a/docs/technical-documentation/project/plan-in-2024/streams/_index.md b/docs/technical-documentation/project/plan-in-2024/streams/_index.md new file mode 100644 index 0000000..d060a96 --- /dev/null +++ b/docs/technical-documentation/project/plan-in-2024/streams/_index.md @@ -0,0 +1,52 @@ +--- +title: Workstreams +weight: 2 +--- + +This page is WiP (23.8.2024). + +> Continued discussion on 29th Aug 24 +> * idea: top-down with SAFe, value streams +> * in parallel, bottom-up (e.g. emerging from the technical/operational activities) +> * Scrum Master? +> * claim: self-service in onboarding (BTW, exactly the promise of the Developer Framework) +> * org structure: Scrum of Scrums (?), max. 8-9 people + +Stefan and Stephan try to solve the mission 'wir wollen losmachen' ('we want to get going'). + +**Solution Idea**: + +1. First we define a **rough overall structure (see 'streams')** and propose some initial **activities** (like user stories) within them. +1. Next we work in **iterative steps** and iteratively produce progress, knowledge and outcomes in these activities. +1. Next the **whole team** decides on the next valuable steps + +## Overall Structure: Streams + +We discovered three **streams** in the first project steps (see also [blog](../../../blog/news/240823-archsession/_index.md)): + +1. Research, Fundamentals, Architecture +1. POCs (Applications, Platform-variants, ...) +1. Deployment, production-lifecycle + +```markmap +# +## Stream 'Fundamentals' +### [Platform-Definition](./fundamentals/platform-definition/) +### [CI/CD Definition](./fundamentals/cicd-definition/) +## Stream 'POC' +### [CNOE](./pocs/cnoe/) +### [Kratix](./pocs/kratix/) +### [SIA Asset](./pocs/sia-asset/) +### Backstage +### Telemetry +## Stream 'Deployment' +### [Forgejo](./deployment/forgejo/) +``` + +## DoR - Definition of Ready + +Before a task is implemented, a design must exist. + +Regarding the 'build-out' of platform components, the design must meet the following: + +1) The objective of the component must be captured diff --git a/docs/technical-documentation/project/plan-in-2024/streams/deployment/_index.md b/docs/technical-documentation/project/plan-in-2024/streams/deployment/_index.md new file mode 100644 index 0000000..ed797a0 --- /dev/null +++ b/docs/technical-documentation/project/plan-in-2024/streams/deployment/_index.md @@ -0,0 +1,14 @@ +--- +title: Deployment +weight: 3 +--- + +> **Mantra**: +> 1. Everything as Code. +> 1. Cloud natively deployable everywhere. +> 1. Ramping up and tearing down often is a no-brainer. +> 1. 
Especially locally (whereby 'locally' means 'under my own control') + +## Draft (28.8.24) + +![Deployment 2024](./deployment.drawio.png) \ No newline at end of file diff --git a/docs/technical-documentation/project/plan-in-2024/streams/deployment/deployment.drawio.png b/docs/technical-documentation/project/plan-in-2024/streams/deployment/deployment.drawio.png new file mode 100644 index 0000000..a5f11b7 Binary files /dev/null and b/docs/technical-documentation/project/plan-in-2024/streams/deployment/deployment.drawio.png differ diff --git a/docs/technical-documentation/project/plan-in-2024/streams/deployment/forgejo/_index.md b/docs/technical-documentation/project/plan-in-2024/streams/deployment/forgejo/_index.md new file mode 100644 index 0000000..7e10216 --- /dev/null +++ b/docs/technical-documentation/project/plan-in-2024/streams/deployment/forgejo/_index.md @@ -0,0 +1,36 @@ +--- +title: Activity 'Forgejo' +linkTitle: Forgejo +weight: 1 +--- + +> **WiP** I (Stephan) am quickly jotting down a few keywords of what I have heard from Stefan: + +## Summary + +tbd + +## Rationale + +* ... +* Design: Deployment Architecture (Platform Code vs. Application Code) +* Design: Integration in Developer Workflow +* ... + +## Task + +* ... +* Runner +* Tenants +* User Management +* ... +* tbc + + +## Issues + +### 28.08.24, Forgejo in OTC (planning: Stefan, Florian, Stephan) + +* STBE deploys with Helm into the provided OTC Kubernetes +* use the internal user database at first +* then, if needed, connect OIDC with the existing Keycloak in OTC \ No newline at end of file diff --git a/docs/technical-documentation/project/plan-in-2024/streams/fundamentals/_index.md b/docs/technical-documentation/project/plan-in-2024/streams/fundamentals/_index.md new file mode 100644 index 0000000..fd8d2df --- /dev/null +++ b/docs/technical-documentation/project/plan-in-2024/streams/fundamentals/_index.md @@ -0,0 +1,18 @@ +--- +title: Fundamentals +weight: 1 +--- + +## References + +### Fowler / Thoughtworks + +* https://martinfowler.com/articles/talk-about-platforms.html + +* https://www.thoughtworks.com/what-we-do/platforms/digital-platform-strategy + +![alt text](image.png) + +### Nice article about platform orchestration automation (introducing the BACK stack) + +* https://dev.to/thenjdevopsguy/creating-your-platform-engineering-environment-4hpa \ No newline at end of file diff --git a/docs/technical-documentation/project/plan-in-2024/streams/fundamentals/cicd-definition/_index.md b/docs/technical-documentation/project/plan-in-2024/streams/fundamentals/cicd-definition/_index.md new file mode 100644 index 0000000..2565522 --- /dev/null +++ b/docs/technical-documentation/project/plan-in-2024/streams/fundamentals/cicd-definition/_index.md @@ -0,0 +1,30 @@ +--- +title: Activity 'CI/CD Definition' +linkTitle: CI/CD Definition +weight: 2 +--- + +## Summary + +The production process for applications shall be designed in the context of GitOps and platforms and implemented as a dry run with a few workflow systems. + +## Rationale + +In GitOps-based platforms (note: such as CNOE and Humanitec with ArgoCD), the classic understanding of pipelining, with a final push of the finished build onto the target platform, no longer applies. + +That is, in this case ArgoCD = Continuous Delivery = pulling the desired state onto the target platform. A pipeline no longer has any permissions there; the single source of truth is the 'control Git'. + +This raises two questions: +1. What does the adapted workflow look like that defines the 'single source of truth' in the 'control Git'? What is the desired correct wording? What do CI and CD mean in this (new) context? On which environments do steps run (e.g. functional tests) that no longer run on a GitOps-controlled stage? +2. What does the workflow look like for 'events' that flow into the single source of truth after CI/CD? E.g. acceptance results from an acceptance stage, or integration problems on a test stage + +## Task + +* Existing, typical pipelines shall be taken, examined with respect to the questions outlined above, and adapted. +* In local demo systems (set up with or without CNOE), the pipeline drafts shall be represented as dummies and be runnable. +* For the PoC, workflow systems such as Dagger, Argo Workflows, Flux and Forgejo Actions shall be used. + + +## Further ideas for POCs + +* see sample flows in https://docs.kubefirst.io/ \ No newline at end of file diff --git a/docs/technical-documentation/project/plan-in-2024/streams/fundamentals/image.png b/docs/technical-documentation/project/plan-in-2024/streams/fundamentals/image.png new file mode 100644 index 0000000..eddc81b Binary files /dev/null and b/docs/technical-documentation/project/plan-in-2024/streams/fundamentals/image.png differ diff --git a/docs/technical-documentation/project/plan-in-2024/streams/fundamentals/platform-definition/_index.md b/docs/technical-documentation/project/plan-in-2024/streams/fundamentals/platform-definition/_index.md new file mode 100644 index 0000000..c4d21c9 --- /dev/null +++ b/docs/technical-documentation/project/plan-in-2024/streams/fundamentals/platform-definition/_index.md @@ -0,0 +1,30 @@ +--- +title: Activity 'Platform Definition' +linkTitle: Platform Definition +weight: 1 +--- + +## Summary + +The theoretical foundation of our platform architecture shall be substantiated, and further essential experiences of other players shall be gathered through research, so that our current target picture is secured. + +## Rationale + +We are currently starting on the basis of the reference model for platform engineering by Gartner and Humanitec. +There are many further foundations and developments around platform engineering. + +## Task + +* Compile who is leading what in the platform domain, cf. also the link list in the [Blog](../../../../../blog/240823-archsession.md) +* Which trend-setting platforms exist? +* Describe the reference architecture in our own terms +* Form terminology, create a glossary (e.g. stacks or resource bundles) +* Create architectures with control planes, seeders, targets, etc., which are sometimes co-located, sometimes not +* Describe how platform orchestration works (Score, KubeVela, DSLs, ... and the controllers for them) in various platform implementations +* Derive how our target picture and strategy follow from this. +* Compile the argumentation for our approach. +* Compile best practices and important tips and experiences.
+
+
+
+
diff --git a/docs/technical-documentation/project/plan-in-2024/streams/pocs/_index.md b/docs/technical-documentation/project/plan-in-2024/streams/pocs/_index.md
new file mode 100644
index 0000000..fb81dfc
--- /dev/null
+++ b/docs/technical-documentation/project/plan-in-2024/streams/pocs/_index.md
@@ -0,0 +1,8 @@
+---
+title: POCs
+weight: 2
+---
+
+## Further ideas for POCs
+
+* see sample apps 'metaphor' in https://docs.kubefirst.io/
\ No newline at end of file
diff --git a/docs/technical-documentation/project/plan-in-2024/streams/pocs/cnoe/_index.md b/docs/technical-documentation/project/plan-in-2024/streams/pocs/cnoe/_index.md
new file mode 100644
index 0000000..1a48228
--- /dev/null
+++ b/docs/technical-documentation/project/plan-in-2024/streams/pocs/cnoe/_index.md
@@ -0,0 +1,32 @@
+---
+title: Activity 'CNOE Investigation'
+linkTitle: CNOE
+weight: 1
+---
+
+
+## Summary
+
+As the designated base tool of the Developer Framework, the usage of CNOE and its options for extension shall be analysed.
+
+## Rationale
+
+CNOE is the designated tool for describing and rolling out the Developer Framework.
+This tool needs to be learned, described, and developed further.
+In particular, the meta character of 'software that provisions provisioning software for software', i.e. the different levels for different use cases and actors, shall be made clearly understandable and documented. See also the Humanitec webinar and the discussion of different ways to provision a Redis cache.
+
+## Task
+
+* Make CNOE declaratively startable in a local and, if available, existing cloud environment
+* Describe the architecture of CNOE, find the essential wording (e.g. orchestrator, stacks, component declaration, ...)
+* Run tests / validations
+* Create own 'stacks' (also in cooperation with the application POCs, e.g. SIA and telemetry)
+* Observe and challenge the wording and architecture of the activity ['Platform-Definition'](../../fundamentals/platform-definition/)
+* Everything that is startable and runnable should be scripted as automatically as possible and live, documented in git, in a repo
+
+## Issues / Ideas / Improvements
+
+* k3d instead of kind
+* kind: possible issue with kindnet, replace with Cilium
+
+
diff --git a/docs/technical-documentation/project/plan-in-2024/streams/pocs/kratix/_index.md b/docs/technical-documentation/project/plan-in-2024/streams/pocs/kratix/_index.md
new file mode 100644
index 0000000..fd65282
--- /dev/null
+++ b/docs/technical-documentation/project/plan-in-2024/streams/pocs/kratix/_index.md
@@ -0,0 +1,19 @@
+---
+title: Activity 'Kratix Investigation'
+linkTitle: Kratix
+weight: 3
+---
+
+
+## Summary
+
+Is [Kratix](https://www.kratix.io/) a valid alternative to CNOE?
+
+## Rationale
+
+
+## Task
+
+
+## Issues / Ideas / Improvements
+
diff --git a/docs/technical-documentation/project/plan-in-2024/streams/pocs/sia-asset/_index.md b/docs/technical-documentation/project/plan-in-2024/streams/pocs/sia-asset/_index.md
new file mode 100644
index 0000000..147f8a8
--- /dev/null
+++ b/docs/technical-documentation/project/plan-in-2024/streams/pocs/sia-asset/_index.md
@@ -0,0 +1,50 @@
+---
+title: Activity 'SIA Asset Golden Path Development'
+linkTitle: SIA Asset
+weight: 2
+---
+
+## Summary
+
+Implementation of a golden path in a CNOE/Backstage stack for the existing 'Composable SIA (Semasuite Integrator Asset)'.
+
+## Rationale
+
+The SIA asset is a development of the PC DC - it is a composable application that extends an online shop with the option of ordering by fax.
+The development started in January 2024 with a team of three people, two of them nearshore, and went through the typical first stages - first application code without integration, then local mocked integration, then local real integration, then integration on an integration environment, then production. Each climb to the next stage came with the creation of individual build and deployment code and trade-offs about how elaborate, sustainable, and usable the respective construct should be.
+There is no CI/CD - too much effort for such a small project.
+
+The expectation is that such a project can be mapped as a 'Golden Path' and that this speeds up development enormously.
+
+## Task
+
+* Lift SIA 'onto the platform' (whatever that means)
+* Transform the build code of SIA (the application and a shop) into a CI/CD workflow
+
+
+## References
+
+* https://platformengineering.org/blog/decoding-golden-paths-the-highway-for-your-developers
+
+
+## Scenario (see IPCEICIS-363)
+
+```mermaid
+graph TB
+    Developer[fa:fa-user developer]
+    PlatformDeliveryAndControlPlaneIDE[IDE]
+    subgraph LocalBox["localBox"]
+        LocalBox.EDF[Platform]
+        LocalBox.Local[local]
+    end
+    subgraph CloudGroup["cloudGroup"]
+        CloudGroup.Test[test]
+        CloudGroup.Prod[prod]
+    end
+    Developer -. "use preferred IDE as local code editing, building, testing, syncing tool" .-> PlatformDeliveryAndControlPlaneIDE
+    Developer -. "manage (in Developer Portal)" .-> LocalBox.EDF
+    PlatformDeliveryAndControlPlaneIDE -. "provide 'code'" .-> LocalBox.EDF
+    LocalBox.EDF -. "provision" .-> LocalBox.Local
+    LocalBox.EDF -. "provision" .-> CloudGroup.Prod
+    LocalBox.EDF -. 
"provision" .-> CloudGroup.Test +``` \ No newline at end of file diff --git a/docs/technical-documentation/project/team-process/_assets/P1.png b/docs/technical-documentation/project/team-process/_assets/P1.png new file mode 100644 index 0000000..43d35b7 Binary files /dev/null and b/docs/technical-documentation/project/team-process/_assets/P1.png differ diff --git a/docs/technical-documentation/project/team-process/_assets/P2.png b/docs/technical-documentation/project/team-process/_assets/P2.png new file mode 100644 index 0000000..a7b9deb Binary files /dev/null and b/docs/technical-documentation/project/team-process/_assets/P2.png differ diff --git a/docs/technical-documentation/project/team-process/_assets/P3.png b/docs/technical-documentation/project/team-process/_assets/P3.png new file mode 100644 index 0000000..6e295a7 Binary files /dev/null and b/docs/technical-documentation/project/team-process/_assets/P3.png differ diff --git a/docs/technical-documentation/project/team-process/_assets/P4.png b/docs/technical-documentation/project/team-process/_assets/P4.png new file mode 100644 index 0000000..f72f222 Binary files /dev/null and b/docs/technical-documentation/project/team-process/_assets/P4.png differ diff --git a/docs/technical-documentation/project/team-process/_assets/P5.png b/docs/technical-documentation/project/team-process/_assets/P5.png new file mode 100644 index 0000000..eb3314a Binary files /dev/null and b/docs/technical-documentation/project/team-process/_assets/P5.png differ diff --git a/docs/technical-documentation/project/team-process/_assets/P6.png b/docs/technical-documentation/project/team-process/_assets/P6.png new file mode 100644 index 0000000..f66aa56 Binary files /dev/null and b/docs/technical-documentation/project/team-process/_assets/P6.png differ diff --git a/docs/technical-documentation/project/team-process/_assets/P7.png b/docs/technical-documentation/project/team-process/_assets/P7.png new file mode 100644 index 0000000..b25b487 
Binary files /dev/null and b/docs/technical-documentation/project/team-process/_assets/P7.png differ
diff --git a/docs/technical-documentation/project/team-process/_assets/P8.png b/docs/technical-documentation/project/team-process/_assets/P8.png
new file mode 100644
index 0000000..bcd074f
Binary files /dev/null and b/docs/technical-documentation/project/team-process/_assets/P8.png differ
diff --git a/docs/technical-documentation/project/team-process/_assets/image.png b/docs/technical-documentation/project/team-process/_assets/image.png
new file mode 100644
index 0000000..ee73f3d
Binary files /dev/null and b/docs/technical-documentation/project/team-process/_assets/image.png differ
diff --git a/docs/technical-documentation/project/team-process/_index.md b/docs/technical-documentation/project/team-process/_index.md
new file mode 100644
index 0000000..1252cc0
--- /dev/null
+++ b/docs/technical-documentation/project/team-process/_index.md
@@ -0,0 +1,139 @@
+---
+title: Team and Work Structure
+weight: 50
+description: The way we work and produce runnable, presentable software
+linkTitle: Team-Process
+---
+
+This document describes a proposal for a team work structure, primarily to get the POC delivered successfully. Later on we will adjust and refine the process to fit the MVP.
+
+## Introduction
+
+### Rationale
+
+We currently face the following [challenges in our process](https://confluence.telekom-mms.com/display/IPCEICIS/Proof+of+Concept+2024):
+
+1. missing team alignment on the PoC output across all components
+    1. Action: the team is committed to **clearly defined PoC capabilities**
+    1. Action: every team member is aware of the **individual and common work** to be done (backlog) to achieve the PoC
+1. missing concept for the repository (process, structure, ...)
+    1. Action: the **PoC has a robust repository concept** up & running
+    1. Action: the repo concept is applicable to other repositories as well (esp. 
documentation repo)
+
+### General working context
+
+A **project goal** drives us as a **team** to create valuable **product output**.
+
+The **backlog** contains the product specification, which guides our work in **tasks** with the help and usage of **resources** (like git, 3rd-party code, knowledge, and so on).
+
+![alt text](./_assets/P1.png)
+
+Goal, backlog, tasks, and output must be in a well-defined context, such that the team can be productive.
+
+### POC and MVP working context
+
+This document has two targets: POC and MVP.
+
+Today is mid-November 2024, and we need to package the project results created since July 2024 to deliver the POC product.
+
+![alt text](./_assets/P2.png)
+
+> Think of the agenda's goal like this: Imagine Ralf the big sponsor passes by and sees 'edge Developer Framework' somewhere on your screen. Then he asks: 'Hey cool, you are one of these famous platform guys?! I always wanted to get a demo of how this framework looks!' \
+> **What are you going to show him?**
+
+## Team and Work Structure (POC first, MVP later)
+
+In the following we will look at the work structure proposal, primarily for the POC, but reusable for any other release or the MVP.
+
+### Consolidated POC (or any release later)
+
+![alt text](./_assets/P3.png)
+
+#### Responsibilities to reliably specify the deliverables
+
+![alt text](./_assets/P4.png)
+
+#### Todos
+
+1. SHOULD: Clarify context (arch, team, leads)
+1. MUST: Define deliverables (arch, team) (Hint: deliverables could be seen 1:1 as use cases - not sure about that right now)
+1. 
MUST: Define output structure (arch, leads)
+
+### Process (General): from deliverables to output (POC first, MVP later)
+
+Most important in the process are:
+
+* **traces** from tickets to outputs (as the clue to understand and control what is where)
+* **README.md** (as the clue for how to use the output)
+
+![alt text](./_assets/P5.png)
+
+### Output Structure POC
+
+Most important in the POC structure are:
+
+* one repo which is the product
+* a README which maps the project goals to the repo content
+* the content consists of capabilities
+* capabilities are shown ('proven') by use cases
+* the use cases are described in the deliverables
+
+![alt text](./_assets/P6.png)
+
+#### Glossary
+
+* README: user manual and storybook
+* Outcome: like resolution, but more verbose and detailed (especially when the resolution was 'Done'), so that state changes are easily recognisable
+
+### Work Structure Guidelines (POC first, MVP later)
+
+#### Structure
+
+1. each task and/or user story has at least a branch in an existing repo or a new, dedicated task repo
+   > recommended: multi-repo over monorepo
+1. each repo has a main and a development branch. development is the integration line
+1. pull requests are used to merge work outputs into the integration line
+1. optional (may be too cumbersome): each PR should be reflected as a comment in Jira
+
+#### Workflow (in any task / user story)
+
+1. when the output gets its own repo: `git init` --> always create a new repo as early as possible
+1. commit early and often
+1. comment on output and outcome whenever new work is done. This will typically correlate with a pull request, see above
+
+#### Definition of Done
+
+1. Jira: there is a final comment summarizing the outcome (in a bit more verbose form than just the 'resolution' of the ticket) and the main outputs. This may typically be a link to the commit and/or pull request of the final repo state
+2. Git/Repo: there is a README.md in the root of the repo. 
It summarizes, in a typical GitHub manner, how to use the repo, so that it does what it is intended to do and reveals all the bells and whistles of the repo to the consumer. If the README doesn't lead to usable and recognizable added value, the work is not done!
+
+#### Review
+
+1. Before a ticket gets finished (not yet defined which Jira state this is) there must be a review by a second team member
+1. the reviewing person may review whatever they want, but must at least check the README
+
+#### Out of scope (for now)
+
+The following topics are optional and do not need an agreement at the moment:
+
+1. Commit message syntax
+   > Recommendation: at least 'WiP' would be good if the state is experimental
+1. branch permissions
+1. branch clean-up policies
+1. squashing when merging into the integration line
+1. CI
+1. Tech blogs / gists
+1. Changelogs
+
+#### Integration of Jira with Forgejo (compare to https://github.com/atlassian/github-for-jira)
+
+1. Jira -> Forgejo: Create Branch
+1. Forgejo -> Jira:
+    1. commit
+    2. PR
+
+## Status of POC Capabilities
+
+The following table lists an analysis of the status of the ['Functionality validation' of the POC](https://confluence.telekom-mms.com/display/IPCEICIS/Proof+of+Concept+2024).
+Assumption: These functionalities should be the aforementioned capabilities.
+
+![alt text](./_assets/P8.png)
\ No newline at end of file
diff --git a/docs/technical-documentation/solution/_index.md b/docs/technical-documentation/solution/_index.md
new file mode 100644
index 0000000..aaac88e
--- /dev/null
+++ b/docs/technical-documentation/solution/_index.md
@@ -0,0 +1,6 @@
+---
+title: Solution
+weight: 2
+description: "The implemented platforming solutions of EDF, i.e. the solution domain. 
The documentation of all project output: design, building blocks, results, show cases, artifacts, and so on"
+---
+
diff --git a/docs/technical-documentation/solution/design/_index.md b/docs/technical-documentation/solution/design/_index.md
new file mode 100644
index 0000000..f5f925c
--- /dev/null
+++ b/docs/technical-documentation/solution/design/_index.md
@@ -0,0 +1,7 @@
+---
+title: Design
+weight: 1
+description: Edge Developer Framework Design Documents
+---
+
+This design documentation structure is inspired by the [design of crossplane](https://github.com/crossplane/crossplane/tree/main/design#readme).
diff --git a/docs/technical-documentation/solution/design/decision-iam-and-edf-self-containment.md b/docs/technical-documentation/solution/design/decision-iam-and-edf-self-containment.md
new file mode 100644
index 0000000..2ec75aa
--- /dev/null
+++ b/docs/technical-documentation/solution/design/decision-iam-and-edf-self-containment.md
@@ -0,0 +1,31 @@
+---
+title: eDF is self-contained and has its own IAM (WiP)
+weight: 2
+description: tbd
+---
+
+* Type: Proposal
+* Owner: Stephan Lo (stephan.lo@telekom.de)
+* Reviewers: EDF Architects
+* Status: Speculative, revision 0.1
+
+## Background
+
+tbd
+
+## Proposal
+
+==== 1 =====
+
+There is a core eDF which is self-contained and does not have any implemented dependency on external platforms.
+eDF depends on abstractions.
+Each embedding into customer infrastructure works with adapters which implement the abstraction.
+
+==== 2 =====
+
+eDF has its own IAM. This may either hold the principals and permissions itself, when there is no other IAM, or proxy and map them when integrated into external enterprise IAMs.
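The two variants above can be illustrated with a small sketch. Everything in it - the `internal-iam.txt` store, the `map_external` adapter, the principals - is hypothetical and only meant to show the 'hold itself' vs. 'proxy and map' idea, not any actual eDF code:

```shell
#!/usr/bin/env sh
# Illustrative sketch only - all names are hypothetical, not actual eDF code.
# The same permission lookup works whether principals are held by eDF itself
# or mapped from an external enterprise IAM through an adapter.
set -eu

# Variant 1: eDF holds principals and permissions itself.
cat > internal-iam.txt <<'EOF'
alice admin
bob developer
EOF

# Variant 2: an adapter proxies/maps external enterprise identities
# onto eDF principals.
map_external() {
  case "$1" in
    corp/a.smith) echo alice ;;
    corp/b.jones) echo bob ;;
    *) echo unknown ;;
  esac
}

role_of() {
  awk -v p="$1" '$1 == p { print $2 }' internal-iam.txt
}

# An externally authenticated user is mapped, then checked internally.
role_of "$(map_external corp/a.smith)" > result.txt
cat result.txt
```

The core permission check (`role_of`) stays identical in both variants; only the adapter in front of it changes per customer infrastructure.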
+
+
+## Reference
+
+Arch call from 4.12.24, Florian, Stefan, Stephan-Pierre
\ No newline at end of file
diff --git a/docs/technical-documentation/solution/design/proposal-local-deployment.md b/docs/technical-documentation/solution/design/proposal-local-deployment.md
new file mode 100644
index 0000000..3ef08c1
--- /dev/null
+++ b/docs/technical-documentation/solution/design/proposal-local-deployment.md
@@ -0,0 +1,23 @@
+---
+title: Agnostic EDF Deployment
+weight: 2
+description: The implementation of EDF must be Kubernetes provider agnostic
+---
+
+* Type: Proposal
+* Owner: Stephan Lo (stephan.lo@telekom.de)
+* Reviewers: EDF Architects
+* Status: Speculative, revision 0.1
+
+## Background
+
+EDF runs as a control plane - or let's say an orchestration plane, the correct wording is still to be defined - in a Kubernetes cluster.
+Right now we have at least ArgoCD as the controller of manifests, which we provide as CNOE stacks of packages and as standalone packages.
+
+## Proposal
+
+The implementation of EDF must be Kubernetes provider agnostic. Thus each provider-specific deployment dependency must be factored out into provider-specific definitions or deployment procedures.
+
+## Local deployment
+
+This implies that EDF must always be deployable into a local cluster, where by 'local' we mean a cluster which is under the full control of the platform engineer, e.g. a kind cluster on their laptop.
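As a sketch of what 'factored out into provider-specific definitions' could look like on disk, assuming a hypothetical layout of provider-neutral `base/` plus `overlays/<provider>/` directories (the script and directory names are illustrative, not actual EDF tooling):

```shell
#!/usr/bin/env sh
# Hypothetical layout: provider-neutral manifests in base/,
# provider-specific definitions in overlays/<provider>/.
set -eu

PROVIDER="${1:-kind}"   # 'kind' is just another provider, covering the local case

# Demo scaffolding so the sketch is self-contained:
mkdir -p base overlays/kind overlays/aks

if [ ! -d "overlays/$PROVIDER" ]; then
  echo "no overlay for provider '$PROVIDER'" >&2
  exit 1
fi

# A real implementation would now apply base/ plus the selected overlay
# (e.g. via kustomize) - elided here, since no cluster is assumed.
echo "deploying base + overlays/$PROVIDER"
```

The point of the sketch is that the entrypoint itself knows nothing provider-specific; a kind cluster on a laptop is selected exactly like any cloud provider.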
\ No newline at end of file
diff --git a/docs/technical-documentation/solution/design/proposal-stack-hydration.md b/docs/technical-documentation/solution/design/proposal-stack-hydration.md
new file mode 100644
index 0000000..adce110
--- /dev/null
+++ b/docs/technical-documentation/solution/design/proposal-stack-hydration.md
@@ -0,0 +1,28 @@
+---
+title: Agnostic Stack Definition
+weight: 2
+description: The implementation of EDF stacks must be Kubernetes provider agnostic by a templating/hydration mechanism
+---
+
+* Type: Proposal
+* Owner: Stephan Lo (stephan.lo@telekom.de)
+* Reviewers: EDF Architects
+* Status: Speculative, revision 0.1
+
+## Background
+
+When booting and reconciling, the 'final' stack-executing orchestrator (here: ArgoCD) needs to get rendered (or hydrated) representations of the manifests.
+
+It is either not possible or not wanted that the orchestrator itself resolves dependencies or configuration values.
+
+## Proposal
+
+The hydration takes place for all target clouds/Kubernetes providers. There is no 'default' or 'special' setup, like the kind version.
+
+## Local development
+
+This implies that in a development process there needs to be a build step hydrating the ArgoCD manifests for the targeted cloud.
+
+## Reference
+
+Discussion from Robert and Stephan-Pierre in the context of stack development - there should be an easy way to have locally changed stacks propagated into the locally running platform.
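A minimal sketch of such a hydration build step, assuming a hypothetical `__PROVIDER__` placeholder and a `rendered/<target>/` output layout (none of these names are the actual stack tooling):

```shell
#!/usr/bin/env sh
# Hydrate a stack manifest for one target provider at build time;
# the orchestrator (ArgoCD) would only ever see the rendered output.
set -eu

TARGET="${1:-kind}"   # no 'default' setup - kind is hydrated like any cloud

# A tiny manifest template standing in for a real stack:
cat > stack.tmpl.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: edf-stack
data:
  provider: "__PROVIDER__"
EOF

mkdir -p "rendered/$TARGET"
# The build step resolves configuration values, not the orchestrator:
sed "s/__PROVIDER__/$TARGET/" stack.tmpl.yaml > "rendered/$TARGET/stack.yaml"

grep provider "rendered/$TARGET/stack.yaml"
```

For local development this would run once per change, so that locally changed stacks land in the locally running platform already hydrated.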
\ No newline at end of file
diff --git a/docs/technical-documentation/solution/scenarios/_index.md b/docs/technical-documentation/solution/scenarios/_index.md
new file mode 100644
index 0000000..a7839e3
--- /dev/null
+++ b/docs/technical-documentation/solution/scenarios/_index.md
@@ -0,0 +1,6 @@
+---
+title: Scenarios
+weight: 1
+description: Usage scenarios and system architecture
+---
+
diff --git a/docs/technical-documentation/solution/scenarios/gitops/_index.md b/docs/technical-documentation/solution/scenarios/gitops/_index.md
new file mode 100644
index 0000000..b0191fb
--- /dev/null
+++ b/docs/technical-documentation/solution/scenarios/gitops/_index.md
@@ -0,0 +1,16 @@
+---
+title: Gitops
+weight: 1
+description: GitOps scenarios
+---
+
+WiP - work in progress.
+
+What kind of GitOps do we have with idpbuilder/CNOE?
+
+## References
+
+
+https://github.com/gitops-bridge-dev/gitops-bridge
+
+![alt text](image.png)
\ No newline at end of file
diff --git a/docs/technical-documentation/solution/scenarios/gitops/image.png b/docs/technical-documentation/solution/scenarios/gitops/image.png
new file mode 100644
index 0000000..452b382
Binary files /dev/null and b/docs/technical-documentation/solution/scenarios/gitops/image.png differ
diff --git a/docs/technical-documentation/solution/scenarios/orchestration/_index.md b/docs/technical-documentation/solution/scenarios/orchestration/_index.md
new file mode 100644
index 0000000..2ef6417
--- /dev/null
+++ b/docs/technical-documentation/solution/scenarios/orchestration/_index.md
@@ -0,0 +1,34 @@
+---
+title: Orchestration
+weight: 1
+description: Usage scenarios and system architecture of platform orchestration
+---
+
+WiP - work in progress.
+
+What deployment scenarios do we have with idpbuilder/CNOE?
+
+## References
+
+* Base URL of CNOE presentations: https://github.com/cnoe-io/presentations/tree/main
+
+### CNOE in EKS
+
+The next chart shows a system landscape of CNOE orchestration. 
+
+[2024-04-PlatformEngineering-DevOpsDayRaleigh.pdf](https://github.com/cnoe-io/presentations/blob/main/2024-04-PlatformEngineering-DevOpsDayRaleigh.pdf)
+
+Questions: What are the degrees of freedom in this chart? What variations with respect to environments and environment types exist?
+
+![alt text](image.png)
+
+### CNOE in AWS
+
+The next chart shows a context chart of CNOE orchestration.
+
+[reference-implementation-aws](https://github.com/cnoe-io/reference-implementation-aws)
+
+Questions: What are the degrees of freedom in this chart? What variations with respect to environments and environment types exist?
+
+
+![alt text](image-1.png)
\ No newline at end of file
diff --git a/docs/technical-documentation/solution/scenarios/orchestration/image-1.png b/docs/technical-documentation/solution/scenarios/orchestration/image-1.png
new file mode 100644
index 0000000..f839018
Binary files /dev/null and b/docs/technical-documentation/solution/scenarios/orchestration/image-1.png differ
diff --git a/docs/technical-documentation/solution/scenarios/orchestration/image.png b/docs/technical-documentation/solution/scenarios/orchestration/image.png
new file mode 100644
index 0000000..4c47795
Binary files /dev/null and b/docs/technical-documentation/solution/scenarios/orchestration/image.png differ
diff --git a/docs/technical-documentation/solution/tools/Backstage/Backstage setup tutorial/_index.md b/docs/technical-documentation/solution/tools/Backstage/Backstage setup tutorial/_index.md
new file mode 100644
index 0000000..d8cdba2
--- /dev/null
+++ b/docs/technical-documentation/solution/tools/Backstage/Backstage setup tutorial/_index.md
@@ -0,0 +1,60 @@
++++
+title = "Backstage Local Setup Tutorial"
+weight = 4
++++
+
+This document provides a comprehensive guide on the prerequisites and the process to set up and run Backstage locally on your machine.
+
+## Table of Contents
+
+1. [Prerequisites](#prerequisites)
+2. [Setting Up Backstage](#setting-up-backstage)
+3. 
[Run the Backstage Application](#run-the-backstage-application)
+
+## Prerequisites
+
+Before you start, make sure you have the following installed on your machine:
+
+1. **Node.js**: Backstage requires Node.js. You can download it from the [Node.js website](https://nodejs.org/). It is recommended to use the LTS version.
+
+2. **Yarn**: Backstage uses Yarn as its package manager. You can install it globally using npm:
+   ```bash
+   npm install --global yarn
+   ```
+
+3. **Git**
+4. **Docker**
+
+## Setting Up Backstage
+
+
+To install the Backstage Standalone app, you can use npx. npx is a tool that comes preinstalled with Node.js and lets you run commands straight from npm or other registries.
+
+```bash
+npx @backstage/create-app@latest
+```
+This command will create a new directory with a Backstage app inside. The wizard will ask you for the name of the app. A subdirectory with this name will be created in your current working directory.
+
+Below is a simplified layout of the files and folders generated when creating an app.
+```bash
+app
+├── app-config.yaml
+├── catalog-info.yaml
+├── package.json
+└── packages
+    ├── app
+    └── backend
+```
+
+- **app-config.yaml**: Main configuration file for the app. See Configuration for more information.
+- **catalog-info.yaml**: Catalog Entities descriptors. See Descriptor Format of Catalog Entities to get started.
+- **package.json**: Root package.json for the project. Note: Be sure that you don't add any npm dependencies here, as they should probably be installed in the intended workspace rather than in the root.
+- **packages/**: Lerna leaf packages or "workspaces". Everything here is going to be a separate package, managed by lerna.
+- **packages/app/**: A fully functioning Backstage frontend app that acts as a good starting point for you to get to know Backstage.
+- **packages/backend/**: We include a backend that helps power features such as Authentication, Software Catalog, Software Templates, and TechDocs, amongst other things.
+
+## Run the Backstage Application
+You can run it from the Backstage root directory by executing this command:
+```bash
+yarn dev
+```
diff --git a/docs/technical-documentation/solution/tools/Backstage/Exsisting Plugins/_index.md b/docs/technical-documentation/solution/tools/Backstage/Exsisting Plugins/_index.md
new file mode 100644
index 0000000..d449433
--- /dev/null
+++ b/docs/technical-documentation/solution/tools/Backstage/Exsisting Plugins/_index.md
@@ -0,0 +1,49 @@
++++
+title = "Existing Backstage Plugins"
+weight = 4
++++
+
+1. **Catalog**:
+   - Used for managing services and microservices, including registration, visualization, and the ability to track dependencies and relationships between services. It serves as a central directory for all services in an organization.
+
+2. **Docs**:
+   - Designed for creating and managing documentation, supporting formats such as Markdown. It helps teams organize and access technical and non-technical documentation in a unified interface.
+
+3. **API Docs**:
+   - Automatically generates API documentation based on OpenAPI specifications or other API definitions, ensuring that your API information is always up to date and accessible for developers.
+
+4. **TechDocs**:
+   - A tool for creating and publishing technical documentation. It is integrated directly into Backstage, allowing developers to host and maintain documentation alongside their projects.
+
+5. **Scaffolder**:
+   - Allows the rapid creation of new projects based on predefined templates, making it easier to deploy services or infrastructure with consistent best practices.
+
+6. **CI/CD**:
+   - Provides integration with CI/CD systems such as GitHub Actions and Jenkins, allowing developers to view build status, logs, and pipelines directly in Backstage.
+
+7. 
**Metrics**: + - Offers the ability to monitor and visualize performance metrics for applications, helping teams to keep track of key indicators like response times and error rates. + +8. **Snyk**: + - Used for dependency security analysis, scanning your codebase for vulnerabilities and helping to manage any potential security risks in third-party libraries. + +9. **SonarQube**: + - Integrates with SonarQube to analyze code quality, providing insights into code health, including issues like technical debt, bugs, and security vulnerabilities. + +10. **GitHub**: + - Enables integration with GitHub repositories, displaying information such as commits, pull requests, and other repository activity, making collaboration more transparent and efficient. + +11. **CircleCI**: + - Allows seamless integration with CircleCI for managing CI/CD workflows, giving developers insight into build pipelines, test results, and deployment statuses. + +12. **Kubernetes**: + - Provides tools to manage Kubernetes clusters, including visualizing pod status, logs, and cluster health, helping teams maintain and troubleshoot their cloud-native applications. + +13. **Cloud**: + - Includes plugins for integration with cloud providers like AWS and Azure, allowing teams to manage cloud infrastructure, services, and billing directly from Backstage. + +14. **OpenTelemetry**: + - Helps with monitoring distributed applications by integrating OpenTelemetry, offering powerful tools to trace requests, detect performance bottlenecks, and ensure application health. + +15. **Lighthouse**: + - Integrates Google Lighthouse to analyze web application performance, helping teams identify areas for improvement in metrics like load times, accessibility, and SEO. 
diff --git a/docs/technical-documentation/solution/tools/Backstage/General Information/_index.md b/docs/technical-documentation/solution/tools/Backstage/General Information/_index.md
new file mode 100644
index 0000000..54dabd1
--- /dev/null
+++ b/docs/technical-documentation/solution/tools/Backstage/General Information/_index.md
@@ -0,0 +1,24 @@
++++
+title = "Backstage Description"
+weight = 4
++++
+
+Backstage by Spotify can be seen as a Platform Portal. It is an open platform for building and managing internal developer tools, providing a unified interface for accessing various tools and resources within an organization.
+
+Key features of Backstage as a Platform Portal:
+
+* **Tool Integration**: Backstage allows for the integration of various tools used in the development process, such as CI/CD, version control systems, monitoring, and others, into a single interface.
+* **Service Management**: It offers the ability to register and manage services and microservices, as well as monitor their status and performance.
+* **Documentation and Learning Materials**: Backstage includes capabilities for storing and organizing documentation, making it easier for developers to access information.
+* **Golden Paths**: Backstage supports the concept of "Golden Paths," enabling teams to follow recommended practices for development and tool usage.
+* **Modularity and Extensibility**: The platform allows for the creation of plugins, enabling users to customize and extend Backstage's functionality to fit their organization's needs.
+
+Backstage provides developers with centralized and convenient access to essential tools and resources, making it an effective solution for supporting Platform Engineering and developing an internal platform portal. 
\ No newline at end of file
diff --git a/docs/technical-documentation/solution/tools/Backstage/Plugin Creation Tutorial/_index.md b/docs/technical-documentation/solution/tools/Backstage/Plugin Creation Tutorial/_index.md
new file mode 100644
index 0000000..a975456
--- /dev/null
+++ b/docs/technical-documentation/solution/tools/Backstage/Plugin Creation Tutorial/_index.md
@@ -0,0 +1,169 @@
++++
+title = "Plugin Creation Tutorial"
+weight = 4
++++
+Backstage plugins and functionality extensions should be written in TypeScript/Node.js, because Backstage itself is written in those languages.
+### General Algorithm for Adding a Plugin in Backstage
+
+1. **Create the Plugin**
+   To create a plugin in the project structure, you need to run the following command at the root of Backstage:
+
+   ```bash
+   yarn new --select plugin
+   ```
+
+   The wizard will ask you for the plugin ID, which will be its name. After that, a template for the plugin will be automatically created in the directory `plugins/{plugin id}`. Then install the required dependencies; in the example case this is `axios` for API requests.
+   Example:
+   ```bash
+   yarn add axios
+   ```
+2. **Define the Plugin’s Functionality**
+   In the newly created plugin directory, focus on defining the plugin's core functionality. This is where you will create components that handle the logic and user interface (UI) of the plugin. Place these components in the `plugins/{plugin_id}/src/components/` folder, and if your plugin interacts with external data or APIs, manage those interactions within these components.
+
+3. **Set Up Routes**
+   In the main configuration file of your plugin (typically `plugins/{plugin_id}/src/routes.ts`), set up the routes. Use `createRouteRef()` to define route references, and link them to the appropriate components in your `plugins/{plugin_id}/src/components/` folder. Each route will determine which component renders for specific parts of the plugin.
+
+4. 
**Register the Plugin**
+   Navigate to the `packages/app` folder and import your plugin into the main application. Register your plugin in the `routes` element within `packages/app/src/App.tsx` to integrate it into the Backstage system. This creates a route for your plugin's page.
+
+5. **Add Plugin to the Sidebar Menu**
+   To make the plugin accessible through the Backstage sidebar, modify the sidebar component in `packages/app/src/components/Root/Root.tsx`. Add a new sidebar item linked to your plugin's route reference, allowing users to easily access the plugin through the menu.
+
+6. **Test the Plugin**
+   Run the Backstage development server using `yarn dev` and navigate to your plugin's route via the sidebar or directly through its URL. Ensure that the plugin's functionality works as expected.
+
+### Example
+All steps will be demonstrated using a simple example plugin, which requests JSON files from the API of jsonplaceholder.typicode.com and displays them on a page.
+
+1. Creating the test plugin:
+   ```bash
+   yarn new --select plugin
+   ```
+   Adding the required dependencies. In this case only `axios` is needed, for API requests:
+   ```bash
+   yarn add axios
+   ```
+2. 
Implement the code of the plugin component in `plugins/{plugin-id}/src/{Component name}/(unknown).tsx`
+   ```javascript
+   import React, { useState } from 'react';
+   import axios from 'axios';
+   import { Typography, Grid } from '@material-ui/core';
+   import {
+     InfoCard,
+     Header,
+     Page,
+     Content,
+     ContentHeader,
+     SupportButton,
+   } from '@backstage/core-components';
+
+   export const TestComponent = () => {
+     const [posts, setPosts] = useState([]);
+     const [loading, setLoading] = useState(false);
+     const [error, setError] = useState(null);
+
+     const fetchPosts = async () => {
+       setLoading(true);
+       setError(null);
+
+       try {
+         const response = await axios.get('https://jsonplaceholder.typicode.com/posts');
+         setPosts(response.data);
+       } catch (err) {
+         setError('Error while fetching posts');
+       } finally {
+         setLoading(false);
+       }
+     };
+
+     return (
+ A description of your plugin goes here. +
+ + + + Click to load posts from the API. + + + + + + + This card contains information about the posts fetched from the API. + + {loading && Загрузка...} + {error && {error}} + {!loading && !posts.length && ( + + )} + + + + {posts.length > 0 && ( + +
    + {posts.map(post => ( +
  • + {post.title} + {post.body} +
  • + ))} +
+
+ )} +
+
+
+
+ ); + }; + + ``` + +3. Setup routs in plugins/{plugin_id}/src/routs.ts + ```javascript + import { createRouteRef } from '@backstage/core-plugin-api'; + + export const rootRouteRef = createRouteRef({ + id: 'test-plugin', + }); + ``` + +4. Register the plugin in `packages/app/src/App.tsx` in routes + Import of the plugin: + ```javascript + import { TestPluginPage } from '@internal/backstage-plugin-test-plugin'; + ``` + + Adding route: + ```javascript + const routes = ( + + ... //{Other Routs} + } /> + + ) + ``` + +5. Add Item to sidebar menu of the backstage in `packages/app/src/components/Root/Root.tsx`. This should be added in to Root object as another SidebarItem + ```javascript + export const Root = ({ children }: PropsWithChildren<{}>) => ( + + + ... //{Other sidebar items} + + + {children} + + ); + ``` + +6. Plugin is ready. Run the application + ```bash + yarn dev + ``` + +![example](example_1.png) +![example](example_2.png) \ No newline at end of file diff --git a/docs/technical-documentation/solution/tools/Backstage/Plugin Creation Tutorial/example_1.png b/docs/technical-documentation/solution/tools/Backstage/Plugin Creation Tutorial/example_1.png new file mode 100644 index 0000000..532048c Binary files /dev/null and b/docs/technical-documentation/solution/tools/Backstage/Plugin Creation Tutorial/example_1.png differ diff --git a/docs/technical-documentation/solution/tools/Backstage/Plugin Creation Tutorial/example_2.png b/docs/technical-documentation/solution/tools/Backstage/Plugin Creation Tutorial/example_2.png new file mode 100644 index 0000000..4c162fd Binary files /dev/null and b/docs/technical-documentation/solution/tools/Backstage/Plugin Creation Tutorial/example_2.png differ diff --git a/docs/technical-documentation/solution/tools/Backstage/_index.md b/docs/technical-documentation/solution/tools/Backstage/_index.md new file mode 100644 index 0000000..ef168be --- /dev/null +++ b/docs/technical-documentation/solution/tools/Backstage/_index.md @@ -0,0 
+1,5 @@
---
title: Backstage
weight: 2
description: Here you will find information about Backstage, its plugins and usage tutorials
---
diff --git a/docs/technical-documentation/solution/tools/CNOE/CNOE-competitors/_index.md b/docs/technical-documentation/solution/tools/CNOE/CNOE-competitors/_index.md
new file mode 100644
index 0000000..22a54ea
--- /dev/null
+++ b/docs/technical-documentation/solution/tools/CNOE/CNOE-competitors/_index.md
@@ -0,0 +1,68 @@
---
title: Analysis of CNOE competitors
weight: 1
description: We compare CNOE - which we see as an orchestrator - with other platform orchestration tools like Kratix and Humanitec
---

## Kratix

Kratix is a Kubernetes-native framework that helps platform engineering teams automate the provisioning and management of infrastructure and services through custom-defined abstractions called Promises. It allows teams to extend Kubernetes functionality and provide resources in a self-service manner to developers, streamlining the delivery and management of workloads across environments.

### Concepts
Key concepts of Kratix:
- Workload:
This is an abstraction representing any application or service that needs to be deployed within the infrastructure. It defines the requirements and dependent resources necessary to execute this task.
- Promise:
A "Promise" is a ready-to-use infrastructure or service package. Promises allow developers to request specific resources (such as databases, storage, or computing power) through the standard Kubernetes interface. It’s similar to an operator in Kubernetes but more universal and flexible.
Kratix simplifies the development and delivery of applications by automating the provisioning and management of infrastructure and resources through simple Kubernetes APIs.

### Pros of Kratix:
- Resource provisioning automation. Kratix simplifies infrastructure creation for developers through the abstraction of "Promises."
This means developers can simply request the necessary resources (like databases, message queues) without dealing with the intricacies of infrastructure management.

- Flexibility and adaptability. Platform teams can customize and adapt Kratix to specific needs by creating custom Promises for various services, allowing the infrastructure to meet the specific requirements of the organization.

- Unified resource request interface. Developers can use a single API (Kubernetes) to request resources, simplifying interaction with infrastructure and reducing complexity when working with different tools and systems.

### Cons of Kratix:
- Although Kratix offers great flexibility, it can also lead to more complex setup and platform management processes. Creating custom Promises and configuring their behavior requires time and effort.

- Kubernetes dependency. Kratix relies on Kubernetes, which makes it less applicable in environments that don’t use Kubernetes or containerization technologies. It might also lead to integration challenges if an organization uses other solutions.

- Limited ecosystem. Kratix doesn’t have as mature an ecosystem as some other infrastructure management solutions (e.g., Terraform, Pulumi). This may limit the availability of ready-made solutions and tools, increasing the amount of manual work when implementing Kratix.


## Humanitec

Humanitec is an Internal Developer Platform (IDP) that helps platform engineering teams automate the provisioning and management of infrastructure and services through dynamic configuration and environment management.

It allows teams to extend their infrastructure capabilities and provide resources in a self-service manner to developers, streamlining the deployment and management of workloads across various environments.
### Concepts
Key concepts of Humanitec:
- Application Definition:
  This is an abstraction where developers define their application, including its services, environments, and dependencies. It abstracts away infrastructure details, allowing developers to focus on building and deploying their applications.

- Dynamic Configuration Management:
  Humanitec automatically manages the configuration of applications and services across multiple environments (e.g., development, staging, production). It ensures consistency and alignment of configurations as applications move through different stages of deployment.

Humanitec simplifies the development and delivery process by providing self-service deployment options while maintaining centralized governance and control for platform teams.

### Pros of Humanitec:
- Resource provisioning automation. Humanitec automates infrastructure and environment provisioning, allowing developers to focus on building and deploying applications without worrying about manual configuration.

- Dynamic environment management. Humanitec manages application configurations across different environments, ensuring consistency and reducing manual configuration errors.

- Golden Paths. Best-practice workflows and processes that guide developers through infrastructure provisioning and application deployment. This ensures consistency and reduces cognitive load by providing a set of recommended practices.

- Unified resource management interface. Developers can use Humanitec’s interface to request resources and deploy applications, reducing complexity and improving the development workflow.

### Cons of Humanitec:
- Humanitec is commercially licensed software.

- Integration challenges. Humanitec’s dependency on specific cloud-native environments can create challenges for organizations with diverse infrastructures or those using legacy systems.

- Cost.
Depending on usage, Humanitec might introduce additional costs related to the implementation of an Internal Developer Platform, especially for smaller teams.

- Harder to customise.
diff --git a/docs/technical-documentation/solution/tools/CNOE/_index.md b/docs/technical-documentation/solution/tools/CNOE/_index.md
new file mode 100644
index 0000000..3d41c12
--- /dev/null
+++ b/docs/technical-documentation/solution/tools/CNOE/_index.md
@@ -0,0 +1,4 @@
---
title: CNOE
description: CNOE is a platform building orchestrator, which we chose in 2024, at least to start with, to build the EDF
---
diff --git a/docs/technical-documentation/solution/tools/CNOE/argocd/_index.md b/docs/technical-documentation/solution/tools/CNOE/argocd/_index.md
new file mode 100644
index 0000000..6c0da5a
--- /dev/null
+++ b/docs/technical-documentation/solution/tools/CNOE/argocd/_index.md
@@ -0,0 +1,141 @@
---
title: ArgoCD
weight: 30
description: A description of ArgoCD and its role in CNOE
---

## What is ArgoCD?

ArgoCD is a Continuous Delivery tool for kubernetes based on GitOps principles.

> ELI5: ArgoCD is an application running in kubernetes which monitors Git
> repositories containing some sort of kubernetes manifests and automatically
> deploys them to some configured kubernetes clusters.

From ArgoCD's perspective, applications are defined as custom resource definitions within the kubernetes clusters that ArgoCD monitors. Such a definition describes a source git repository that contains kubernetes manifests, in the form of a helm chart, kustomize, jsonnet definitions or plain yaml files, as well as a target kubernetes cluster and namespace the manifests should be applied to. Thus, ArgoCD is capable of deploying applications to various (remote) clusters and namespaces.

ArgoCD monitors both the source and the destination. It applies changes from the git repository that acts as the source of truth for the destination as soon as they occur, i.e.
if a change was pushed to the git repository, the change is applied to the kubernetes destination by ArgoCD. Subsequently, it checks whether the desired state was established. For example, it verifies that all resources were created, enough replicas started, and that all pods are in the `running` state and healthy.

## Architecture

### Core Components

An ArgoCD deployment consists of three main components:

#### Application Controller

The application controller is a kubernetes operator that synchronizes the live state within a kubernetes cluster with the desired state derived from the git sources. It monitors the live state, can detect deviations, and perform corrective actions. Additionally, it can execute hooks on life cycle stages such as pre- and post-sync.

#### Repository Server

The repository server interacts with git repositories and caches their state to reduce the amount of polling necessary. Furthermore, it is responsible for generating the kubernetes manifests from the resources within the git repositories, i.e. executing helm or jsonnet templates.

#### API Server

The API Server is a REST/gRPC service that allows the Web UI and CLI, as well as other API clients, to interact with the system. It also acts as the callback for webhooks, particularly from Git repository platforms such as GitHub or Gitlab, to reduce repository polling.

### Others

The system primarily stores its configuration as kubernetes resources. Thus, other external storage is not vital.

Redis
: A Redis store is optional but recommended to be used as a cache to reduce load on ArgoCD components and connected systems, e.g. git repositories.

ApplicationSetController
: The ApplicationSet Controller is, similar to the Application Controller, a kubernetes operator that can deploy applications based on parameterized application templates.
This allows the deployment of different versions of an application into various environments from a single template.

### Overview

![Conceptual Architecture](./argocd_architecture.webp)

![Core components](./argocd-core-components.webp)

## Role in CNOE

ArgoCD is one of the core components besides gitea/forgejo that is being bootstrapped by the idpbuilder. Future project creation, e.g. through backstage, relies on the availability of ArgoCD.

After the initial bootstrapping phase, effectively all components in the stack that are deployed in kubernetes are managed by ArgoCD. This includes the bootstrapped components of gitea and ArgoCD, which are onboarded afterward. Thus, the idpbuilder is only necessary in the bootstrapping phase of the platform and the technical coordination of all components shifts to ArgoCD eventually.

In general, the creation of new projects and applications should take place in backstage. It is a catalog of software components and best practices that allows developers to grasp and manage their software portfolio. Underneath, however, the deployment of applications and platform components is managed by ArgoCD. Among others, backstage creates Application CRDs to instruct ArgoCD to manage deployments and subsequently report on their current state.

## Glossary

_Initially shamelessly copied from [the docs](https://argo-cd.readthedocs.io/en/stable/core_concepts/)_

Application
: A group of Kubernetes resources as defined by a manifest. This is a Custom Resource Definition (CRD).

ApplicationSet
: A CRD that is a template that can create multiple parameterized Applications.

Application source type
: Which Tool is used to build the application.

Configuration management tool
: See Tool.

Configuration management plugin
: A custom tool.

Health
: The health of the application, is it running correctly? Can it serve requests?

Live state
: The live state of that application.
What pods etc. are deployed.

Refresh
: Compare the latest code in Git with the live state. Figure out what is different.

Sync
: The process of making an application move to its target state. E.g. by applying changes to a Kubernetes cluster.

Sync status
: Whether or not the live state matches the target state. Is the deployed application the same as Git says it should be?

Sync operation status
: Whether or not a sync succeeded.

Target state
: The desired state of an application, as represented by files in a Git repository.

Tool
: A tool to create manifests from a directory of files. E.g. Kustomize. See Application Source Type.
diff --git a/docs/technical-documentation/solution/tools/CNOE/argocd/argocd-core-components.webp b/docs/technical-documentation/solution/tools/CNOE/argocd/argocd-core-components.webp
new file mode 100644
index 0000000..6140f51
Binary files /dev/null and b/docs/technical-documentation/solution/tools/CNOE/argocd/argocd-core-components.webp differ
diff --git a/docs/technical-documentation/solution/tools/CNOE/argocd/argocd_architecture.webp b/docs/technical-documentation/solution/tools/CNOE/argocd/argocd_architecture.webp
new file mode 100644
index 0000000..adee037
Binary files /dev/null and b/docs/technical-documentation/solution/tools/CNOE/argocd/argocd_architecture.webp differ
diff --git a/docs/technical-documentation/solution/tools/CNOE/idpbuilder/_index.md b/docs/technical-documentation/solution/tools/CNOE/idpbuilder/_index.md
new file mode 100644
index 0000000..72291e2
--- /dev/null
+++ b/docs/technical-documentation/solution/tools/CNOE/idpbuilder/_index.md
@@ -0,0 +1,6 @@
---
title: idpbuilder
weight: 3
description: Here you will find information about idpbuilder installation and usage
---

diff --git a/docs/technical-documentation/solution/tools/CNOE/idpbuilder/hostname-routing-proxy.png b/docs/technical-documentation/solution/tools/CNOE/idpbuilder/hostname-routing-proxy.png
new file mode 100644 index
0000000..d100481
Binary files /dev/null and b/docs/technical-documentation/solution/tools/CNOE/idpbuilder/hostname-routing-proxy.png differ
diff --git a/docs/technical-documentation/solution/tools/CNOE/idpbuilder/hostname-routing.png b/docs/technical-documentation/solution/tools/CNOE/idpbuilder/hostname-routing.png
new file mode 100644
index 0000000..a6b9742
Binary files /dev/null and b/docs/technical-documentation/solution/tools/CNOE/idpbuilder/hostname-routing.png differ
diff --git a/docs/technical-documentation/solution/tools/CNOE/idpbuilder/http-routing.md b/docs/technical-documentation/solution/tools/CNOE/idpbuilder/http-routing.md
new file mode 100644
index 0000000..f2da697
--- /dev/null
+++ b/docs/technical-documentation/solution/tools/CNOE/idpbuilder/http-routing.md
@@ -0,0 +1,178 @@
---
title: HTTP Routing
weight: 100
---

### Routing switch

The idpbuilder supports creating platforms using either path based or subdomain based routing:

```shell
idpbuilder create --log-level debug --package https://github.com/cnoe-io/stacks//ref-implementation
```

```shell
idpbuilder create --use-path-routing --log-level debug --package https://github.com/cnoe-io/stacks//ref-implementation
```

However, even though argo eventually reports all deployments as green, the demo is not entirely functional (verification?). This is due to hardcoded values that, for example, point to the path-routed location of gitea to access git repos. Thus, backstage might not be able to access them.

Within the demo / ref-implementation, a simple search & replace is suggested to change URLs to fit the given environment. Proper scripting/templating could take care of that, as the hostnames and necessary properties should be available. This is, however, a tedious and repetitive task one has to keep in mind throughout the entire system, which might lead to an explosion of config options in the future.
Code that addresses correct routing is located in both the stack templates and the idpbuilder code.

### Cluster internal routing

For the most part, components communicate with either the cluster API using the default DNS or with each other via http(s) using the public DNS/hostname (+ path-routing scheme). The latter is necessary due to configs that are visible and modifiable by users. This includes for example the argocd config for components that has to sync to a gitea git repo. Using the same URL for internal and external resolution is imperative.

The idpbuilder achieves transparent internal DNS resolution by overriding the public DNS name in the cluster's internal DNS server (coreDNS). Subsequently, within the cluster, requests to the public hostnames resolve to the IP of the internal ingress controller service. Thus, internal and external requests take a similar path and run through proper routing (rewrites, ssl/tls, etc).

### Conclusion

One has to keep in mind that some specific app features might not work properly, or only with hacks, when using path based routing (e.g. the docker registry in gitea). Furthermore, supporting multiple setup strategies will become cumbersome as the platform grows. We should probably only support one type of setup to keep the system as simple as possible, but allow modification if necessary.

DNS solutions like `nip.io` or the already used `localtest.me` mitigate the need for path based routing.

## Excerpt

HTTP is a cornerstone of the internet due to its high flexibility. Starting from HTTP/1.1, each request in the protocol contains, among other things, a path and a `Host` name in its header. While an HTTP request is sent to a single IP address / server, these two pieces of data allow (distributed) systems to handle requests in various ways.
```shell
$ curl -v http://google.com/something > /dev/null

* Connected to google.com (2a00:1450:4001:82f::200e) port 80
* using HTTP/1.x
> GET /something HTTP/1.1
> Host: google.com
> User-Agent: curl/8.10.1
> Accept: */*
...
```

### Path-Routing

Imagine requesting `http://myhost.foo/some/file.html`. In a simple setup, the web server that `myhost.foo` resolves to would serve static files from some directory, `//some/file.html`.

In more complex systems, one might have multiple services that fulfill various roles, for example a service that generates HTML sites of articles from a CMS and a service that can convert images into various formats. Using path-routing, both services are available on the same host from a user's POV.

An article served from `http://myhost.foo/articles/news1.html` would be generated by the article service and points to an image `http://myhost.foo/images/pic.jpg` which in turn is generated by the image converter service. When a user sends an HTTP request to `myhost.foo`, they hit a reverse proxy which forwards the request based on the requested path to some other system, waits for a response, and subsequently returns that response to the user.

![Path-Routing Example](../path-routing.png)

Such a setup hides the complexity from the user and allows the creation of large distributed, scalable systems acting as a unified entity from the outside. Since everything is served on the same host, the browser is inclined to trust all downstream services. This allows for easier 'communication' between services through the browser. For example, cookies could be valid for the entire host and thus authentication data could be forwarded to requested downstream services without the user having to explicitly re-authenticate.

Furthermore, services 'know' their user-facing location by knowing their path and the paths to other services, as paths are usually set as a convention and / or hard-coded.
In practice, this makes configuration of the entire system somewhat easier, especially if you have various environments for testing, development, and production. The hostname of the system does not matter as one can use hostname-relative URLs, e.g. `/some/service`.

Load balancing is also easily achievable by multiplying the number of service instances. Most reverse proxy systems are able to apply various load balancing strategies to forward traffic to downstream systems.

Problems might arise if downstream systems are not built with path-routing in mind. Some systems require being served from the root of a domain, see for example the container registry spec.


### Hostname-Routing

Each downstream service in a distributed system is served from a different host, typically a subdomain, e.g. `serviceA.myhost.foo` and `serviceB.myhost.foo`. This gives services full control over their respective host, and even allows them to do path-routing within each system. Moreover, hostname-routing allows the entire system to create more flexible and powerful routing schemes in terms of scalability. Intra-system communication becomes somewhat harder as the browser treats each subdomain as a separate host, shielding cookies, for example, from one another.

Each host that serves some services requires a DNS entry that has to be published to the clients (from some DNS server). Depending on the environment this can become quite tedious, as DNS resolution on the internet and intranets might have to deviate. This applies to intra-cluster communication as well, as seen with the idpbuilder's platform. In this case, external DNS resolution has to be replicated within the cluster to be able to use the same URLs to address, for example, gitea.

The following example depicts DNS-only routing. By defining separate DNS entries for each service / subdomain, requests are resolved to the respective servers.
In theory, no additional infrastructure is necessary to route user traffic to each service. However, as services are completely separated, other infrastructure like authentication possibly has to be duplicated.

![DNS-only routing](../hostname-routing.png)

When using hostname based routing, one does not have to set different IPs for each hostname. Instead, having multiple DNS entries pointing to the same set of IPs allows re-using existing infrastructure. As shown below, a reverse proxy is able to forward requests to downstream services based on the `Host` request header. This way a specific hostname can be forwarded to a defined service.

![Hostname Proxy](../hostname-routing-proxy.png)

At the same time, one could imagine a multi-tenant system that differentiates customer systems by name, e.g. `tenant-1.cool.system` and `tenant-2.cool.system`. Configured as a wildcard-style domain, `*.cool.system` could point to a reverse proxy that forwards requests to a tenant's instance of a system, allowing re-use of central infrastructure while still hosting separate systems per tenant.


The implicit dependency on DNS resolution generally makes this kind of routing more complex and error-prone, as changes to DNS server entries are not always possible or modifiable by everyone. Also, local changes to your `/etc/hosts` file are a constant pain and should be seen as a dirty hack. As mentioned above, dynamic DNS solutions like `nip.io` are often helpful in this case.

### Conclusion

Path and hostname based routing are the two most common methods of HTTP traffic routing. They can be used separately, but more often they are used in conjunction. Due to HTTP's versatility, other forms of HTTP routing, for example based on the `Content-Type` header, are also very common.
diff --git a/docs/technical-documentation/solution/tools/CNOE/idpbuilder/installation/_index.md b/docs/technical-documentation/solution/tools/CNOE/idpbuilder/installation/_index.md
new file mode 100644
index 0000000..d919ab5
--- /dev/null
+++ b/docs/technical-documentation/solution/tools/CNOE/idpbuilder/installation/_index.md
@@ -0,0 +1,351 @@
+++
title = "Installation of idpbuilder"
weight = 1
+++

## Local installation with KIND Kubernetes

The idpbuilder uses KIND as its Kubernetes cluster. It is suggested to use a virtual machine for the installation. MMS Linux clients are unable to execute KIND natively on the local machine because of network problems. Pods, for example, can't connect to the internet.

Windows and Mac users already utilize a virtual machine for the Docker Linux environment.

### Prerequisites

- Docker Engine
- Go
- kubectl
- kind

### Build process

For building idpbuilder the source code needs to be downloaded and compiled:

```
git clone https://github.com/cnoe-io/idpbuilder.git
cd idpbuilder
go build
```

The idpbuilder binary will be created in the current directory.

### Start idpbuilder

To start the idpbuilder binary execute the following command:

```
./idpbuilder create --use-path-routing --log-level debug --package https://github.com/cnoe-io/stacks//ref-implementation
```

### Logging into ArgoCD

At the end of the idpbuilder execution a link to the installed ArgoCD is shown. The credentials for access can be obtained by executing:

```
./idpbuilder get secrets
```

### Logging into KIND

A Kubernetes config is created in the default location `$HOME/.kube/config`. Careful management of the Kubernetes config is recommended so as not to unintentionally delete access to other clusters like the OSC.
To show all running KIND nodes execute:

```
kubectl get nodes -o wide
```

To see all running pods:

```
kubectl get pods -o wide
```

### Next steps

Follow this documentation: https://github.com/cnoe-io/stacks/tree/main/ref-implementation

### Delete the idpbuilder KIND cluster

The cluster can be deleted by executing:

```
idpbuilder delete cluster
```

## Remote installation into a bare metal Kubernetes instance

CNOE provides two implementations of an IDP:

- Amazon AWS implementation
- KIND implementation

Neither is usable to run on bare metal or an OSC instance. The Amazon implementation is complex and makes use of Terraform, which is currently not supported by either bare metal or OSC. Therefore the KIND implementation is used and customized to support the idpbuilder installation. The idpbuilder is also doing some network magic which needs to be replicated.

Several prerequisites have to be provided to support the idpbuilder on bare metal or the OSC:

- Kubernetes dependencies
- Network dependencies
- Changes to the idpbuilder

### Prerequisites

Talos Linux is chosen for a bare metal Kubernetes instance.

- talosctl
- Go
- Docker Engine
- kubectl
- kustomize
- helm
- nginx

As soon as the idpbuilder works correctly on bare metal, the next step is to apply it to an OSC instance.
#### Add *.cnoe.localtest.me to hosts file

Append these lines to `/etc/hosts`:

```
127.0.0.1 gitea.cnoe.localtest.me
127.0.0.1 cnoe.localtest.me
```

#### Install nginx and configure it

Install nginx by executing:

```
sudo apt install nginx
```

Replace `/etc/nginx/sites-enabled/default` with the following content:

```
server {
    listen 8443 ssl default_server;
    listen [::]:8443 ssl default_server;

    include snippets/snakeoil.conf;

    location / {
        proxy_pass http://10.5.0.20:80;
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Start nginx by executing:

```
sudo systemctl enable nginx
sudo systemctl restart nginx
```

#### Building idpbuilder

For building idpbuilder the source code needs to be downloaded and compiled:

```
git clone https://github.com/cnoe-io/idpbuilder.git
cd idpbuilder
go build
```

The idpbuilder binary will be created in the current directory.

#### Configure VS Code launch settings

Open the idpbuilder folder in VS Code:

```
code .
```

Create a new launch setting. Add the `"args"` parameter to the launch setting:

```
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch Package",
            "type": "go",
            "request": "launch",
            "mode": "auto",
            "program": "${fileDirname}",
            "args": ["create", "--use-path-routing", "--package", "https://github.com/cnoe-io/stacks//ref-implementation"]
        }
    ]
}
```

#### Create the Talos bare metal Kubernetes instance

Talos by default will create docker containers, similar to KIND.
Create the cluster by executing:

```
talosctl cluster create
```

#### Install local path provisioning (storage)

```
mkdir -p localpathprovisioning
cd localpathprovisioning
cat > localpathprovisioning.yaml <
```

> Validation is used when you check your approach before actually executing an
> action.

Examples:

- Form validation before processing the data
- Compiler checking syntax
- Rust's borrow checker

> Verification describes testing if your 'thing' complies with your spec

Examples:

- Unit tests
- Testing availability (ping, curl health check)
- Checking a ZKP of some computation

---

## In CNOE

It seems that both validation and verification within the CNOE framework are not actually handled by some explicit component but should be addressed throughout the system and workflows.

As stated in the [docs](https://cnoe.io/docs/intro/capabilities/validation), validation takes place in all parts of the stack by enforcing strict API usage and policies (signing, mitigations, security scans etc., see the usage of kyverno for example), and using code generation (proven code), linting, formatting, LSP. Consequently, validation of source code, templates, etc. is more of a best practice than a hard fact or feature, and it is up to the user to incorporate these practices into their workflows and pipelines. This is probably due to the complexity of the entire stack and the individual properties of each component and application.

Verification of artifacts and deployments actually exists in a somewhat similar state. The current CNOE reference-implementation does not provide sufficient verification tooling.

However, as stated in the [docs](https://cnoe.io/docs/reference-implementation/integrations/verification), within the framework `cnoe-cli` is capable of extremely limited verification of artifacts within kubernetes.
The same verification is also available as a step
+within a backstage
+[plugin](https://github.com/cnoe-io/plugin-scaffolder-actions). This is pretty
+much just a wrapper of the cli tool. The tool consumes CRD-like structures
+defining the state of pods and CRDs and checks for their existence within a
+live cluster ([example](https://github.com/cnoe-io/cnoe-cli/blob/main/pkg/cmd/prereq/ack-s3-prerequisites.yaml)).
+
+Depending on the aspiration of 'verification', this check is rather superficial
+and might only suffice as an initial smoke test. Furthermore, it seems like the
+feature is not actually used within the CNOE stacks repo.
+
+For a live product, more in-depth verification tools and schemes are necessary
+to verify the correct configuration and authenticity of workloads, which is, in
+the context of traditional cloud systems, only achievable to a limited degree.
+
+Existing tools within the stack, e.g. Argo, provide some verification
+capabilities, but further investigation into the general topic is necessary.
diff --git a/docs/technical-documentation/solution/tools/Crossplane/_index.md b/docs/technical-documentation/solution/tools/Crossplane/_index.md
new file mode 100644
index 0000000..a6f2168
--- /dev/null
+++ b/docs/technical-documentation/solution/tools/Crossplane/_index.md
@@ -0,0 +1,4 @@
+---
+title: Crossplane
+description: Crossplane is a tool to provision cloud resources. It can also act as a backend for platform orchestrators.
+---
diff --git a/docs/technical-documentation/solution/tools/Crossplane/provider-kind/_index.md b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/_index.md
new file mode 100644
index 0000000..c90fc5c
--- /dev/null
+++ b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/_index.md
@@ -0,0 +1,764 @@
+---
+title: How to develop a crossplane kind provider
+weight: 1
+description: A provider-kind allows using crossplane locally
+---
+
+To support local development and usage of crossplane compositions, a crossplane provider is needed.
+Every big hyperscaler already has support in crossplane (e.g. provider-gcp and provider-aws).
+
+Each provider has two main parts, the provider config and implementations of the cloud resources.
+
+The provider config takes the credentials to log into the cloud provider and provides a token
+(e.g. a kube config or even a service account) that the implementations can use to provision cloud resources.
+
+The implementations of the cloud resources reflect each type of cloud resource; typical resources are:
+
+- S3 Bucket
+- Nodepool
+- VPC
+- GkeCluster
+
+## Architecture of provider-kind
+
+To have the crossplane concepts applied, the provider-kind consists of two components: kindserver and provider-kind.
+
+The kindserver is used to manage local kind clusters. It provides an HTTP REST interface to create, delete and get information about a running cluster, with an Authorization HTTP header field used as a password:
+
+![kindserver_interface](./kindserver_interface.png)
+
+The two properties to connect the provider-kind to kindserver are the IP address and password of kindserver.
The IP address is required because the kindserver needs to be executed outside the kind cluster, directly on the local machine, as it needs to control
+kind itself:
+
+![kindserver_provider-kind](./kindserver_provider-kind.png)
+
+The provider-kind provides two crossplane elements, the `ProviderConfig` and `KindCluster` as the (only) cloud resource. The
+`ProviderConfig` is configured with the IP address and password of the running kindserver. The `KindCluster` type is configured
+to use the provided `ProviderConfig`. Kind clusters can be managed by adding and removing kubernetes manifests of type
+`KindCluster`. The crossplane reconciliation loop uses the kindserver HTTP GET method to see whether a new cluster needs to be
+created by HTTP POST or removed by HTTP DELETE.
+
+The password used by `ProviderConfig` is configured as a kubernetes secret, while the kindserver IP address is configured
+inside the `ProviderConfig` as the field endpoint.
+
+When provider-kind creates a new cluster by processing a `KindCluster` manifest, the two providers which are used to deploy applications, provider-helm and provider-kubernetes, can be configured to use the `KindCluster`.
+
+![provider-kind_providerconfig](./provider-kind_providerconfig.png)
+
+A crossplane composition can be created by concatenating different providers and their objects. A composition is managed as a
+custom resource definition and defined in a single file.
+
+![composition](./composition.png)
+
+## Configuration
+
+Two kubernetes manifests are defined by provider-kind: `ProviderConfig` and `KindCluster`. The third needed kubernetes
+object is a secret.
+
+The following inputs are needed when developing provider-kind:
+
+- kindserver password as a kubernetes secret
+- endpoint, the IP address of the kindserver as a detail of `ProviderConfig`
+- kindConfig, the kind configuration file as a detail of `KindCluster`
+
+The following outputs are produced:
+
+- kubernetesVersion, kubernetes version of a created kind cluster as a detail of `KindCluster`
+- internalIP, IP address of a created kind cluster as a detail of `KindCluster`
+- readiness as a detail of `KindCluster`
+- kube config of a created kind cluster as a kubernetes secret reference of `KindCluster`
+
+### Inputs
+
+#### kindserver password
+
+The kindserver password needs to be defined first. It is realized as a kubernetes secret and contains the password
+which the kindserver has been configured with:
+
+```
+apiVersion: v1
+data:
+  credentials: MTIzNDU=
+kind: Secret
+metadata:
+  name: kind-provider-secret
+  namespace: crossplane-system
+type: Opaque
+```
+
+#### endpoint
+
+The IP address of the kindserver `endpoint` is configured in the provider-kind `ProviderConfig`. This config also references the kindserver password (`kind-provider-secret`):
+
+```
+apiVersion: kind.crossplane.io/v1alpha1
+kind: ProviderConfig
+metadata:
+  name: kind-provider-config
+spec:
+  credentials:
+    source: Secret
+    secretRef:
+      namespace: crossplane-system
+      name: kind-provider-secret
+      key: credentials
+  endpoint:
+    url: https://172.18.0.1:7443/api/v1/kindserver
+```
+
+It is suggested that the kindserver runs on the IP of the docker host, so that all kind clusters can access it without extra routing.
+
+#### kindConfig
+
+The kind config is provided as the field `kindConfig` in each `KindCluster` manifest.
The manifest also references the provider-kind `ProviderConfig` (`kind-provider-config` in the `providerConfigRef` field):
+
+```
+apiVersion: container.kind.crossplane.io/v1alpha1
+kind: KindCluster
+metadata:
+  name: example-kind-cluster
+spec:
+  forProvider:
+    kindConfig: |
+      kind: Cluster
+      apiVersion: kind.x-k8s.io/v1alpha4
+      nodes:
+      - role: control-plane
+        kubeadmConfigPatches:
+        - |
+          kind: InitConfiguration
+          nodeRegistration:
+            kubeletExtraArgs:
+              node-labels: "ingress-ready=true"
+        extraPortMappings:
+        - containerPort: 80
+          hostPort: 80
+          protocol: TCP
+        - containerPort: 443
+          hostPort: 443
+          protocol: TCP
+      containerdConfigPatches:
+      - |-
+        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gitea.cnoe.localtest.me:443"]
+          endpoint = ["https://gitea.cnoe.localtest.me"]
+        [plugins."io.containerd.grpc.v1.cri".registry.configs."gitea.cnoe.localtest.me".tls]
+          insecure_skip_verify = true
+  providerConfigRef:
+    name: kind-provider-config
+  writeConnectionSecretToRef:
+    namespace: default
+    name: kind-connection-secret
+```
+
+After the kind cluster has been created, its kube config is stored in the kubernetes secret `kind-connection-secret` which `writeConnectionSecretToRef` references.
+
+### Outputs
+
+The three outputs can be received by getting the `KindCluster` manifest after the cluster has been created. The `KindCluster` is
+available for reading even before the cluster has been created, but the three output fields are empty until then. The ready state
+will also switch from `false` to `true` once the cluster has been created.
+
+#### kubernetesVersion, internalIP and readiness
+
+These fields can be received with a standard kubectl get command:
+
+```
+$ kubectl get kindclusters kindcluster-fw252 -o yaml
+...
+status:
+  atProvider:
+    internalIP: 192.168.199.19
+    kubernetesVersion: v1.31.0
+  conditions:
+  - lastTransitionTime: "2024-11-12T18:22:39Z"
+    reason: Available
+    status: "True"
+    type: Ready
+  - lastTransitionTime: "2024-11-12T18:21:38Z"
+    reason: ReconcileSuccess
+    status: "True"
+    type: Synced
+```
+
+#### kube config
+
+The kube config is stored in a kubernetes secret (`kind-connection-secret`) which can be accessed after the cluster has been
+created:
+
+```
+$ kubectl get kindclusters kindcluster-fw252 -o yaml
+...
+  writeConnectionSecretToRef:
+    name: kind-connection-secret
+    namespace: default
+...
+
+$ kubectl get secret kind-connection-secret
+NAME                     TYPE                                DATA   AGE
+kind-connection-secret   connection.crossplane.io/v1alpha1   2      107m
+```
+
+The API endpoint of the new cluster (`endpoint`) and its kube config (`kubeconfig`) are stored in that secret. These values are set in
+the Observe function of the kind controller of provider-kind, via the special crossplane structure `managed.ExternalObservation`.
+
+## The reconciler loop of a crossplane provider
+
+The reconciler loop is the heart of every crossplane provider. As it is asynchronously coupled, it is best described in words:
+
+Internally, the Connect function gets triggered first in the kindcluster controller `internal/controller/kindcluster/kindcluster.go`
+to set up the provider and configure it with the kindserver password and the IP address of the kindserver.
+
+After the provider-kind has been configured with the kindserver secret and its `ProviderConfig`, the provider is ready to
+be activated by applying a `KindCluster` manifest to kubernetes.
+
+When the user applies a new `KindCluster` manifest, an observe loop is started. The provider regularly triggers the `Observe`
+function of the controller. As nothing has been created yet, the controller will return
+`managed.ExternalObservation{ResourceExists: false}` to signal that the kind cluster resource has not been created yet.
+As a kindserver SDK is available, the controller uses the `Get` function of the SDK to query the kindserver.
+
+The `KindCluster` is already applied and can be retrieved with `kubectl get kindclusters`. As the cluster has not been
+created yet, its readiness state is `false`.
+
+In parallel, the `Create` function is triggered in the controller. This function has access to the desired kind config
+`cr.Spec.ForProvider.KindConfig` and the name of the kind cluster `cr.ObjectMeta.Name`. It can now call the kindserver SDK to
+create a new cluster with the given config and name. The create function is not supposed to run too long, therefore
+it returns directly in the case of provider-kind. The kindserver already knows the name of the new cluster, and even though it is
+not yet ready, it will respond with a partial success.
+
+The observe loop keeps being triggered regularly in parallel. It will be triggered after the create call but before the kind cluster has been
+created. Now it gets a step further: the kindserver reports that the cluster is already known, but not
+finished creating yet.
+
+After the cluster has finished creating, the kindserver has all important information for the provider-kind, namely
+the API server endpoint of the new cluster and its kube config. After another round of the observe loop, the controller
+now gets the full set of information about the kind cluster (cluster ready, its API server endpoint and its kube config).
+When this information has been received from the kindserver SDK in the form of a JSON file, the controller is able to signal successful
+creation of the cluster.
That is done by returning the following structure from inside the observe function:
+
+```
+	return managed.ExternalObservation{
+		ResourceExists:   true,
+		ResourceUpToDate: true,
+		ConnectionDetails: managed.ConnectionDetails{
+			xpv1.ResourceCredentialsSecretEndpointKey:   []byte(clusterInfo.Endpoint),
+			xpv1.ResourceCredentialsSecretKubeconfigKey: []byte(clusterInfo.KubeConfig),
+		},
+	}, nil
+```
+
+Note that the managed.ConnectionDetails will automatically write the API server endpoint and its kube config to the kubernetes
+secret which `writeConnectionSecretToRef` of `KindCluster` points to.
+
+The controller also sets the availability flag before returning, which marks the `KindCluster` as ready:
+
+```
+	cr.Status.SetConditions(xpv1.Available())
+```
+
+Before returning, it also sets the information that is transferred into fields of `KindCluster` which can be retrieved by
+`kubectl get`, the `kubernetesVersion` and the `internalIP` fields:
+
+```
+	cr.Status.AtProvider.KubernetesVersion = clusterInfo.K8sVersion
+	cr.Status.AtProvider.InternalIP = clusterInfo.NodeIp
+```
+
+Now the `KindCluster` is set up completely, and when its data is retrieved by `kubectl get`, all data is available and its readiness
+is set to `true`.
+
+The observe loop continues to be called to enable drift detection. That detection is currently not implemented, but is
+prepared for future implementations. If the observe function detected that the kind cluster with a given name is set
+up with a kind config other than the desired one, the controller would call the `Update` function, which would
+delete the currently running kind cluster and recreate it with the desired kind config.
+
+When the user deletes the `KindCluster` manifest at a later stage, the `Delete` function of the controller is triggered
+to call the kindserver SDK to delete the cluster with the given name.
The observe loop acknowledges a successful deletion
+by receiving `kind cluster not found` from the kindserver. If the deletion failed, the controller
+triggers the delete function in a loop as well, until the kind cluster has been deleted.
+
+That completes the reconciler loop.
+
+## kind API server IP address
+
+Each newly created kind cluster has a practically random kubernetes API server endpoint. As the IP address of a new kind cluster
+can't be determined before creation, the kindserver manages the API server field of the kind config. It maps all
+kind cluster kubernetes API endpoints onto its own IP address, but on different ports. That guarantees that all kind
+clusters can access the kubernetes API endpoints of all other kind clusters by using the docker host IP of the kindserver
+itself. This is needed because the kube config hardcodes the kubernetes API server endpoint. By using the docker host IP
+with different ports, every usage of a kube config from one kind cluster to another works.
+
+The management of the kind config in the kindserver is implemented in the `Post` function of the kindserver `main.go` file.
+
+## Creating the crossplane provider-kind
+
+The official way to create crossplane providers is to use the provider-template. Follow these steps to create
+a new provider.
+
+First, clone the provider-template. The commit ID at the time this howto was written is 2e0b022c22eb50a8f32de2e09e832f17161d7596.
+Rename the new folder after cloning.
+
+```
+git clone https://github.com/crossplane/provider-template.git
+mv provider-template provider-kind
+cd provider-kind/
+```
+
+The information in the provided README.md is incomplete. Follow these steps to get it running:
+
+> Please use bash for the next commands (`${type,,}` e.g. is not a mistake)
+
+```
+make submodules
+export provider_name=Kind # Camel case, e.g.
GitHub
+make provider.prepare provider=${provider_name}
+export group=container # lower case e.g. core, cache, database, storage, etc.
+export type=KindCluster # Camel case, e.g. Bucket, Database, CacheCluster, etc.
+make provider.addtype provider=${provider_name} group=${group} kind=${type}
+sed -i "s/sample/${group}/g" apis/${provider_name,,}.go
+sed -i "s/mytype/${type,,}/g" internal/controller/${provider_name,,}.go
+```
+
+Patch the Makefile:
+
+```
+dev: $(KIND) $(KUBECTL)
+	@$(INFO) Creating kind cluster
++	@$(KIND) delete cluster --name=$(PROJECT_NAME)-dev
+	@$(KIND) create cluster --name=$(PROJECT_NAME)-dev
+	@$(KUBECTL) cluster-info --context kind-$(PROJECT_NAME)-dev
+-	@$(INFO) Installing Crossplane CRDs
+-	@$(KUBECTL) apply --server-side -k https://github.com/crossplane/crossplane//cluster?ref=master
++	@$(INFO) Installing Crossplane
++	@helm install crossplane --namespace crossplane-system --create-namespace crossplane-stable/crossplane --wait
+	@$(INFO) Installing Provider Template CRDs
+	@$(KUBECTL) apply -R -f package/crds
+	@$(INFO) Starting Provider Template controllers
+```
+
+Generate, build and execute the new provider-kind:
+
+```
+make generate
+make build
+make dev
+```
+
+Now it's time to add the required fields (internalIP, endpoint, etc.) to the spec fields in the go api sources found in:
+
+- apis/container/v1alpha1/kindcluster_types.go
+- apis/v1alpha1/providerconfig_types.go
+
+The file `apis/kind.go` may also be modified. The word `sample` can be replaced with `container` in our case.
+
+When that's done, the yaml specifications need to be modified to also include the required fields (internalIP, endpoint, etc.)
+
+Next, a kindserver SDK can be implemented. That is a helper class which encapsulates the get, create and delete HTTP calls to the kindserver. Connection details (kindserver IP address and password) are stored by the constructor.
+
+After that we can add the usage of the kindclient SDK in the kindcluster controller `internal/controller/kindcluster/kindcluster.go`.
+
+Finally we can update the `Makefile` to better handle the primary kind cluster creation and add a cluster role binding
+so that crossplane can access the `KindCluster` objects. Examples and updating the README.md will finish the development.
+
+All these steps are documented in: https://forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/DevFW/provider-kind/pulls/1
+
+## Publish the provider-kind to a user defined docker registry
+
+Every provider-kind release needs to be tagged first in the git repository:
+
+```
+git tag v0.1.0
+git push origin v0.1.0
+```
+
+Next, make sure you are logged in to the target registry with docker:
+
+```
+docker login forgejo.edf-bootstrap.cx.fg1.ffm.osc.live
+```
+
+Now it's time to specify the target registry, build the provider-kind for ARM64 and AMD64 CPU architectures and publish it to the target registry:
+
+```
+XPKG_REG_ORGS_NO_PROMOTE="" XPKG_REG_ORGS="forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz" make build.all publish BRANCH_NAME=main
+```
+
+The parameter `BRANCH_NAME=main` is needed when the tagging and publishing happens from another branch. The version of the provider-kind is that of the tag name. The output of the make call then ends like this:
+
+```
+$ XPKG_REG_ORGS_NO_PROMOTE="" XPKG_REG_ORGS="forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz" make build.all publish BRANCH_NAME=main
+...
+14:09:19 [ .. ] Skipping image publish for docker.io/provider-kind:v0.1.0
+Publish is deferred to xpkg machinery
+14:09:19 [ OK ] Image publish skipped for docker.io/provider-kind:v0.1.0
+14:09:19 [ ..
] Pushing package forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz/provider-kind:v0.1.0
+xpkg pushed to forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz/provider-kind:v0.1.0
+14:10:19 [ OK ] Pushed package forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz/provider-kind:v0.1.0
+```
+
+After publishing, the provider-kind can be installed in-cluster similar to other providers like
+provider-helm and provider-kubernetes. To install it, apply the following manifest:
+
+```
+apiVersion: pkg.crossplane.io/v1
+kind: Provider
+metadata:
+  name: provider-kind
+spec:
+  package: forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz/provider-kind:v0.1.0
+```
+
+The output of `kubectl get providers`:
+
+```
+$ kubectl get providers
+NAME                  INSTALLED   HEALTHY   PACKAGE                                                                             AGE
+provider-helm         True        True      xpkg.upbound.io/crossplane-contrib/provider-helm:v0.19.0                            38m
+provider-kind         True        True      forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz/provider-kind:v0.1.0   39m
+provider-kubernetes   True        True      xpkg.upbound.io/crossplane-contrib/provider-kubernetes:v0.15.0                      38m
+```
+
+The provider-kind can now be used.
+
+## Crossplane Composition `edfbuilder`
+
+Together with provider-kind and its configuration, a composition can be created that provisions kind clusters and
+deploys helm and kubernetes objects into the newly created cluster.
+
+A composition is realized as a custom resource definition (CRD) consisting of three parts:
+
+- A definition
+- A composition
+- One or more deployments of the composition
+
+### definition.yaml
+
+The definition of the CRD will most probably contain one additional field, the ArgoCD repository URL, to easily select
+the stacks which should be deployed:
+
+```
+apiVersion: apiextensions.crossplane.io/v1
+kind: CompositeResourceDefinition
+metadata:
+  name: edfbuilders.edfbuilder.crossplane.io
+spec:
+  connectionSecretKeys:
+    - kubeconfig
+  group: edfbuilder.crossplane.io
+  names:
+    kind: EDFBuilder
+    listKind: EDFBuilderList
+    plural: edfbuilders
+    singular: edfbuilders
+  versions:
+    - name: v1alpha1
+      served: true
+      referenceable: true
+      schema:
+        openAPIV3Schema:
+          description: An EDFBuilder is a composite resource that represents a K8S Cluster with edfbuilder Installed
+          type: object
+          properties:
+            spec:
+              type: object
+              properties:
+                repoURL:
+                  type: string
+                  description: URL to ArgoCD stack of stacks repo
+              required:
+                - repoURL
+```
+
+### composition.yaml
+
+This is a shortened version of the file `examples/composition_deprecated/composition.yaml`. It combines a `KindCluster` with
+deployments of provider-helm and provider-kubernetes. Note that the `ProviderConfig` and the kindserver secret have already been
+applied to kubernetes (by the Makefile) before applying this composition.
+ +``` +apiVersion: apiextensions.crossplane.io/v1 +kind: Composition +metadata: + name: edfbuilders.edfbuilder.crossplane.io +spec: + writeConnectionSecretsToNamespace: crossplane-system + compositeTypeRef: + apiVersion: edfbuilder.crossplane.io/v1alpha1 + kind: EDFBuilder + resources: + + ### kindcluster + - base: + apiVersion: container.kind.crossplane.io/v1alpha1 + kind: KindCluster + metadata: + name: example + spec: + forProvider: + kindConfig: | + kind: Cluster + apiVersion: kind.x-k8s.io/v1alpha4 + nodes: + - role: control-plane + kubeadmConfigPatches: + - | + kind: InitConfiguration + nodeRegistration: + kubeletExtraArgs: + node-labels: "ingress-ready=true" + extraPortMappings: + - containerPort: 80 + hostPort: 80 + protocol: TCP + - containerPort: 443 + hostPort: 443 + protocol: TCP + containerdConfigPatches: + - |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gitea.cnoe.localtest.me:443"] + endpoint = ["https://gitea.cnoe.localtest.me"] + [plugins."io.containerd.grpc.v1.cri".registry.configs."gitea.cnoe.localtest.me".tls] + insecure_skip_verify = true + providerConfigRef: + name: example-provider-config + writeConnectionSecretToRef: + namespace: default + name: my-connection-secret + + ### helm provider config + - base: + apiVersion: helm.crossplane.io/v1beta1 + kind: ProviderConfig + spec: + credentials: + source: Secret + secretRef: + namespace: default + name: my-connection-secret + key: kubeconfig + patches: + - fromFieldPath: metadata.name + toFieldPath: metadata.name + readinessChecks: + - type: None + + ### ingress-nginx + - base: + apiVersion: helm.crossplane.io/v1beta1 + kind: Release + metadata: + annotations: + crossplane.io/external-name: ingress-nginx + spec: + rollbackLimit: 99999 + forProvider: + chart: + name: ingress-nginx + repository: https://kubernetes.github.io/ingress-nginx + version: 4.11.3 + namespace: ingress-nginx + values: + controller: + updateStrategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 1 + 
hostPort: + enabled: true + terminationGracePeriodSeconds: 0 + service: + type: NodePort + watchIngressWithoutClass: true + + nodeSelector: + ingress-ready: "true" + tolerations: + - key: "node-role.kubernetes.io/master" + operator: "Equal" + effect: "NoSchedule" + - key: "node-role.kubernetes.io/control-plane" + operator: "Equal" + effect: "NoSchedule" + + publishService: + enabled: false + extraArgs: + publish-status-address: localhost + # added for idpbuilder + enable-ssl-passthrough: "" + + # added for idpbuilder + allowSnippetAnnotations: true + + # added for idpbuilder + config: + proxy-buffer-size: 32k + use-forwarded-headers: "true" + patches: + - fromFieldPath: metadata.name + toFieldPath: spec.providerConfigRef.name + + ### kubernetes provider config + - base: + apiVersion: kubernetes.crossplane.io/v1alpha1 + kind: ProviderConfig + spec: + credentials: + source: Secret + secretRef: + namespace: default + name: my-connection-secret + key: kubeconfig + patches: + - fromFieldPath: metadata.name + toFieldPath: metadata.name + readinessChecks: + - type: None + + ### kubernetes argocd stack of stacks application + - base: + apiVersion: kubernetes.crossplane.io/v1alpha2 + kind: Object + spec: + forProvider: + manifest: + apiVersion: argoproj.io/v1alpha1 + kind: Application + metadata: + name: edfbuilder + namespace: argocd + labels: + env: dev + spec: + destination: + name: in-cluster + namespace: argocd + source: + path: registry + repoURL: 'https://gitea.cnoe.localtest.me/giteaAdmin/edfbuilder-shoot' + targetRevision: HEAD + project: default + syncPolicy: + automated: + prune: true + selfHeal: true + syncOptions: + - CreateNamespace=true + patches: + - fromFieldPath: metadata.name + toFieldPath: spec.providerConfigRef.name +``` + +## Usage + +Set this values to allow many kind clusters running in parallel, if needed: + +``` +sudo sysctl fs.inotify.max_user_watches=524288 +sudo sysctl fs.inotify.max_user_instances=512 + +To make the changes persistent, edit the 
file /etc/sysctl.conf and add these lines:
+```
+
+```
+fs.inotify.max_user_watches = 524288
+fs.inotify.max_user_instances = 512
+```
+
+Start provider-kind:
+
+```
+make build
+kind delete clusters $(kind get clusters)
+kind create cluster --name=provider-kind-dev
+DOCKER_HOST_IP="$(docker inspect $(docker ps | grep kindest | awk '{ print $1 }' | head -n1) | jq -r .[0].NetworkSettings.Networks.kind.Gateway)" make dev
+```
+
+Wait until debug output of the provider-kind is shown:
+
+```
+...
+namespace/crossplane-system configured
+secret/example-provider-secret created
+providerconfig.kind.crossplane.io/example-provider-config created
+14:49:50 [ .. ] Starting Provider Kind controllers
+2024-11-12T14:49:54+01:00 INFO controller-runtime.metrics Starting metrics server
+2024-11-12T14:49:54+01:00 INFO Starting EventSource {"controller": "providerconfig/providerconfig.kind.crossplane.io", "controllerGroup": "kind.crossplane.io", "controllerKind": "ProviderConfig", "source": "kind source: *v1alpha1.ProviderConfig"}
+2024-11-12T14:49:54+01:00 INFO Starting EventSource {"controller": "providerconfig/providerconfig.kind.crossplane.io", "controllerGroup": "kind.crossplane.io", "controllerKind": "ProviderConfig", "source": "kind source: *v1alpha1.ProviderConfigUsage"}
+2024-11-12T14:49:54+01:00 INFO Starting Controller {"controller": "providerconfig/providerconfig.kind.crossplane.io", "controllerGroup": "kind.crossplane.io", "controllerKind": "ProviderConfig"}
+2024-11-12T14:49:54+01:00 INFO Starting EventSource {"controller": "managed/kindcluster.container.kind.crossplane.io", "controllerGroup": "container.kind.crossplane.io", "controllerKind": "KindCluster", "source": "kind source: *v1alpha1.KindCluster"}
+2024-11-12T14:49:54+01:00 INFO Starting Controller {"controller": "managed/kindcluster.container.kind.crossplane.io", "controllerGroup": "container.kind.crossplane.io", "controllerKind": "KindCluster"}
+2024-11-12T14:49:54+01:00 INFO controller-runtime.metrics Serving metrics
server {"bindAddress": ":8080", "secure": false} +2024-11-12T14:49:54+01:00 INFO Starting workers {"controller": "providerconfig/providerconfig.kind.crossplane.io", "controllerGroup": "kind.crossplane.io", "controllerKind": "ProviderConfig", "worker count": 10} +2024-11-12T14:49:54+01:00 DEBUG provider-kind Reconciling {"controller": "providerconfig/providerconfig.kind.crossplane.io", "request": {"name":"example-provider-config"}} +2024-11-12T14:49:54+01:00 INFO Starting workers {"controller": "managed/kindcluster.container.kind.crossplane.io", "controllerGroup": "container.kind.crossplane.io", "controllerKind": "KindCluster", "worker count": 10} +2024-11-12T14:49:54+01:00 INFO KubeAPIWarningLogger metadata.finalizers: "in-use.crossplane.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers +2024-11-12T14:49:54+01:00 DEBUG provider-kind Reconciling {"controller": "providerconfig/providerconfig.kind.crossplane.io", "request": {"name":"example-provider-config"}} + +``` + +Start kindserver: + +see kindserver/README.md + +When kindserver is started: + +``` +cd examples/composition_deprecated +kubectl apply -f definition.yaml +kubectl apply -f composition.yaml +kubectl apply -f cluster.yaml +``` + +List the created elements, wait until the new cluster is created, then switch back to the primary cluster: + +``` +kubectl config use-context kind-provider-kind-dev +``` + +Show edfbuilder compositions: + +``` +kubectl get edfbuilders +NAME SYNCED READY COMPOSITION AGE +kindcluster True True edfbuilders.edfbuilder.crossplane.io 4m45s +``` + +Show kind clusters: + +``` +kubectl get kindclusters +NAME READY SYNCED EXTERNAL-NAME INTERNALIP VERSION AGE +kindcluster-wlxrt True True kindcluster-wlxrt 192.168.199.19 v1.31.0 5m12s +``` + +Show helm deployments: + +``` +kubectl get releases +NAME CHART VERSION SYNCED READY STATE REVISION DESCRIPTION AGE +kindcluster-29dgf ingress-nginx 4.11.3 True True deployed 1 Install complete 
5m32s +kindcluster-w2dxl forgejo 10.0.2 True True deployed 1 Install complete 5m32s +kindcluster-x8x9k argo-cd 7.6.12 True True deployed 1 Install complete 5m32s +``` + +Show kubernetes objects: + +``` +kubectl get objects +NAME KIND PROVIDERCONFIG SYNCED READY AGE +kindcluster-8tbv8 ConfigMap kindcluster True True 5m50s +kindcluster-9lwc9 ConfigMap kindcluster True True 5m50s +kindcluster-9sgmd Deployment kindcluster True True 5m50s +kindcluster-ct2h7 Application kindcluster True True 5m50s +kindcluster-s5knq ConfigMap kindcluster True True 5m50s +``` + +Open the composition in VS Code: examples/composition_deprecated/composition.yaml + +## What is missing + +Currently missing is the third and final part, the imperative steps which need to be processed: + +- creation of TLS certificates and giteaAdmin password +- creation of a Forgejo repository for the stacks +- uploading the stacks in the Forgejo repository + +Connecting the definition field (ArgoCD repo URL) and composition interconnects (function-patch-and-transform) are also missing. 
\ No newline at end of file diff --git a/docs/technical-documentation/solution/tools/Crossplane/provider-kind/composition.drawio b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/composition.drawio new file mode 100644 index 0000000..48abda4 --- /dev/null +++ b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/composition.drawio @@ -0,0 +1,72 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/docs/technical-documentation/solution/tools/Crossplane/provider-kind/composition.png b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/composition.png new file mode 100644 index 0000000..ce63ab8 Binary files /dev/null and b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/composition.png differ diff --git a/docs/technical-documentation/solution/tools/Crossplane/provider-kind/kindserver_interface.drawio b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/kindserver_interface.drawio new file mode 100644 index 0000000..4d11b51 --- /dev/null +++ b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/kindserver_interface.drawio @@ -0,0 +1,31 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/docs/technical-documentation/solution/tools/Crossplane/provider-kind/kindserver_interface.png b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/kindserver_interface.png new file mode 100644 index 0000000..5d09530 Binary files /dev/null and b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/kindserver_interface.png differ diff --git a/docs/technical-documentation/solution/tools/Crossplane/provider-kind/kindserver_provider-kind.drawio b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/kindserver_provider-kind.drawio new file mode 
100644 index 0000000..7da7ae6 --- /dev/null +++ b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/kindserver_provider-kind.drawio @@ -0,0 +1,49 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/docs/technical-documentation/solution/tools/Crossplane/provider-kind/kindserver_provider-kind.png b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/kindserver_provider-kind.png new file mode 100644 index 0000000..c55fc61 Binary files /dev/null and b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/kindserver_provider-kind.png differ diff --git a/docs/technical-documentation/solution/tools/Crossplane/provider-kind/provider-kind_providerconfig.drawio b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/provider-kind_providerconfig.drawio new file mode 100644 index 0000000..44dd400 --- /dev/null +++ b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/provider-kind_providerconfig.drawio @@ -0,0 +1,71 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/docs/technical-documentation/solution/tools/Crossplane/provider-kind/provider-kind_providerconfig.png b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/provider-kind_providerconfig.png new file mode 100644 index 0000000..e588964 Binary files /dev/null and b/docs/technical-documentation/solution/tools/Crossplane/provider-kind/provider-kind_providerconfig.png differ diff --git a/docs/technical-documentation/solution/tools/Kube-prometheus-stack/_index.md b/docs/technical-documentation/solution/tools/Kube-prometheus-stack/_index.md new file mode 100644 index 0000000..2bbf352 --- /dev/null +++ b/docs/technical-documentation/solution/tools/Kube-prometheus-stack/_index.md @@ -0,0 +1,30 @@ +--- 
+title: Kube-prometheus-stack +description: Kube-prometheus-stack contains Kubernetes manifests, Prometheus and Grafana, including preconfigured dashboards +--- + +## Kube-prometheus-stack Overview + +Grafana is an open-source monitoring solution that enables visualization of metrics and logs. +Prometheus is an open-source monitoring and alerting system which collects metrics from services and allows the metrics to be shown in Grafana. + +### Implementation Details + +The application is started in edfbuilder/kind/stacks/core/kube-prometheus.yaml. +The application has the sync option spec.syncPolicy.syncOptions ServerSideApply=true. This is necessary since kube-prometheus-stack exceeds the size limit for secrets; without this option a sync attempt will fail and throw an exception. +The Helm values file edfbuilder/kind/stacks/core/kube-prometheus/values.yaml contains configuration values: +grafana.additionalDataSources contains Loki as a Grafana data source. +grafana.ingress contains the Grafana ingress configuration, like the host URL (cnoe.localtest.me). +grafana.sidecar.dashboards contains the configuration necessary so that additional user-defined dashboards are loaded when Grafana is started. +grafana.grafana.ini.server contains configuration details that are necessary so the ingress points to the correct URL. + +### Start +Once Grafana is running it is accessible under https://cnoe.localtest.me/grafana. +Many preconfigured dashboards can be used by clicking the menu option Dashboards. + +### Adding your own dashboards +The application edfbuilder/kind/stacks/core/kube-prometheus.yaml is used to import new Loki dashboards. Examples of imported dashboards can be found in the folder edfbuilder/kind/stacks/core/kube-prometheus/dashboards. + +It is possible to add your own dashboards: dashboards must be in JSON format. To add your own dashboard, create a new ConfigMap in YAML format using one of the examples as a blueprint. 
The new dashboard in JSON format has to be added as the value for data.k8s-dashboard-[...].json like in the examples. (It is important to use a unique name for data.k8s-dashboard-[...].json for each dashboard.) + +Currently preconfigured dashboards include several dashboards for Loki and a dashboard to showcase Nginx-Ingress metrics. diff --git a/docs/technical-documentation/solution/tools/Loki/_index.md b/docs/technical-documentation/solution/tools/Loki/_index.md new file mode 100644 index 0000000..91945b3 --- /dev/null +++ b/docs/technical-documentation/solution/tools/Loki/_index.md @@ -0,0 +1,10 @@ +--- +title: Loki +description: Grafana Loki is a scalable open-source log aggregation system +--- + +## Loki Overview + +The application Grafana Loki is started in edfbuilder/kind/stacks/core/loki.yaml. +Loki is started in microservices mode and contains the components ingester, distributor, querier, and query-frontend. +The Helm values file edfbuilder/kind/stacks/core/loki/values.yaml contains configuration values. diff --git a/docs/technical-documentation/solution/tools/Promtail/_index.md b/docs/technical-documentation/solution/tools/Promtail/_index.md new file mode 100644 index 0000000..a5a1a81 --- /dev/null +++ b/docs/technical-documentation/solution/tools/Promtail/_index.md @@ -0,0 +1,9 @@ +--- +title: Promtail +description: Grafana Promtail is an agent that ships logs to a Grafana Loki instance (log-shipper) +--- + +## Promtail Overview + +The application Grafana Promtail is started in edfbuilder/kind/stacks/core/promtail.yaml. +The Helm values file edfbuilder/kind/stacks/core/promtail/values.yaml contains configuration values. 
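+ +To illustrate the dashboard ConfigMap described in the Kube-prometheus-stack section above, a minimal sketch might look like the following (the ConfigMap name, the dashboard file name, and the sidecar label are assumptions for illustration; use one of the existing examples in edfbuilder/kind/stacks/core/kube-prometheus/dashboards for the exact conventions): + +```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-dashboard              # hypothetical name
  labels:
    grafana_dashboard: "1"        # label the Grafana sidecar typically watches for (assumption)
data:
  k8s-dashboard-my-dashboard.json: |
    {
      "title": "My Dashboard",
      "panels": []
    }
```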
diff --git a/docs/technical-documentation/solution/tools/_index.md b/docs/technical-documentation/solution/tools/_index.md new file mode 100644 index 0000000..2771cce --- /dev/null +++ b/docs/technical-documentation/solution/tools/_index.md @@ -0,0 +1,7 @@ +--- +title: Tools +linkTitle: Tools +weight: 4 +description: The tools that are used for implementing Edge Developer Framework +--- + diff --git a/docs/technical-documentation/solution/tools/kyverno integration/_index.md b/docs/technical-documentation/solution/tools/kyverno integration/_index.md new file mode 100644 index 0000000..12ca83e --- /dev/null +++ b/docs/technical-documentation/solution/tools/kyverno integration/_index.md @@ -0,0 +1,44 @@ +--- +title: Kyverno +description: Kyverno is a policy engine for Kubernetes designed to enforce, validate, and mutate configurations of Kubernetes resources +--- + +## Kyverno Overview + +Kyverno is a policy engine for Kubernetes designed to enforce, validate, and mutate configurations of Kubernetes resources. It allows administrators to define policies as Kubernetes custom resources (CRDs) without requiring users to learn a new language or system. + +### Key Uses + +1. **Policy Enforcement**: Kyverno ensures resources comply with security, operational, or organizational policies, such as requiring specific labels, annotations, or resource limits. +2. **Validation**: It checks resources against predefined rules, ensuring configurations are correct before they are applied to the cluster. +3. **Mutation**: Kyverno can automatically modify resources on-the-fly, adding missing fields or values to Kubernetes objects. +4. **Generation**: It can generate resources like ConfigMaps or Secrets automatically when needed, helping to maintain consistency. + +Kyverno simplifies governance and compliance in Kubernetes environments by automating policy management and ensuring best practices are followed. 
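+ +As an illustration of the validation use case above, a minimal Kyverno ClusterPolicy could look like this sketch (the policy name and the required label are examples only, not part of this installation): + +```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label        # example name
spec:
  validationFailureAction: Audit  # report violations without blocking
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label 'team' is required."
        pattern:
          metadata:
            labels:
              team: "?*"          # any non-empty value
```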
+ +## Prerequisites +Same as for idpbuilder installation: +- Docker Engine +- Go +- kubectl +- kind + +## Installation +### Build process +For building idpbuilder, the source code needs to be downloaded and compiled: + +``` +git clone https://github.com/cnoe-io/idpbuilder.git +cd idpbuilder +go build +``` + +### Start idpbuilder + +To start idpbuilder with Kyverno integration, execute the following command: + +``` +idpbuilder create --use-path-routing -p https://github.com/cnoe-io/stacks//ref-implementation -p https://github.com/cnoe-io/stacks//kyverno-integration +``` + +After this step, you can see in ArgoCD that Kyverno was installed. diff --git a/docs/technical-documentation/solution/tools/kyverno integration/kyverno.png b/docs/technical-documentation/solution/tools/kyverno integration/kyverno.png new file mode 100644 index 0000000..c6f42fc Binary files /dev/null and b/docs/technical-documentation/solution/tools/kyverno integration/kyverno.png differ diff --git a/mkdocs.yaml b/mkdocs.yaml index b90ab9e..f612e5a 100644 --- a/mkdocs.yaml +++ b/mkdocs.yaml @@ -24,9 +24,17 @@ nav: - Use Cases: technical-documentation/concepts/3_use-cases/_index.md - Digital Platforms: technical-documentation/concepts/4_digital-platforms/_index.md - Platform Orchestrators: technical-documentation/concepts/5_platforms/_index.md - - About: - - License: about/license.md - - Release Notes: about/release-notes.md + - EDP solution: + - Design: technical-documentation/solution/design/_index.md + - Scenarios: technical-documentation/solution/scenarios/_index.md + - Tools: technical-documentation/solution/tools/_index.md + - EDP Project: + - Bootstrapping: technical-documentation/project/bootstrapping/_index.md + - Conceptual Onboarding: technical-documentation/project/conceptual-onboarding/_index.md + - Stakeholder Workshop: technical-documentation/project/intro-stakeholder-workshop/_index.md + - Plan 2024: technical-documentation/project/plan-in-2024/_index.md + - Team Process: 
technical-documentation/project/team-process/_index.md + plugins: - techdocs-core