doc(victoria-k8s-stack): WIP ... first steps of documenting the components we have, see DevFW/infra-deploy#13 (comment)

Stephan Lo 2025-05-26 16:09:56 +02:00
parent a541850e7b
commit d6e4421f10
3 changed files with 38 additions and 0 deletions

# Victoria k8s Stack
We use [Victoria Metrics k8s stack](https://docs.victoriametrics.com/helm/victoriametrics-k8s-stack/) and [Vector](https://docs.victoriametrics.com/helm/victorialogs-single/#sending-logs-to-external-victorialogs).
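
Both pieces are installed as Helm charts. As a rough orientation, here is a minimal, hypothetical `values.yaml` sketch for the `victoria-metrics-k8s-stack` chart; the key names follow the chart's documented layout, while the concrete values are assumptions, not our actual configuration:

```yaml
# Hypothetical values.yaml sketch for the victoria-metrics-k8s-stack chart.
# Key names follow the chart's documented layout; values are assumptions.
vmsingle:
  enabled: true            # single-node metrics server (instead of vmcluster)
vmagent:
  enabled: true            # scrapes metrics and remote-writes to vmsingle
vmalert:
  enabled: false           # not used yet (see the component list below)
grafana:
  enabled: true            # the 'K' (Kibana) replacement for dashboards
prometheus-node-exporter:
  enabled: true            # per-node metrics endpoint (DaemonSet)
```

Vector itself comes from a separate chart and is pointed at VictoriaLogs; a configuration sketch follows in the Deployment section below.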
## Why do we use it?
* compared with competitors such as ELK, Loki, and Prometheus:
  * it delivers both logging and metrics in a single stack
  * relative to ELK, the 'E' (Elasticsearch) is replaced by VictoriaLogs and VictoriaMetrics, the 'L' (Logstash) by Vector, and the 'K' (Kibana) by Grafana
  * Loki (another alternative for the 'E') consists of five components (distributor, querier, query-frontend, ...), whereas the VictoriaMetrics stack is simpler to run
  * ELK is hard to manage
* Durability: we need to store logs for years, so there should be a 'shrink' process (retention/downsampling)
* Challenge: scaling, since there are huge amounts of data (on the order of TB/day)
* overall we strive for simplicity, low cost, and scalability
## Big Picture
### Architecture
The high-level deployment picture of the VictoriaMetrics k8s Stack looks like this:
![VictoriaMetrics k8s stack deployment architecture](./_assets/vm-deployment-architecture.png)
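
For illustration, the metrics flow in this picture can also be expressed through the operator's custom resources. A minimal, hypothetical sketch, assuming a single-node setup; the resource names, service name/port, and retention value are assumptions, not our actual manifests:

```yaml
# Hypothetical CRs mirroring the picture: vmagent scrapes targets and
# remote-writes to a single-node VictoriaMetrics instance. Names are assumed.
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMSingle
metadata:
  name: victoria-metrics
spec:
  retentionPeriod: "12"            # e.g. keep metrics for 12 months (assumed)
---
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMAgent
metadata:
  name: vmagent
spec:
  selectAllByDefault: true         # scrape every target the operator discovers
  remoteWrite:
    # push scraped metrics to the VMSingle service (assumed service name/port)
    - url: http://vmsingle-victoria-metrics:8428/api/v1/write
```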
### Deployment
In detail, after deploying it, we see the following components:
![Pods of the VictoriaMetrics k8s stack deployment](./_assets/vm-pods.png)
1. vector-vector: the log shipper to VictoriaLogs; it appears twice because it is a DaemonSet and is therefore deployed on each node in the cluster (a configuration sketch follows after this list)
2. prometheus-node-exporter: a metrics generator and metrics endpoint for node metrics, also deployed on each node
3. vmagent: the central agent scraping data from the metrics endpoints
4. vmalert: not used yet
5. vmsingle-victoria-metrics: the metrics server, receiving its data from vmagent
6. vlogs: the logging server, receiving its data from vector
7. victoria-metrics-operator: the operator providing and managing the custom resources we deploy
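
To make the hop from vector to vlogs (items 1 and 6) concrete, here is a minimal, hypothetical Vector sink fragment (e.g. under the Vector chart's `customConfig`). VictoriaLogs ingests logs via its Elasticsearch-compatible bulk API; the `VL-*` field mapping follows the VictoriaLogs documentation pattern, but the endpoint host and source name are assumptions:

```yaml
# Hypothetical Vector sink: ship collected pod logs to VictoriaLogs.
sinks:
  vlogs:
    type: elasticsearch              # VictoriaLogs speaks the ES bulk protocol
    inputs: [kubernetes_logs]        # assumed name of the pod-log source
    endpoints:
      - http://vlogs-victoria-logs-single-server:9428/insert/elasticsearch/
    mode: bulk
    api_version: v8
    compression: gzip
    healthcheck:
      enabled: false                 # VictoriaLogs has no ES health endpoint
    request:
      headers:
        VL-Msg-Field: message        # field that holds the log line
        VL-Time-Field: timestamp
        VL-Stream-Fields: kubernetes.pod_namespace,kubernetes.pod_name,kubernetes.container_name
```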