# Victoria k8s Stack
We use the [VictoriaMetrics k8s stack](https://docs.victoriametrics.com/helm/victoriametrics-k8s-stack/) for metrics and [Vector](https://docs.victoriametrics.com/helm/victorialogs-single/#sending-logs-to-external-victorialogs) for shipping logs to VictoriaLogs.
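For orientation, a minimal installation sketch with Helm; the release names and the `monitoring` namespace are illustrative, not our actual deployment parameters:

```bash
# Add the upstream Helm repositories for VictoriaMetrics and Vector.
helm repo add vm https://victoriametrics.github.io/helm-charts/
helm repo add vector https://helm.vector.dev
helm repo update

# Install the VictoriaMetrics k8s stack (vmsingle, vmagent, Grafana, operator, ...).
helm install vmks vm/victoria-metrics-k8s-stack --namespace monitoring --create-namespace

# Install VictoriaLogs (single-node) and Vector as the log shipper.
helm install vlogs vm/victoria-logs-single --namespace monitoring
helm install vector vector/vector --namespace monitoring
```

Pointing Vector at the VictoriaLogs ingestion endpoint is done via the Vector chart's `customConfig` values; see the linked VictoriaLogs documentation for the concrete sink settings.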
## Why do we use it?
* Comparing it with alternatives like ELK, Loki, and Prometheus:
* it delivers both logging and metrics in a single stack
* relative to ELK, the 'E' (Elasticsearch) is replaced by VictoriaLogs and VictoriaMetrics, the 'L' (Logstash) by Vector, and the 'K' (Kibana) by Grafana
* Loki (another replacement for the 'E') consists of several components (distributor, querier, query-frontend, ...), whereas the Victoria components are simpler to operate
* ELK is hard to manage
* durability: we need to store logs for years, so a retention ('shrink') process is required; see the sketch after this list
* challenge: scaling, since the data volume is huge (on the order of TB per day)
* overall we opt for simplicity, low cost, and scalability
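The retention requirement above can be addressed through the chart's VMSingle settings. A minimal sketch of a `values.yaml` excerpt for the victoria-metrics-k8s-stack chart; the concrete numbers are illustrative assumptions, not our production settings:

```yaml
# Illustrative values.yaml excerpt for the victoria-metrics-k8s-stack chart.
vmsingle:
  spec:
    # VictoriaMetrics interprets a plain number as months; suffixes such as
    # "30d" or "2y" are also accepted. "24" is an example, not our setting.
    retentionPeriod: "24"
    storage:
      resources:
        requests:
          storage: 100Gi  # example size; pick according to the expected ingest volume
```

VictoriaLogs has its own `-retentionPeriod` flag, so log and metric retention are configured independently.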
## Big Picture
### Architecture
The high-level deployment picture of the VictoriaMetrics k8s stack looks like this:
![VictoriaMetrics k8s stack deployment architecture](./_assets/vm-deployment-architecture.png)
### Deployment
In detail, after having deployed it, we see the following components:
![Pods of the deployed stack](./_assets/vm-pods.png)
1. vector-vector: the log shipper forwarding logs to VictoriaLogs, listed twice because it is a DaemonSet and thus deployed on each node in the cluster
2. prometheus-node-exporter: generates and exposes node metrics on a metrics endpoint, also deployed on each node
3. vmagent: the central agent scraping data from the metrics collectors
4. vmalert: not used yet
5. vmsingle-victoria-metrics: the metrics server, getting the data from vmagent
6. vlogs: the logging server, getting the data from vector
7. victoria-metrics-operator: the operator providing and managing the custom resources we deploy (see the sketch below)
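Scrape targets and similar settings are declared through these custom resources rather than by editing vmagent directly. A minimal sketch of a `VMServiceScrape`; the name, namespace, and labels are hypothetical:

```yaml
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMServiceScrape
metadata:
  name: example-app       # hypothetical application
  namespace: monitoring   # illustrative namespace
spec:
  selector:
    matchLabels:
      app: example-app    # must match the labels of the target Service
  endpoints:
    - port: metrics       # name of the Service port that exposes metrics
      path: /metrics
```

The operator turns such resources into scrape configuration for vmagent, so applications can be onboarded without touching the central configuration.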