Runbook: NodeCPUHigh
Alert: NodeCPUHigh (PrometheusRule, severity 🟡 warning at 80%, 🔴 critical at 95%)
Symptom: node `<node>` is saturating its CPU (> 80% average load over 5 min). Pods start getting throttled.
Diagnostic
1. Identify the loaded node
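A quick way to spot the hot node, assuming metrics-server is installed in the cluster:

```shell
# Requires metrics-server; sorts nodes by current CPU usage
kubectl top nodes --sort-by=cpu
```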
2. Top pod consumers on that node
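One way to narrow it down (a sketch; `kubectl top pods` has no node filter, so list the node's pods first, then cross-reference with the cluster-wide top):

```shell
# Pods scheduled on the node
kubectl get pods -A --field-selector spec.nodeName=<node>
# Cluster-wide consumers, sorted by CPU
kubectl top pods -A --sort-by=cpu | head -15
```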
3. Dashboard "Node CPU"
PromQL query:
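The query behind this panel is presumably the standard node-exporter CPU expression (an assumption, not copied from the dashboard):

```promql
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
```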
4. Check throttling
```shell
kubectl exec -n akko <pod> -- cat /sys/fs/cgroup/cpu.stat 2>/dev/null | grep throttled
# If nr_throttled > 0 → the container is being CPU-throttled by its limit
# (on cgroup v1 nodes the same counters live under /sys/fs/cgroup/cpu/cpu.stat)
```
Frequent causes + fixes
| Cause | Typical for | Fix |
|---|---|---|
| Heavy Trino query | Coordinator at 100% CPU for 10+ min | Inspect `system.runtime.queries`, kill the query; runbook trino-slow-queries.md |
| Spark driver OOM-prevention | GC thrashing | Bump driver memory or reduce executor count |
| Airflow scheduler loop | Scheduler CPU > 80% constantly | Bump parallelism, shard DAGs |
| Postgres VACUUM FULL | PG stuck at 100% CPU | Wait for the vacuum to finish or pg_cancel_backend() |
| Undersized node | Consistently >70% over 24h | Add a node or bump the VM size |
| Mining/crypto container | Unknown process at 100% CPU | Audit seccomp + NetworkPolicy egress |
Emergency fix (< 5 min)
```shell
# 1. Identify the PID consuming CPU
ssh root@<node>
top -b -n1 | head -20

# 2. If it's Trino: find and kill the query
kubectl exec -n akko deploy/akko-trino -- \
  trino --execute="SELECT query_id, state, query FROM system.runtime.queries WHERE state='RUNNING' ORDER BY created DESC LIMIT 5"

# Via HTTP with an admin token:
curl -X DELETE -H "Authorization: Bearer $TRINO_ADMIN_TOKEN" \
  https://trino.$AKKO_DOMAIN/v1/query/<query_id>

# 3. If the pod can be rescheduled: drain the node
kubectl drain <node> --ignore-daemonsets --delete-emptydir-data --grace-period=30
```
Long-term fix (R02)
Option A — bump resource limits
```yaml
# helm/examples/values-dev.yaml (Trino example)
trino:
  coordinator:
    resources:
      limits:
        cpu: 2        # was 1
        memory: 4Gi
  worker:
    resources:
      limits:
        cpu: 4        # was 2
```
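Rolling the change out is a standard Helm upgrade; the release name and chart path below are assumptions, adjust them to the actual deployment:

```shell
# Hypothetical release/chart names; adjust to your setup
helm upgrade akko ./helm -n akko -f helm/examples/values-dev.yaml
# Wait for the coordinator to come back before rerunning queries
kubectl rollout status -n akko deploy/akko-trino
```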
Option B — add a node
- Netcup: `helm upgrade` with a new node pool
- EKS: `eksctl scale nodegroup --nodes=<N+1>`
- k3d dev: `k3d node create`
Option C — HPA (Horizontal Pod Autoscaler)
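A minimal sketch of what such an HPA could look like for the Trino workers; the resource names, replica counts, and target utilization are assumptions, not values from this repo:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: akko-trino-worker   # hypothetical name
  namespace: akko
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: akko-trino-worker # hypothetical target
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale out before the 80% alert fires
```

Caveat: scaling Trino workers down mid-query can fail running queries, so HPA is a better fit for stateless services; for Trino, prefer scale-out only or a generous scale-down stabilization window.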
Prevention
- Dashboard: Dashboards / "Node resources" with CPU, RAM, and disk per node
- Early-warning PrometheusRule at 70% for 10 min
- Quarterly capacity-planning review
- Throttling alerts on `container_cpu_cfs_throttled_periods_total`
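The early-warning rule could look like the following sketch; the resource name, group name, and labels are assumptions, and the expression mirrors the standard node-exporter CPU query:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: node-cpu-early-warning   # hypothetical name
spec:
  groups:
    - name: node-cpu
      rules:
        - alert: NodeCPUEarlyWarning
          expr: |
            100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 70
          for: 10m
          labels:
            severity: info
          annotations:
            summary: "Node {{ $labels.instance }} CPU above 70% for 10 min"
```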
Lessons learned
- Trino is the main CPU driver; this is expected. Don't panic at 80% if the query is legitimate. Do worry about a constant 100% lasting more than 30 min.
Useful links
- Trino slow queries
- Node disk full
- Dashboard: Dashboards / "AKKO — Kubernetes overview"