Pluggable layers (Lego pattern)
AKKO is a platform of layered contracts, not a stack of specific tools. Every layer that the rest of the platform consumes through a canonical Kubernetes resource (Secret, ConfigMap, Service) is pluggable: AKKO ships one default implementation, and the operator can swap it for the client's existing infrastructure with a single Helm value flip, never a multi-week migration.
This page explains the pattern, lists the pluggable layers shipped today, and shows how to switch a layer to BYOS (Bring Your Own Storage / Identity / Observability) mode.
The principle
For every layer that has a stable industry contract (S3, OIDC, Prometheus remote-write, OTLP, OCI…), AKKO supports exactly two modes:
| Mode | What it means | When to pick it |
|---|---|---|
| default | The single implementation AKKO ships and tests. Apache 2.0 / MIT / BSD licence, demo-friendly footprint. | Most installs. Fastest to demo, easiest to validate. |
| external (BYOS) | No layer pod is instantiated. AKKO consumes whatever endpoint the operator wires through the canonical Secret. | Client already runs AWS S3 / Datadog / Grafana / Auth0 — keep their existing infrastructure, AKKO just talks to it. |
AKKO does not ship multiple parallel implementations of the same
contract. There is no "second sub-chart packaged for Ceph" or "second
sub-chart packaged for Loki" — those reach AKKO via the external
slot pointing at the customer's endpoint, and AKKO never pretends to
maintain them.
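In Helm values terms, a slot's two modes reduce to one toggle. A hedged sketch, using the `akko-storage.backend` and `akko-storage.external.*` keys shown in the swap scenarios below (the exact schema is defined by the sub-chart's values file):

```yaml
# default mode: AKKO deploys and wires its packaged implementation.
akko-storage:
  backend: default

# external (BYOS) mode: no layer pod is deployed; AKKO only fills the
# canonical Secret/ConfigMap with the endpoint you provide.
#
# akko-storage:
#   backend: external
#   external:
#     endpoint: https://s3.eu-west-3.amazonaws.com   # any S3 v4 endpoint
#     region: eu-west-3
#     bucket: acme-prod-akko
#     accessKey: <key>       # placeholder
#     secretKey: <secret>    # placeholder
```

Consumers never see which mode is active; they read the same canonical Secret either way.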
Pluggable layers shipped today
| Layer | Contract | Default packaged | License | Foundation | BYOS targets |
|---|---|---|---|---|---|
| Object storage | S3 v4 | SeaweedFS | Apache 2.0 | Single-vendor (active) | AWS S3 / GCS / Azure Blob / MinIO Enterprise / Ceph RGW |
| Log aggregation | Loki API / OTLP | VictoriaLogs | Apache 2.0 | Single-vendor (active) | Grafana Cloud Loki / Datadog / Splunk / Elastic / OpenSearch |
| Dashboards | PromQL + JSON dashboards | Perses | Apache 2.0 | CNCF Sandbox | Grafana / Grafana Cloud / Datadog / Dynatrace |
| Log shipper | OTLP / Loki API | Fluent Bit | Apache 2.0 | CNCF Graduated | OTel Collector / Vector / operator-managed |
All four are bundled in two Lego sub-charts:
- `akko-storage` — exposes the S3 contract via Secret `akko-s3` + ConfigMap `akko-s3-config`.
- `akko-observability` — exposes the logs + dashboards contracts via Secrets `akko-logs`, `akko-dashboards` and ConfigMaps `akko-logs-config`, `akko-dashboards-config`.
Every other AKKO chart that needs storage or observability reads from
these canonical resources — never from a hardcoded akko-minio:9000
or akko-grafana reference.
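Concretely, a consumer picks up its endpoint and credentials from the canonical Secret instead of a hardcoded service name. An illustrative sketch (the container name and environment variable names are hypothetical; only the Secret name `akko-s3` and its keys come from the contract):

```yaml
apiVersion: apps/v1
kind: Deployment
# ...metadata and selector elided...
spec:
  template:
    spec:
      containers:
        - name: example-consumer          # hypothetical consumer service
          image: example/consumer:latest  # placeholder image
          env:
            - name: S3_ENDPOINT
              valueFrom:
                secretKeyRef:
                  name: akko-s3           # canonical Secret, fixed across releases
                  key: endpoint
            - name: S3_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: akko-s3
                  key: accessKey
            - name: S3_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: akko-s3
                  key: secretKey
```

Swapping the storage backend then only changes the Secret's contents, not any consumer manifest.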
How consumers see it
```
Secret/akko-s3                   → endpoint, region, bucket, accessKey, secretKey
ConfigMap/akko-s3-config         → endpoint, region, bucket, backend (informational)
Secret/akko-logs                 → endpoint, token, backend
ConfigMap/akko-logs-config       → endpoint, backend, shipper (informational)
ConfigMap/akko-dashboards-config → url, backend, label
```
These names are fixed across releases. Polaris, Trino, Spark, MLflow,
akko-rag, Airflow, akko-aden, and the cockpit nginx all read the same
akko-s3 Secret. No service ever writes a hardcoded reference to
"minio" or "grafana".
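Rendered, the canonical storage Secret is a plain Opaque Secret carrying exactly the keys listed above. A sketch with placeholder values (the in-cluster service name and port shown here are assumptions for illustration, not the chart's actual defaults):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: akko-s3        # canonical name, fixed across releases
  namespace: akko
type: Opaque
stringData:
  endpoint: http://akko-storage:8333   # illustrative default-mode endpoint
  region: us-east-1                    # illustrative
  bucket: akko                         # illustrative
  accessKey: <generated>               # placeholder
  secretKey: <generated>               # placeholder
```

In external mode the same Secret simply carries the operator-supplied endpoint and credentials instead.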
Four swap scenarios
Scenario A — Default install
```shell
helm install akko helm/akko/ -n akko --create-namespace \
  -f helm/examples/values-dev.yaml \
  -f helm/examples/values-netcup.yaml
```
You get SeaweedFS + VictoriaLogs + Perses + Fluent Bit, all wired, all working, ~100 MB RAM extra over the legacy MinIO/Loki/Grafana stack.
Scenario B — Bring your own AWS S3 (storage layer)
```shell
helm upgrade akko helm/akko/ -n akko \
  --set akko-storage.backend=external \
  --set akko-storage.external.endpoint=https://s3.eu-west-3.amazonaws.com \
  --set akko-storage.external.region=eu-west-3 \
  --set akko-storage.external.bucket=acme-prod-akko \
  --set akko-storage.external.accessKey=$AKKO_AWS_KEY \
  --set akko-storage.external.secretKey=$AKKO_AWS_SECRET
```
AKKO stops deploying SeaweedFS pods. Polaris/Trino/Spark/MLflow now write Iceberg files, MLflow artifacts and demo data directly to your AWS bucket. No data migration script required — just point the endpoint at the S3 you already use.
The same pattern works for GCS (https://storage.googleapis.com),
Azure Blob (S3 facade), MinIO Enterprise, Ceph RGW, or any other S3
v4 endpoint.
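The same flags can be kept in a values file instead of repeating `--set` on every upgrade. A sketch using the keys from the command above (the file name is illustrative; pass it with `-f`):

```yaml
# values-byos-storage.yaml — illustrative file name
akko-storage:
  backend: external
  external:
    endpoint: https://s3.eu-west-3.amazonaws.com
    region: eu-west-3
    bucket: acme-prod-akko
    # In practice, prefer injecting credentials from CI secrets or an
    # external secrets operator rather than committing them to a file.
    accessKey: <key>       # placeholder
    secretKey: <secret>    # placeholder
```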
Scenario C — Bring your own Grafana (dashboards layer)
```shell
helm upgrade akko helm/akko/ -n akko \
  --set akko-observability.dashboards.backend=external \
  --set akko-observability.dashboards.external.url=https://grafana.acme.internal \
  --set akko-observability.dashboards.external.label="Acme Grafana"
```
AKKO stops deploying Perses. The cockpit's Dashboards page becomes a
deep-link card pointing at your Grafana (no iframe — CSP would fight
us). AKKO ships dashboard JSON definitions you can import in your
existing Grafana under helm/akko/charts/akko-observability/dashboards/.
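Instead of importing each JSON file by hand, a Grafana instance can load that directory through its standard file-based dashboard provisioning. A sketch, assuming you copy the shipped JSON files to a path Grafana can read (the path and provider name are illustrative):

```yaml
# /etc/grafana/provisioning/dashboards/akko.yaml — illustrative location
apiVersion: 1
providers:
  - name: akko                      # illustrative provider name
    type: file
    folder: AKKO                    # target folder in the Grafana UI
    options:
      # Directory holding the dashboard JSON copied from
      # helm/akko/charts/akko-observability/dashboards/
      path: /var/lib/grafana/dashboards/akko
```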
Scenario D — Ship logs to Datadog (logs layer)
```shell
helm upgrade akko helm/akko/ -n akko \
  --set akko-observability.logs.backend=external \
  --set akko-observability.logs.external.endpoint=https://http-intake.logs.datadoghq.eu/v1/input \
  --set akko-observability.logs.external.token=$DD_API_KEY
```
VictoriaLogs is gone. The Fluent Bit DaemonSet stays — it now ships to Datadog. The cockpit's Logs page attempts the same query path (VictoriaLogs is Loki-API compatible and so is Datadog Logs Search), falling back gracefully if a query syntax feature isn't supported.
You can mix these freely: keep SeaweedFS for storage but ship logs to Datadog. Keep VictoriaLogs for logs but use your own Grafana for dashboards. Each slot is independent.
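A mixed configuration is just the union of the per-slot values. A sketch combining the keys from Scenarios C and D while leaving storage in default mode:

```yaml
# Keep the packaged storage layer (SeaweedFS).
akko-storage:
  backend: default

akko-observability:
  # Ship logs to Datadog (Scenario D keys).
  logs:
    backend: external
    external:
      endpoint: https://http-intake.logs.datadoghq.eu/v1/input
      token: <DD_API_KEY>    # placeholder
  # Use an existing Grafana for dashboards (Scenario C keys).
  dashboards:
    backend: external
    external:
      url: https://grafana.acme.internal
      label: "Acme Grafana"
```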
Why not ship Ceph / Grafana / Loki as packaged options?
Because that's two stacks to maintain instead of one. The cost of "AKKO supports Ceph natively" is:
- a second sub-chart with its own values, templates, NetworkPolicies
- a second test matrix to keep green
- a second migration path the next time the upstream chart bumps
- a second documentation page to keep in sync with the codebase
- twice the bug surface in the AKKO release notes
The cost of "AKKO supports your existing Ceph through external" is:
- you write `external.endpoint=https://your-ceph-rgw.acme.internal`
- AKKO talks to it like it talks to AWS S3.
The Lego pattern is the second answer applied uniformly. AKKO bets on one default per layer, validates it deeply, and lets the modularity layer carry the polymorphism the rest of the way.
What's next
The same pattern is being applied to other pluggable contracts:
| Layer | Status | Default | BYOS |
|---|---|---|---|
| Identity / SSO | Sprint 44 | Keycloak | Auth0 / Okta / Entra ID / AWS Cognito |
| Directory | Sprint 45 | LLDAP | Existing Active Directory / OpenLDAP |
| Container registry | Already shipped | Harbor | ECR / GAR / ACR / Quay |
| CI/CD | Sprint 46 | Woodpecker | GitHub Actions / GitLab CI |
| LLM gateway | Sprint 47 | LiteLLM | Bedrock / Vertex AI / Azure OpenAI |
| Catalog | Sprint 48 | OpenMetadata | DataHub / Apache Atlas |
See feedback_akko_layers_not_tools.md in the agent memory for the
durable rule, and project_minio_replacement.md /
project_observability_replacement.md for the storage and
observability decisions specifically.