Installation¶
Get AKKO running on your local machine with two commands, using Helm and k3d.
Prerequisites¶
Before you begin, make sure the following tools are installed:
| Tool | Minimum Version | Purpose |
|---|---|---|
| Docker Desktop | 4.30+ | Container runtime (or containerd on Linux) |
| kubectl | 1.28+ | Kubernetes CLI |
| Helm | 3.14+ | Package manager for Kubernetes |
| k3d | 5.6+ | Lightweight local Kubernetes (wraps k3s) |
| Git | 2.30+ | Clone the repository |
| OS | -- | macOS (ARM or Intel) or Linux |
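Once the tools are installed, you can sanity-check them against the minimum versions above. A minimal sketch using `sort -V` (GNU coreutils; on macOS install `coreutils` via Homebrew or compare versions by hand):

```shell
# version_ge A B: succeeds when dotted version A >= B
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: check an installed helm against the 3.14 minimum
version_ge 3.14.2 3.14 && echo "helm ok"
# Live check: version_ge "$(helm version --template '{{.Version}}' | sed 's/^v//')" 3.14
```

The same function works for kubectl, k3d, and git once you extract their version strings.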
Install everything on macOS
On Linux, follow the official install instructions for each tool, or use your distribution's package manager.
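On macOS, a plausible one-shot install via Homebrew (the formula and cask names below follow current Homebrew conventions and are not taken from this page):

```shell
brew install --cask docker        # Docker Desktop
brew install kubectl helm k3d git
```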
System Requirements¶
| Resource | Minimum | Recommended |
|---|---|---|
| RAM | 8 GB | 16 GB (required for governance profile) |
| Docker memory | 8 GB | 16 GB |
| Disk | 20 GB | 40 GB (images + persistent volumes) |
| CPU | 4 cores | 8 cores |
Docker Desktop Memory
The default Docker Desktop memory allocation (2 GB) is not enough. Go to Docker Desktop > Settings > Resources and set memory to at least 8 GB for core services, or 16 GB if you plan to enable the governance profile (OpenMetadata + OpenSearch).
Quick Install¶
Two commands to go from zero to a fully running sovereign analytics stack:
1. Clone the repository¶
2. Deploy AKKO¶
The deploy script creates the k3d cluster, builds all 12 custom images, imports them, generates secrets, and runs the full Helm install with all required values files (values-dev.yaml, values-domain.yaml, values-dev-secrets.yaml) and the Keycloak realm.
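Sketched end to end (the deploy script's exact name is not shown on this page, so the `helm/scripts/deploy.sh` path below is an assumption; the clone URL matches the production section further down):

```shell
# 1. Clone the repository
git clone https://github.com/AKKO-p/AKKO.git akko && cd akko

# 2. Deploy AKKO (hypothetical script name; the docs only call it "the deploy script")
bash helm/scripts/deploy.sh
```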
First deployment takes time
The very first run pulls ~15 GB of container images and builds 12 custom images. Subsequent deployments are much faster since images are cached. Expect 10--15 minutes on the first run depending on your internet connection.
Verify the Installation¶
Check pod status¶
All pods should reach Running or Completed status within 2--3 minutes. You should see 25+ pods.
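A quick way to spot pods that are not yet healthy is to filter the status column. The snippet below runs against sample output so it works anywhere; the live command (namespace `akko` is an assumption) is in the trailing comment:

```shell
# Demo input in the shape of `kubectl get pods --no-headers`
sample='akko-postgresql-0         1/1  Running    0  2m
akko-superset-7f9c5d-x2x  0/1  Pending    0  2m
akko-init-realm-jq8jq     0/1  Completed  0  2m'

# Print only pods that are neither Running nor Completed
printf '%s\n' "$sample" | awk '$3 != "Running" && $3 != "Completed" {print $1}'
# Live: kubectl get pods -n akko --no-headers | awk '$3!="Running" && $3!="Completed" {print $1}'
```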
Open the cockpit¶
Navigate to https://akko.local.
The cockpit is the central portal. It displays health status cards for every service. All cards should turn green within 1--2 minutes of startup.
Service URLs
All services are accessible at *.akko.local subdomains via Traefik ingress:
| Service | URL |
|---|---|
| Cockpit | https://akko.local |
| JupyterHub | https://lab.akko.local |
| Superset | https://bi.akko.local |
| Airflow | https://orchestrator.akko.local |
| Trino | https://federation.akko.local |
| Spark UI | https://compute.akko.local |
| Object storage console | https://storage.akko.local |
| Dashboards | https://metrics.akko.local |
| Prometheus | https://prometheus.akko.local |
| Ollama | https://ollama.akko.local |
| Polaris | https://polaris.akko.local |
| Keycloak | https://identity.akko.local |
| Traefik | https://traefik.akko.local |
| AKKO Docs | https://docs.akko.local |
| OpenMetadata | https://catalog.akko.local (governance) |
Production Deployment (from Harbor Registry)¶
For production environments, AKKO images and the umbrella Helm chart are pre-built and stored in a Harbor registry. The deployment server never builds images — it only pulls from Harbor.
One-line install from the AKKO public Harbor
The chart is published as an OCI artifact at oci://harbor.akko-ai.com/akko-charts/akko. From any cluster:

```shell
helm install akko oci://harbor.akko-ai.com/akko-charts/akko \
  --version 2026.4.1 \
  --namespace akko --create-namespace \
  -f values-domain.yaml \
  -f values-dev-secrets.yaml
```
See Deploy AKKO from Harbor for the multi-cluster install pattern, and ADR-034 for the distribution rationale.
Harbor is deployed before and independently from AKKO
Harbor has its own Helm release, its own namespace (harbor), its own bundled PostgreSQL, and its own local-path PVCs. It does not depend on akko-postgresql or object storage. This separation is intentional: it guarantees that AKKO can be fully uninstalled, reinstalled, or migrated to another cluster without ever rebuilding images, because the registry survives every AKKO lifecycle operation.
See Harbor Registry for the full registry lifecycle (install, push/pull, backup, hardening, capacity planning).
Architecture¶
```text
CI / Build Server                          Production Server
┌────────────────┐                         ┌────────────────────┐
│ git clone akko │                         │ k3s / EKS / GKE    │
│ build-images.sh│──push──► Harbor ───────►│ helm install akko  │
│ helm package   │         (own DB,        │ (pulls from Harbor)│
└────────────────┘          own PVCs)      └────────────────────┘
```
Step 1: Build and push images (on build server)¶
```shell
git clone https://github.com/AKKO-p/AKKO.git akko && cd akko
AKKO_REGISTRY=harbor.example.com/akko AKKO_PARALLEL=4 bash helm/scripts/build-images.sh
```
This builds 12 custom images in parallel and pushes them to your Harbor registry.
Step 2: Configure the target cluster¶
On the production server, configure the container runtime to authenticate with Harbor:
k3s:

```yaml
# /etc/rancher/k3s/registries.yaml
mirrors:
  "harbor.example.com":
    endpoint:
      - "https://harbor.example.com"
configs:
  "harbor.example.com":
    auth:
      username: "robot$akko-pull"
      password: "<token>"
```

Then restart k3s: `systemctl restart k3s`.
EKS/GKE/AKS: Use imagePullSecrets in the Helm values instead.
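On managed clusters, the pull-secret wiring might look like the following sketch. The values key is an assumption (the chart documents `global.image.registry`, so a `global`-level pull-secret list is plausible); check the chart's values.yaml for the real key:

```yaml
# Secret created beforehand with:
#   kubectl create secret docker-registry harbor-pull \
#     --docker-server=harbor.example.com \
#     --docker-username='robot$akko-pull' \
#     --docker-password='<token>' -n akko
global:
  imagePullSecrets:
    - name: harbor-pull
```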
Step 3: Generate domain and secrets¶
```shell
bash helm/scripts/generate-domain-values.sh akko.production.com
bash helm/scripts/generate-dev-secrets.sh --random
```
Step 4: Deploy¶
```shell
helm install akko helm/akko/ -n akko --create-namespace \
  --set global.image.registry=harbor.example.com/akko \
  -f helm/examples/values-dev.yaml \
  -f helm/examples/values-domain.yaml \
  -f helm/examples/values-dev-secrets.yaml \
  --set-file akko-keycloak.realm.data=helm/examples/realm-domain.json \
  --wait --timeout 20m
```
Step 5: Seed demo data (optional)¶
```shell
bash helm/scripts/demo/banking/seed.sh
bash helm/scripts/demo/banking/verify.sh   # Expected: 30+ PASS, 0 FAIL
```
Values Architecture (detailed)¶
AKKO uses layered Helm values that adapt to any infrastructure. Each layer overrides the previous:
```text
values.yaml              ← Production defaults (safe, no passwords)
  ↑ overridden by
values-dev.yaml          ← App config (RBAC, DAGs, profiles)
  ↑ overridden by
values-domain.yaml       ← Domain URLs (auto-generated, never edit manually)
  ↑ overridden by
values-dev-secrets.yaml  ← Passwords (gitignored, never commit)
```
| File | What it contains | Who generates it | When to edit |
|---|---|---|---|
| `values.yaml` | Secure defaults: empty passwords (fail-fast), `akko.local` domain, resource limits, health probes | Chart maintainers | Never |
| `values-dev.yaml` | RBAC roles, OPA policies, Airflow DAGs, JupyterHub profiles, monitoring dashboards | Shared across all environments | Rarely, when adding features |
| `values-domain.yaml` | ALL domain-dependent URLs: OAuth callbacks, ingress hosts, Keycloak endpoints | `generate-domain-values.sh <domain>` | Never edit, always regenerate |
| `values-dev-secrets.yaml` | ALL passwords: PostgreSQL, object storage, Keycloak admin, Dashboards, Airflow, LiteLLM | `generate-dev-secrets.sh` (deterministic) or `--random` (production) | Never commit to git |
| `values-enterprise.yaml` | External LDAP/AD federation, governance profiles, resource quotas | Manual for enterprise clients | Only for enterprise with external IdP |
Per-infrastructure examples:

| Environment | Registry (`global.image.registry`) | Domain | Secrets method |
|---|---|---|---|
| k3d local | `localhost:5050` | `akko.local` | `generate-dev-secrets.sh` (deterministic) |
| k3s Netcup | `localhost:30500/akko` (Harbor) | `159.195.77.208.nip.io` | `generate-dev-secrets.sh --random` |
| EKS | `123456.dkr.ecr.eu-west-1.amazonaws.com/akko` | `akko.company.com` | AWS Secrets Manager + External Secrets Operator |
| Air-gapped | `harbor.internal/akko` | `akko.internal` | HashiCorp Vault |
| OpenShift | `image-registry.openshift-image-registry.svc/akko` | `akko.apps.ocp.company.com` | OpenShift Secrets |
Custom Domain Configuration¶
By default AKKO uses *.akko.local. To use a custom domain, run the domain generation script (generate-domain-values.sh) with your domain. It regenerates values-domain.yaml and realm-domain.json with all URLs updated; after redeploying, every ingress rule, Keycloak redirect URI, and service URL points at the new domain, e.g. *.mycompany.internal.
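A sketch of the regenerate-and-redeploy cycle, using mycompany.internal as the example domain and reusing the flags from the production install above:

```shell
bash helm/scripts/generate-domain-values.sh mycompany.internal

helm upgrade akko helm/akko/ -n akko \
  -f helm/examples/values-dev.yaml \
  -f helm/examples/values-domain.yaml \
  -f helm/examples/values-dev-secrets.yaml \
  --set-file akko-keycloak.realm.data=helm/examples/realm-domain.json
```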
DNS resolution
For local development, add entries to /etc/hosts:
```shell
echo '127.0.0.1 akko.local cockpit.akko.local identity.akko.local lab.akko.local bi.akko.local orchestrator.akko.local federation.akko.local compute.akko.local minio.akko.local polaris.akko.local grafana.akko.local prometheus.akko.local catalog.akko.local docs.akko.local experiments.akko.local ollama.akko.local traefik.akko.local' | sudo tee -a /etc/hosts
```
Alternatively, use a wildcard DNS tool like dnsmasq to resolve all *.akko.local subdomains.
TLS Certificates¶
Self-signed (default)¶
AKKO generates a self-signed wildcard certificate for *.akko.local during deployment. This is suitable for local development and air-gapped environments.
Bring your own certificate¶
To use your own TLS certificate, create a Kubernetes Secret and reference it in your values:
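A hypothetical sketch: the secret name and the exact values key are assumptions, so check the chart's values.yaml for the real key:

```yaml
# Secret created beforehand with:
#   kubectl create secret tls akko-tls --cert=tls.crt --key=tls.key -n akko
# Then referenced in your values file (key name is an assumption):
ingress:
  tls:
    secretName: akko-tls
```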
cert-manager (production)¶
For production deployments with automatic Let's Encrypt certificates, install cert-manager and configure a ClusterIssuer. See the Kubernetes Deployment guide for detailed instructions.
Safari and Self-Signed Certificates¶
Since AKKO uses self-signed TLS certificates for local development, Safari (and other browsers) will display a security warning on first visit.
Trust the certificate on macOS
To avoid repeated warnings in Safari:
- Export the certificate from the cluster
- Open Keychain Access
- Drag akko.crt into the System keychain
- Double-click the imported certificate
- Expand Trust and set When using this certificate to Always Trust
- Close and restart Safari
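The export in the first step can be sketched as follows; the TLS secret name `akko-tls` and the `akko` namespace are assumptions:

```shell
kubectl get secret akko-tls -n akko \
  -o jsonpath='{.data.tls\.crt}' | base64 -d > akko.crt
```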
In Chrome or Firefox, you can instead click "Advanced" and "Proceed" on each *.akko.local domain.
Governance Profile¶
To enable OpenMetadata (data catalog) and OpenSearch, upgrade with the governance flags:
```shell
helm upgrade akko helm/akko/ \
  -f helm/examples/values-dev.yaml \
  --set-file akko-keycloak.realm.data=helm/examples/realm-akko-k3d.json \
  --set openmetadata.enabled=true \
  --set akko-opensearch.enabled=true
```
Governance profile requires 16 GB+
OpenMetadata Server (~1 GB) + OpenSearch (~512 MB heap + native memory) will cause OOM restarts if Docker Desktop is configured with less than 16 GB of RAM.
Uninstall¶
Remove the AKKO release:
To also delete all persistent data (full reset):
To delete the k3d cluster entirely:
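The three operations above can be sketched as follows; the release name, namespace, and k3d cluster name (`akko`) are assumptions based on this guide:

```shell
helm uninstall akko -n akko        # remove the AKKO release
kubectl delete pvc --all -n akko   # full reset: also delete persistent data
k3d cluster delete akko            # delete the k3d cluster entirely
```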
Common Issues¶
Port conflicts on 80 or 443
k3d maps ports 80 and 443 from the cluster to your host. If another process (Apache, Nginx, another k3d cluster) is using these ports, cluster creation will fail. Check with:
Stop the conflicting process before running k3d-create.sh.
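A typical port check (lsof ships with macOS and most Linux distributions):

```shell
# Show processes listening on 80 and 443 before running k3d-create.sh
sudo lsof -nP -iTCP:80 -sTCP:LISTEN
sudo lsof -nP -iTCP:443 -sTCP:LISTEN
```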
Pods stuck in Pending or CrashLoopBackOff
Check pod events for details:
Common causes: insufficient memory (increase Docker Desktop allocation),
image pull errors (run build-images.sh again), or missing PVCs.
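The events check can be sketched as (namespace `akko` is an assumption):

```shell
kubectl describe pod <pod-name> -n akko              # events are listed at the bottom
kubectl get events -n akko --sort-by=.lastTimestamp  # recent events, newest last
```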
Docker memory errors or OOM kills
If services restart repeatedly, check Docker Desktop memory allocation. Spark workers and OpenMetadata are memory-hungry. Increase to 8 GB minimum (16 GB for the governance profile).
kubectl cannot connect to the cluster
Ensure your kubeconfig points to the k3d cluster:
If the cluster was deleted, recreate it with k3d-create.sh.
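A sketch of repointing kubectl; the cluster name `akko` is an assumption (list actual names with `k3d cluster list`):

```shell
k3d kubeconfig merge akko --kubeconfig-merge-default
kubectl config use-context k3d-akko
kubectl cluster-info   # should print the k3d API server address
```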
Next Steps¶
- Kubernetes Deployment -- Advanced Helm configuration, production tuning, and multi-node setups
- Architecture -- How services connect