# ShinyProxy — Multi-Tenant Dashboard Runtime
ShinyProxy provides a multi-tenant runtime for Streamlit and R Shiny dashboards in AKKO. It spawns one isolated container per user per dashboard, ensuring that two users never share Python state. It integrates with ADEN to serve auto-generated dashboards and authenticates via Keycloak OIDC.
## Architecture

```
        Browser
           |
  +--------v-------+        +-------------------+
  |    Traefik     |        |  Keycloak (OIDC)  |
  |   (ingress)    |        +---------+---------+
  +--------+-------+                  |
           |                          |
  +--------v-------+        +---------v-------------+
  |   ShinyProxy   |<------>| PostgreSQL (sessions) |
  |    (:8080)     |        +-----------------------+
  +--------+-------+
           |
           |  spawns per (user x dashboard)
           |
     +-----+------------+
     |                  |
+----v-----------+ +----v-----------+
|   Streamlit    | |   Streamlit    |
|  Pod (user A)  | |  Pod (user B)  |
+----------------+ +----------------+
```
- ShinyProxy spawns isolated Kubernetes pods for each user session
- OIDC via Keycloak — SSO authentication with group-based access control
- PostgreSQL session store — enables HA across multiple ShinyProxy replicas
- App catalog syncer — watches the ADEN database and hot-reloads the dashboard catalog
## URLs
| Mode | URL |
|---|---|
| Kubernetes (k3d) | https://reports.<domain> |
## Configuration (Helm values)

```yaml
akko-shinyproxy:
  enabled: true
  server:
    image:
      repository: openanalytics/shinyproxy
      tag: "3.4.0"
    replicas: 2  # HA via PostgreSQL session store
    oidc:
      enabled: true
      clientId: "akko-shinyproxy"
      scopes: "openid profile email groups"
    database:
      host: "akko-postgresql"
      name: "shinyproxy"
    kubernetes:
      heartbeatTimeoutSeconds: 300  # Kill idle dashboard pods after 5 min
      maxInstancesGlobal: 50
      maxInstancesPerUser: 5
      defaultResources:
        requests:
          cpu: 250m
          memory: 512Mi
        limits:
          cpu: 1
          memory: 2Gi
    ingress:
      host: "reports.akko.local"
  resources:
    requests:
      cpu: 200m
      memory: 1Gi
    limits:
      cpu: 1
      memory: 2Gi
```
## Health Check
ShinyProxy exposes a health endpoint on port 8080:
```yaml
livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 30
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 20
  periodSeconds: 10
```
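To spot-check the endpoint manually, you can port-forward the deployment and query the actuator endpoint; `{"status":"UP"}` is the standard Spring Boot actuator healthy response. The deployment name matches the one used in the troubleshooting commands on this page.

```shell
# Forward ShinyProxy's HTTP port to localhost
kubectl port-forward -n akko deploy/akko-akko-shinyproxy 8080:8080 &

# A healthy instance answers with {"status":"UP"}
curl -s http://localhost:8080/actuator/health
```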
## RBAC (who can access)

- OIDC authentication via Keycloak — all users must log in through SSO
- Group-based access — `rolesClaim: groups` maps Keycloak groups to ShinyProxy roles
- Per-app access control — each dashboard in the catalog can restrict access to specific groups
- Keycloak client: `akko-shinyproxy` with scopes `openid profile email groups`
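Under the hood these settings land in ShinyProxy's `application.yml`. A minimal sketch of per-app access control, assuming an illustrative app id, image, and group names (the generated file in your deployment may differ):

```yaml
proxy:
  authentication: openid
  openid:
    roles-claim: groups            # Keycloak group names become ShinyProxy roles
  specs:
    - id: sales-dashboard          # illustrative app id
      container-image: registry.local/sales-dashboard:latest
      access-groups: [analysts, admins]  # only members of these groups may launch it
```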
## App Catalog Syncer
The syncer sidecar watches the ADEN dashboards table in PostgreSQL and writes the ShinyProxy app catalog ConfigMap. This enables ADEN-generated dashboards to appear in ShinyProxy without manual configuration.
| Setting | Default | Description |
|---|---|---|
| `syncer.intervalSeconds` | `30` | Polling interval for new dashboards |
| `syncer.adenDb` | `aden` | PostgreSQL database containing the dashboards table |
| `syncer.reportsPvc` | `akko-aden-reports` | PVC where ADEN writes dashboard files |
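For illustration, one synced row might materialize as a spec entry like the following in the catalog ConfigMap. The key names and runner image are assumptions; inspect the actual ConfigMap with `kubectl get configmap -n akko -o yaml` to confirm what your syncer writes.

```yaml
proxy:
  specs:
    - id: aden-churn-report                  # derived from a row in ADEN's dashboards table
      display-name: "Churn Report"
      container-image: registry.local/akko-dashboard-runner:latest  # assumed runner image
      container-volumes:
        - "akko-aden-reports:/reports"       # PVC from syncer.reportsPvc
```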
## Key Features
| Feature | Description |
|---|---|
| Multi-tenant isolation | One pod per user per dashboard — no shared state |
| OIDC SSO | Keycloak authentication with group-based access |
| Auto-catalog | ADEN dashboards appear automatically via the syncer |
| HA | Multiple replicas with PostgreSQL-backed sessions |
| Resource caps | Per-user and global limits on concurrent dashboard pods |
| Idle cleanup | Dashboard pods killed after 5 min inactivity |
## Resource Requirements
| Component | Minimum RAM | Recommended |
|---|---|---|
| ShinyProxy server | 1 Gi | 2 Gi |
| Each dashboard pod | 512 Mi | 2 Gi |
## Troubleshooting
### Dashboard Pod Fails to Spawn
Symptoms: User clicks a dashboard but ShinyProxy shows "Starting..." indefinitely.
Cause: Insufficient cluster resources, image pull failure, or the dashboard image is not in the registry.
Solution:
```shell
# Check ShinyProxy logs for spawn errors
kubectl logs -n akko deploy/akko-akko-shinyproxy --tail=100 | grep -i "spawn\|error\|fail"

# Check if the dashboard pod was created
kubectl get pods -n akko | grep sp-

# Check events for pull errors
kubectl get events -n akko --sort-by='.lastTimestamp' | grep -i "pull\|image"
```
### OIDC Login Loop
Symptoms: User is redirected to Keycloak repeatedly without being authenticated.
Cause: The OIDC issuer URL or client secret is misconfigured.
Solution:
```shell
# Verify OIDC configuration
kubectl get secret -n akko akko-shinyproxy-oidc -o yaml

# Check ShinyProxy logs for OIDC errors
kubectl logs -n akko deploy/akko-akko-shinyproxy --tail=50 | grep -i "oidc\|oauth\|token"

# Verify the Keycloak client exists
kubectl exec -n akko deploy/akko-keycloak -- /opt/keycloak/bin/kcadm.sh get clients -r akko --fields clientId | grep shinyproxy
```
### Syncer Not Updating Catalog
Symptoms: New ADEN dashboards do not appear in ShinyProxy.
Cause: The syncer pod is not running, or it cannot connect to PostgreSQL.
Solution:
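The checks below are a sketch; the syncer container name and the PostgreSQL exec target are assumptions based on the naming used elsewhere on this page.

```shell
# Check the syncer's logs (sidecar container name "syncer" is an assumption)
kubectl logs -n akko deploy/akko-akko-shinyproxy -c syncer --tail=50

# Confirm the catalog ConfigMap exists and was recently updated
kubectl get configmap -n akko | grep -i shinyproxy

# Verify the ADEN database is reachable and the dashboards table has rows
# (deployment name and psql user are assumptions)
kubectl exec -n akko deploy/akko-postgresql -- \
  psql -U postgres -d aden -c "SELECT count(*) FROM dashboards;"
```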