
Deploy on Kubernetes

Community-Maintained Beta

The Agenta Helm chart is community-maintained and currently in beta. If you encounter issues or have suggestions, please open a GitHub issue or reach out in our Slack community.

This guide walks you through deploying Agenta on Kubernetes using the Helm chart. By the end, you will have a working Agenta instance running in your cluster.

Database migrations run automatically as a post-install or post-upgrade hook.

Prerequisites

  • A running Kubernetes cluster (v1.24+)
  • kubectl configured to access your cluster
  • helm CLI (v3.10+) installed
  • An ingress controller if you want to expose Agenta with a public hostname

Quick Start

1. Clone the Repository

git clone --depth 1 https://github.com/Agenta-AI/agenta && cd agenta

2. Generate Secrets

Generate the required secret values:

AG_AUTH_KEY=$(openssl rand -hex 32)
AG_CRYPT_KEY=$(openssl rand -hex 32)
PG_PASS=$(openssl rand -hex 16)

::: warning

Save these values in a secure secret manager. You will need them for future upgrades. Avoid using export, as it exposes the variables to all child processes.

:::
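
If you want to keep the generated values out of your environment until install time, one option is to write them to an owner-only file (the file name agenta-secrets.env is an arbitrary choice for this sketch, not something the chart requires):

```shell
# Sketch: generate the keys and store them in an owner-only file
# instead of exporting them. A secret manager is still the preferred home.
umask 077   # files created below are readable by the owner only
AG_AUTH_KEY=$(openssl rand -hex 32)
AG_CRYPT_KEY=$(openssl rand -hex 32)
PG_PASS=$(openssl rand -hex 16)
printf 'AG_AUTH_KEY=%s\nAG_CRYPT_KEY=%s\nPG_PASS=%s\n' \
  "$AG_AUTH_KEY" "$AG_CRYPT_KEY" "$PG_PASS" > agenta-secrets.env
echo "auth key length: ${#AG_AUTH_KEY}"   # prints: auth key length: 64
```

When you need the values in a later session, source the file into the current shell only for the duration of the command, for example with set -a; . ./agenta-secrets.env; set +a.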

3. Install the Chart

For OSS:

helm install agenta hosting/helm/agenta-oss \
  --namespace agenta --create-namespace \
  --set global.agentaLicense=oss \
  --set secrets.agentaAuthKey=$AG_AUTH_KEY \
  --set secrets.agentaCryptKey=$AG_CRYPT_KEY \
  --set secrets.postgresPassword=$PG_PASS

::: info

The chart automatically wires secrets.postgresPassword to both the application pods and the Bitnami PostgreSQL subchart (via a shared Kubernetes Secret). You only need to set it once.

:::

::: warning Security note

The --set approach is convenient for testing but exposes secrets in shell history and in helm get values output. For production, use a values.yaml file with restricted permissions or secrets.existingSecret to reference a pre-existing Kubernetes Secret. See Secrets for details.

:::

4. Verify

# Watch pods start
kubectl -n agenta get pods -w

# Check the migration job completed
kubectl -n agenta get jobs

Once all pods are running, use port-forwarding for a quick local check:

kubectl port-forward svc/agenta-agenta-oss-web 3000:3000 -n agenta

Then open http://localhost:3000 in your browser.

If you want to expose Agenta through a public hostname, see Set Up Ingress and TLS.

::: info What gets deployed

The chart creates the following workloads inside your Kubernetes namespace:

  • Web frontend (Next.js)
  • API backend (FastAPI + Gunicorn)
  • Services backend (FastAPI + Gunicorn)
  • Worker (tracing) for OTLP trace ingestion
  • Worker (evaluations) for async evaluation jobs
  • Worker (webhooks) for webhook delivery
  • Worker (events) for async event processing
  • Cron for scheduled maintenance tasks
  • PostgreSQL (Bitnami subchart) with three databases
  • Redis Volatile for caching and pub/sub
  • Redis Durable for queues and persistent state
  • SuperTokens for authentication
  • Alembic migration job (post-install/post-upgrade hook)
  • Ingress resource for routing traffic to web, API, and services (when enabled)

:::

How-tos

Use a Values File

For production deployments, create a values.yaml file instead of passing --set flags.

::: warning

Never commit values.yaml to version control if it contains secrets. Add it to .gitignore and restrict file permissions (chmod 600 values.yaml). For a fully managed secret lifecycle, use secrets.existingSecret to reference a pre-existing Kubernetes Secret or integrate with an external secrets operator.

:::

Start from one of the example files:

  • hosting/helm/agenta-oss/values-oss.example.yaml
  • hosting/helm/agenta-oss/values-ee.example.yaml

Install with:

helm install agenta hosting/helm/agenta-oss \
  --namespace agenta --create-namespace \
  -f values.yaml

Deploy Enterprise Edition

Set global.agentaLicense: ee to deploy Enterprise Edition.

Enterprise images are private packages that will be shared with you. Create the namespace and image pull secret first:

kubectl create namespace agenta

kubectl create secret docker-registry ghcr-pull-secret \
  --docker-server=<ee-server> \
  --docker-username=<ee-username> \
  --docker-password=<ee-pat> \
  --namespace agenta

Then copy hosting/helm/agenta-oss/values-ee.example.yaml, replace the placeholder secrets and URLs, and install it:

helm install agenta hosting/helm/agenta-oss \
  --namespace agenta \
  -f values-ee.yaml

Reference

Configuration is done through Helm values. The full default values are in hosting/helm/agenta-oss/values.yaml.

Global Settings

| Value | Purpose | Default |
| --- | --- | --- |
| global.agentaLicense | Edition to deploy | oss |
| global.webUrl | Public web URL | http://localhost |
| global.apiUrl | Public API URL | http://localhost/api |
| global.servicesUrl | Public services URL | http://localhost/services |
| global.imagePullSecrets | Image pull secrets | [] |

Secrets

| Value | Purpose | Default |
| --- | --- | --- |
| secrets.existingSecret | Name of an existing Secret to use instead of the chart-managed one | "" |
| secrets.agentaAuthKey | Authorization key (required) | "" |
| secrets.agentaCryptKey | Encryption key (required) | "" |
| secrets.postgresPassword | PostgreSQL password (required) | "" |
| secrets.supertokensApiKey | SuperTokens API key (recommended for production) | "" |
| secrets.oauth | OAuth env vars injected into pods (key = env var name) | {} |
| secrets.llmProviders | LLM provider API keys injected into pods | {} |

To use an existing Kubernetes Secret instead of having the chart create one, set secrets.existingSecret to the name of your Secret. It must contain AGENTA_AUTH_KEY, AGENTA_CRYPT_KEY, and POSTGRES_PASSWORD.

If you enable secret-backed settings such as OAuth, LLM provider keys, SendGrid, Composio, New Relic, or Turnstile, add those keys to the same Secret too. When secrets.existingSecret is set, the chart does not create or update the Secret for you.
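
As a sketch, a pre-existing Secret for this setup might look as follows. The Secret name agenta-secrets and the placeholder values are assumptions for this example; only the key names are fixed by the chart:

```yaml
# Example Secret referenced via secrets.existingSecret=agenta-secrets.
apiVersion: v1
kind: Secret
metadata:
  name: agenta-secrets
  namespace: agenta
type: Opaque
stringData:
  AGENTA_AUTH_KEY: "replace-with-openssl-rand-hex-32"   # placeholder
  AGENTA_CRYPT_KEY: "replace-with-openssl-rand-hex-32"  # placeholder
  POSTGRES_PASSWORD: "replace-with-strong-password"     # placeholder
```

Apply it with kubectl apply -f before installing, then install with secrets.existingSecret set to agenta-secrets.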

::: caution

When secrets.supertokensApiKey is empty, the SuperTokens instance runs without authentication. Any pod that can reach the SuperTokens service can manage auth data. Set an API key for production deployments.

:::
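
A minimal values fragment for this might look as follows (generate the value yourself, for example with openssl rand -hex 32; the placeholder is not a real key):

```yaml
secrets:
  supertokensApiKey: "replace-with-a-long-random-string"  # placeholder
```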

Access Control

You can restrict who can sign up and create organizations.

::: info

Access control settings are available only in Enterprise Edition.

:::

| Value | Purpose | Default |
| --- | --- | --- |
| accessControl.allowedDomains | Only allow sign-ups from these email domains (comma-separated) | "" |
| accessControl.blockedDomains | Block sign-ups from these email domains (comma-separated) | "" |
| accessControl.blockedEmails | Block specific email addresses (comma-separated) | "" |
| accessControl.orgCreationAllowlist | Only these emails can create organizations (comma-separated) | "" |
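
For example, an EE values fragment restricting sign-ups might look like this (the domains and addresses are placeholders):

```yaml
accessControl:
  allowedDomains: "example.com,example.org"        # only these domains may sign up
  blockedEmails: "former-employee@example.com"     # placeholder address
  orgCreationAllowlist: "admin@example.com"        # placeholder address
```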

Email (SendGrid)

To enable transactional emails (invitations and password resets), configure SendGrid.

| Value | Purpose | Default |
| --- | --- | --- |
| email.sendgrid.apiKey | SendGrid API key | "" |
| email.sendgrid.fromAddress | Sender email address | "" |
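
A values fragment for this might look like the following (the key and address are placeholders):

```yaml
email:
  sendgrid:
    apiKey: "SG.replace-with-your-key"   # placeholder SendGrid key
    fromAddress: "noreply@example.com"   # placeholder sender address
```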

Configuring Images

| Value | Purpose | Default |
| --- | --- | --- |
| api.image.repository | API image | Derived from global.agentaLicense |
| api.image.tag | API image tag | .Chart.AppVersion |
| web.image.repository | Web image | Derived from global.agentaLicense |
| web.image.tag | Web image tag | .Chart.AppVersion |
| services.image.repository | Services image | Derived from global.agentaLicense |
| services.image.tag | Services image tag | .Chart.AppVersion |

Workers, cron, and Alembic jobs reuse the API image.

If you build your own images or mirror them to another registry, set the image repositories directly in your values file:

api:
  image:
    repository: registry.example.com/agenta-api-ee
    tag: v0.95.1

web:
  image:
    repository: registry.example.com/agenta-web-ee
    tag: v0.95.1

services:
  image:
    repository: registry.example.com/agenta-services-ee
    tag: v0.95.1

When these values are set, they override the edition-based defaults.

Component Toggles and Replicas

Each component (api, web, services, workerEvaluations, workerTracing, workerWebhooks, workerEvents, cron, supertokens) supports:

| Value | Purpose | Default |
| --- | --- | --- |
| <component>.enabled | Enable/disable the component | true |
| <component>.replicas | Number of replicas | 1 |
| <component>.resources | Resource requests/limits | {} |
| <component>.nodeSelector | Node selector | {} |
| <component>.tolerations | Tolerations | [] |
| <component>.affinity | Affinity rules | {} |
| <component>.env | Extra environment variables | {} |
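
For example, to scale the API and give it explicit resource requests, a values fragment might look like this (the replica count and resource numbers are illustrative, not tuned recommendations):

```yaml
api:
  replicas: 2
  resources:
    requests:
      cpu: 250m        # illustrative starting point
      memory: 512Mi
    limits:
      memory: 1Gi
```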

PostgreSQL (Bundled)

The chart includes Bitnami PostgreSQL as a subchart. It is enabled by default and creates three databases. The names depend on global.agentaLicense.

  • OSS: agenta_oss_core, agenta_oss_tracing, agenta_oss_supertokens
  • EE: agenta_ee_core, agenta_ee_tracing, agenta_ee_supertokens

::: caution

The bundled PostgreSQL init scripts run only when the database volume is first created. If you change global.agentaLicense later, the chart will not rename or recreate the databases automatically. For that case, create the new databases yourself or do a fresh install with a new PostgreSQL volume.

:::

| Value | Purpose | Default |
| --- | --- | --- |
| postgresql.enabled | Enable bundled PostgreSQL | true |
| postgresql.auth.username | Database user | agenta |
| postgresql.auth.password | Database password (must match secrets.postgresPassword) | "" |
| postgresql.primary.persistence.size | PVC size | 10Gi |

Redis

The chart deploys two Redis instances: volatile (caching/pub-sub) and durable (queues/persistent state).

| Value | Purpose | Default |
| --- | --- | --- |
| redisVolatile.enabled | Enable volatile Redis | true |
| redisVolatile.maxmemory | Max memory | 512mb |
| redisVolatile.password | Password (recommended for production) | "" |
| redisDurable.enabled | Enable durable Redis | true |
| redisDurable.maxmemory | Max memory | 512mb |
| redisDurable.password | Password (recommended for production) | "" |
| redisDurable.persistence.size | PVC size | 5Gi |

::: caution

By default both Redis instances run without authentication. In shared or multi-tenant clusters, set passwords for both instances or use Kubernetes NetworkPolicies to restrict access to the Agenta namespace.

:::
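
A values fragment enabling authentication on both instances might look like this (the values are placeholders; generate real passwords, for example with openssl rand -hex 16):

```yaml
redisVolatile:
  password: "replace-with-random-password"  # placeholder
redisDurable:
  password: "replace-with-random-password"  # placeholder
```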

Alembic (Database Migrations)

Migrations run as a Kubernetes Job with post-install,post-upgrade hooks. Post-hooks are used because PostgreSQL is deployed as a Bitnami subchart and is not available until after the main release installs.

| Value | Purpose | Default |
| --- | --- | --- |
| alembic.enabled | Enable migration job | true |
| alembic.activeDeadlineSeconds | Job timeout | 600 |
| alembic.backoffLimit | Retry count | 3 |
| alembic.ttlSecondsAfterFinished | Cleanup delay | 300 |
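
If migrations on a large database need more time than the defaults allow, you can raise the limits in your values file (the numbers here are illustrative choices, not recommendations):

```yaml
alembic:
  activeDeadlineSeconds: 1800   # allow up to 30 minutes instead of 10
  backoffLimit: 1               # fail fast instead of retrying three times
```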

Set Up Ingress and TLS

The chart creates an Ingress resource with three path rules:

  • /api routes to the API service
  • /services routes to the services backend
  • / routes to the web frontend

Use this section when you want a public hostname instead of port-forwarding.

Your ingress setup must route:

  • / to the web service
  • /api to the API service
  • /services to the services service

It must also strip the /api and /services prefixes before forwarding requests upstream.

Ingress Values

| Value | Purpose | Default |
| --- | --- | --- |
| ingress.enabled | Enable Ingress | true |
| ingress.className | Ingress class | traefik |
| ingress.host | Hostname | "" |
| ingress.tls | TLS configuration | [] |
| ingress.annotations | Ingress annotations | {} |
| ingress.paths.api.path | API path pattern | /api |
| ingress.paths.api.pathType | API path type | Prefix |
| ingress.paths.services.path | Services path pattern | /services |
| ingress.paths.services.pathType | Services path type | Prefix |
| ingress.paths.web.path | Web path pattern | / |
| ingress.paths.web.pathType | Web path type | Prefix |

::: info Ingress class name

The chart defaults to ingress.className: "traefik". If your cluster uses a different ingress controller, override this value to match. NGINX users must also override the ingress paths (see Path Prefix Stripping below).

You can check which ingress classes are available in your cluster with kubectl get ingressclass.

:::

Path Prefix Stripping

The API and services backends expect requests without the /api or /services prefix. Configure your ingress controller to strip these prefixes.

Traefik: Use a StripPrefix Middleware via extraObjects:

ingress:
  className: "traefik"
  host: "agenta.example.com"
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: agenta-strip-prefixes@kubernetescrd

extraObjects:
  - apiVersion: traefik.io/v1alpha1
    kind: Middleware
    metadata:
      name: strip-prefixes
      namespace: "{{ .Release.Namespace }}"
    spec:
      stripPrefix:
        prefixes:
          - /api
          - /services

NGINX Ingress Controller: Override the paths to use regex capture groups and add rewrite annotations:

ingress:
  className: "nginx"
  host: "agenta.example.com"
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  paths:
    api:
      path: /api/(.*)
      pathType: ImplementationSpecific
    services:
      path: /services/(.*)
      pathType: ImplementationSpecific
    web:
      path: /(.*)
      pathType: ImplementationSpecific

Enable TLS

To enable TLS, provide a TLS secret and update your global URLs to use https://:

global:
  webUrl: "https://agenta.example.com"
  apiUrl: "https://agenta.example.com/api"
  servicesUrl: "https://agenta.example.com/services"

ingress:
  host: "agenta.example.com"
  tls:
    - secretName: agenta-tls
      hosts:
        - agenta.example.com

If you use cert-manager, add the appropriate annotation:

ingress:
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"

Use External Services

You can disable any bundled infrastructure component and point to an external instance instead.

External PostgreSQL

If you deploy Enterprise Edition, replace agenta_oss_* in the examples below with agenta_ee_*.

postgresql:
  enabled: false
  databases:
    core: "agenta_oss_core"
    tracing: "agenta_oss_tracing"
    supertokens: "agenta_oss_supertokens"
  external:
    host: "your-pg-host.example.com"
    port: 5432
    username: "agenta"
    sslmode: "require"

sslmode is appended to auto-constructed connection URIs only (as ?ssl= for asyncpg and ?sslmode= for the sync driver). It defaults to "prefer". Set it to "require" or "verify-full" for managed databases (e.g., AWS RDS, Cloud SQL). When using full URI overrides (uriCore, uriTracing, uriSupertokens), include the SSL parameter directly in the URI. In that case, sslmode is ignored.

Create the three databases and grant permissions before installing:

CREATE ROLE agenta WITH LOGIN PASSWORD 'your-password';

CREATE DATABASE agenta_oss_core OWNER agenta;
CREATE DATABASE agenta_oss_tracing OWNER agenta;
CREATE DATABASE agenta_oss_supertokens OWNER agenta;

-- Grants needed for schema migrations (CREATE, ALTER) and application queries.
-- You can replace ALL with specific privileges if your security policy requires it
-- (e.g., SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER for a narrower scope).
\c agenta_oss_core
GRANT ALL ON SCHEMA public TO agenta;

\c agenta_oss_tracing
GRANT ALL ON SCHEMA public TO agenta;

\c agenta_oss_supertokens
GRANT ALL ON SCHEMA public TO agenta;

You can also provide full URI overrides:

postgresql:
  enabled: false
  external:
    uriCore: "postgresql+asyncpg://user:pass@host:5432/agenta_oss_core"
    uriTracing: "postgresql+asyncpg://user:pass@host:5432/agenta_oss_tracing"
    uriSupertokens: "postgresql://user:pass@host:5432/agenta_oss_supertokens"

::: warning

URI overrides contain credentials inline. Prefer using secrets.existingSecret or an external secrets operator to avoid storing passwords in values.yaml.

:::

External Redis

redisVolatile:
  enabled: false
  external:
    uri: "redis://your-redis-host:6379/0"

redisDurable:
  enabled: false
  external:
    uri: "redis://your-redis-host:6379/1"

External SuperTokens

supertokens:
  enabled: false
  external:
    uri: "http://your-supertokens-host:3567"

Add LLM Provider Keys and OAuth

Pass LLM API keys and OAuth credentials through the secrets section. These are stored in the Kubernetes Secret and injected as environment variables into the application pods. They serve as defaults for users who do not provide their own API keys.

secrets:
  llmProviders:
    OPENAI_API_KEY: "sk-..."
    ANTHROPIC_API_KEY: "sk-ant-..."
  oauth:
    GOOGLE_OAUTH_CLIENT_ID: "..."
    GOOGLE_OAUTH_CLIENT_SECRET: "..."

Upgrade

To upgrade to a newer version:

helm upgrade agenta hosting/helm/agenta-oss \
  --namespace agenta \
  -f values.yaml

The Alembic migration job runs automatically as a post-upgrade hook. Check its status:

kubectl -n agenta get jobs -l app.kubernetes.io/component=alembic
kubectl -n agenta logs job/agenta-agenta-oss-alembic

To pin to a specific version:

api:
  image:
    tag: "v0.86.8"
web:
  image:
    tag: "v0.86.8"
services:
  image:
    tag: "v0.86.8"

Uninstall

helm uninstall agenta --namespace agenta

::: warning

This does not delete PersistentVolumeClaims. To fully remove data, delete the PVCs manually:

kubectl -n agenta delete pvc -l app.kubernetes.io/instance=agenta

:::

Troubleshooting

Pods not starting

Check pod status and events:

kubectl -n agenta get pods
kubectl -n agenta describe pod <pod-name>

Common causes:

  • Missing secrets: ensure secrets.agentaAuthKey, secrets.agentaCryptKey, and secrets.postgresPassword are set
  • Image pull errors: verify image names and that imagePullSecrets are configured if using a private registry

Migration job fails

Check migration logs:

kubectl -n agenta logs job/agenta-agenta-oss-alembic

Common causes:

  • PostgreSQL not ready: the job includes an init container that waits for PostgreSQL, but external databases may have network issues
  • Wrong credentials: verify secrets.postgresPassword matches the database password

Ingress not working

Verify the Ingress resource:

kubectl -n agenta get ingress
kubectl -n agenta describe ingress agenta-agenta-oss

Common causes:

  • Missing ingress controller: ensure Traefik or NGINX Ingress Controller is installed
  • Missing path prefix stripping: the API and services backends will return 404 if /api and /services prefixes are not stripped (see Path Prefix Stripping)
  • Wrong ingress.className: must match your ingress controller's class name

Services can't connect to each other

Check logs for connection errors:

kubectl -n agenta logs -l app.kubernetes.io/component=api --prefix

Common causes:

  • Service URLs incorrect: check global.webUrl, global.apiUrl, and global.servicesUrl
  • Redis not ready: check Redis pod status

The API URL must include /api

The API URL (global.apiUrl) must end with /api. This is a current limitation. Authentication endpoints expect this prefix, and requests will fail with 404 if it is missing.

Correct:

global:
  apiUrl: "https://agenta.example.com/api"

Incorrect:

global:
  apiUrl: "https://api.agenta.example.com" # missing /api suffix

The services URL (global.servicesUrl) does not have this constraint.

All services must share the same origin

Web, API, and services must be served from the same origin (same host and port). The API backend only allows cross-origin requests from a fixed set of origins, so placing the frontend and API on different ports or subdomains will cause the browser to block requests.

Route all three through a single public origin using path-based routing:

https://agenta.example.com          -> web
https://agenta.example.com/api      -> API (with prefix stripping)
https://agenta.example.com/services -> services (with prefix stripping)

You can do this with the built-in Ingress resource or with an external reverse proxy (Traefik, NGINX, Caddy, etc.).

Environment changes require a pod restart

If you change environment variables outside of Helm (for example, by editing the deployment directly), restart the web pods for the new values to take effect:

kubectl -n agenta rollout restart deployment/<release>-agenta-oss-web

The web frontend generates its runtime configuration file (__env.js) once at pod startup. New environment values only appear after the pod restarts.

Note that helm upgrade handles this automatically by updating the deployment template, which triggers a rollout. A manual restart is only needed when you change variables without going through Helm.

Also note that any active kubectl port-forward sessions will break after a rollout because they are bound to the old pod. Restart your port-forward commands after each rollout.

Getting Help

If you run into issues: