Kubernetes Deployment Overview

Cardano Rosetta Java ships official Helm charts for Kubernetes deployment. The charts support both single-host K3s deployments and any managed Kubernetes cluster (EKS, GKE, AKS).

Architecture

The deployment consists of 4 Helm subcharts plus orchestration resources in the parent chart:

Parent chart: cardano-rosetta-java
├── cardano-node StatefulSet — Cardano blockchain node + socat sidecar
├── postgresql StatefulSet — PostgreSQL with blockchain-tuned configuration
├── yaci-indexer Deployment — Blockchain data indexer (Yaci Store)
└── rosetta-api Deployment — Rosetta HTTP API

Monitoring (Prometheus, Grafana) is not included — production clusters should bring their own monitoring solution.
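With the chart structure above in mind, a first install is a single Helm release of the parent chart. The repository URL, release name, namespace, and the password value key below are placeholders, not the project's published values — consult the Helm Values Reference for the real ones:

```shell
# Hypothetical repo URL and value names; substitute the project's actual chart location.
helm repo add cardano-rosetta-java https://example.org/helm-charts
helm repo update

# Installing the parent chart pulls in all four subcharts.
helm install rosetta cardano-rosetta-java/cardano-rosetta-java \
  --namespace rosetta --create-namespace
```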

Key Architectural Differences from Docker Compose

UNIX socket bridging via socat

Docker Compose shares the Cardano node's UNIX socket (/node/node.socket) via volume mounts. In Kubernetes, pods cannot share UNIX sockets. A socat sidecar inside the cardano-node pod forwards TCP connections on port 3002 to the UNIX socket. The yaci-indexer connects using the n2c-socat Spring profile.
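The sidecar's bridge can be pictured as a plain socat invocation. The socket path matches the one above; the exact flags the chart uses are an assumption:

```shell
# Inside the cardano-node pod, the socat sidecar runs something equivalent to this:
# listen on TCP port 3002 and relay each connection to the node's UNIX socket.
socat TCP-LISTEN:3002,fork,reuseaddr UNIX-CONNECT:/node/node.socket
```

Any pod in the cluster can then reach the node's node-to-client protocol at cardano-node:3002, which is exactly what the n2c-socat Spring profile configures the indexer to do.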

Startup ordering via init containers

Docker Compose depends_on chains are replaced by Kubernetes init containers:

cardano-node     init: mithril-download, then the node starts
postgresql       starts immediately (no dependency on the node; it is just a database)
yaci-indexer     init: wait-for-postgres   (pg_isready against postgresql)
                       wait-for-node-tcp   (nc cardano-node:3002)
                       copy-node-config    (copies configs from the node image)
rosetta-api      init: wait-for-postgres   (pg_isready)
                       wait-for-indexer    (/actuator/health on yaci-indexer)
                       copy-node-config    (copies configs from the node image)
index-applier    plain Job, runs automatically with the release
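As a sketch, the wait steps for the yaci-indexer map onto Kubernetes initContainers roughly like this. The images, service names, and ports are illustrative assumptions, not the chart's actual rendered values:

```yaml
# Illustrative initContainers for the yaci-indexer pod (not the chart's exact output).
initContainers:
  - name: wait-for-postgres
    image: postgres:16-alpine          # assumed image
    command: ["sh", "-c",
      "until pg_isready -h postgresql -p 5432; do sleep 2; done"]
  - name: wait-for-node-tcp
    image: busybox:1.36                # assumed image
    command: ["sh", "-c",
      "until nc -z cardano-node 3002; do sleep 2; done"]
```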

The cardano-node pod runs three containers: the node itself, a socat sidecar that bridges the UNIX socket to TCP port 3002, and cardano-submit-api, so READY shows 3/3 when fully up.
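A fully started node pod can be verified from the READY column. Pod names depend on the release name and namespace; rosetta is assumed here:

```shell
kubectl get pods -n rosetta
# Expect the cardano-node pod to eventually report READY 3/3
# (node + socat sidecar + cardano-submit-api).
```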

The index-applier is a plain Kubernetes Job that runs automatically with the release (default indexApplier.mode: automatic). It waits for the Rosetta API to become responsive, then builds optimised database indexes. This Job can take 6–18 hours on mainnet. It is cleaned up automatically after 24 hours via ttlSecondsAfterFinished.
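Since the Job can run for many hours on mainnet, operators will typically want to watch it. The namespace and Job name below are assumptions; list the Jobs first to find the real name:

```shell
# Find and follow the index-applier Job (names assumed; verify with the first command).
kubectl get jobs -n rosetta
kubectl logs -n rosetta -f job/index-applier
```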

note

The index-applier runs as a plain Job by default, so no --no-hooks flag is needed. Operators who prefer explicit, operator-triggered index building can switch to legacy hook mode with --set indexApplier.mode=hook. In hook mode, monitor the Job independently and never use --wait-for-jobs.
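Switching to legacy hook mode is a values override on the existing release. The release and chart names are placeholders; the indexApplier.mode value is the one documented above:

```shell
# Re-render the release with the index applier as a Helm hook instead of a plain Job.
helm upgrade rosetta cardano-rosetta-java/cardano-rosetta-java \
  --set indexApplier.mode=hook
# In hook mode, trigger and monitor the Job yourself; do NOT pass --wait-for-jobs.
```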

Three sync stages

| Stage            | What's happening                            | Pods ready                           |
|------------------|---------------------------------------------|--------------------------------------|
| SYNCING          | yaci-indexer catching up to chain tip       | All pods up, API responding          |
| APPLYING_INDEXES | Indexer reached tip, DB indexes being built | All pods up, API responding          |
| LIVE             | Fully synced, all indexes valid             | All pods ready, API fully functional |
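One way to observe sync progress from outside the cluster is the standard Rosetta /network/status endpoint, whose sync_status reflects indexer progress. The service name and port used for the port-forward are assumptions:

```shell
# Port-forward the Rosetta API service (name and port assumed) and query sync status.
kubectl port-forward -n rosetta svc/rosetta-api 8082:8082 &
curl -s -X POST http://localhost:8082/network/status \
  -H 'Content-Type: application/json' \
  -d '{"network_identifier":{"blockchain":"cardano","network":"mainnet"}}'
```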

Hardware Profiles

Three built-in profiles scale resources across all components:

| Profile  | Total RAM | Total vCPU | Use case                   |
|----------|-----------|------------|----------------------------|
| entry    | 32 GB     | 4 cores    | Preprod, single host, K3s  |
| mid      | 48 GB     | 8 cores    | Mainnet production         |
| advanced | 94 GB     | 16 cores   | High-throughput production |

See Helm Values Reference for per-component breakdown.
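Profiles are selected at install time through a chart value. The key name below is a placeholder; the real one is documented in the Helm Values Reference:

```shell
# Select the mid profile for a mainnet deployment.
# "profile" is a placeholder key; consult the Helm Values Reference for the real name.
helm upgrade --install rosetta cardano-rosetta-java/cardano-rosetta-java \
  --set profile=mid
```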

Prerequisites

Software

  • Helm v3.14+ (helm version)
  • kubectl configured for your cluster
  • K3s v1.28+ or any managed Kubernetes cluster (EKS, GKE, AKS)

Access

  • A DB_PASSWORD for PostgreSQL (min 16 characters recommended for production)
  • Outbound internet access (Mithril snapshot download, ~20 GB for mainnet)
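The database password is most naturally supplied as a Kubernetes Secret. The namespace, secret name, and key below are illustrative, not names the chart necessarily expects:

```shell
# Create the PostgreSQL password secret (names are placeholders).
# openssl rand -base64 24 yields well over the recommended 16 characters.
kubectl create namespace rosetta
kubectl create secret generic rosetta-db -n rosetta \
  --from-literal=DB_PASSWORD="$(openssl rand -base64 24)"
```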

Hardware (minimum for preprod / entry profile)

  • 8 vCPU
  • 32 GB RAM
  • 150 GB fast SSD (for node data + PostgreSQL)

Hardware (recommended for mainnet)

  • 16 vCPU
  • 64 GB RAM
  • 700 GB fast SSD (500 GB node + 200 GB PostgreSQL)

Next Steps