How Kaphera composes EDC into a product

§3 of 4 · ~3 min read

Twelve components arranged in five layers: the bottom layer (Enablement) is what makes a data exchange compliant; the top layer (Developer experience) is what makes the rest of the stack reachable from the terminal; everything in between is the operations surface that turns the protocol into a product.

The picture

```mermaid
flowchart TB
  subgraph DX["Developer experience"]
    CLI["kaphera CLI"]
    TFP["Terraform provider"]
    TFM["Terraform modules"]
  end
  subgraph APP["Applications (self-hosted / BYOC)"]
    SRV["Cloud Server"]
    CON["Cloud Console"]
  end
  subgraph MAN["Managed services (cloud.kaphera.com)"]
    MSRV["Managed Server"]
    MCON["Managed Console"]
  end
  subgraph OPS["Operators"]
    EDC["EDC Operator"]
    ENA["EDC Enablement Operator"]
    KCO["Cloud Operator"]
  end
  subgraph ENB["Enablement"]
    DTR["Digital Twin Registry"]
    DSP["DSP Data Plane"]
  end
  DX --> APP
  DX --> MAN
  MAN --> APP
  APP --> OPS
  OPS --> ENB
```

Five layers, one paragraph each

Developer experience. Three components, all engineer-facing: the [[02-product/solutions/kaphera-cli|kaphera CLI]], the Terraform provider, and the Terraform modules. They give a builder like Lars the ability to provision a connector, manage identities, and monitor participants without ever touching the Kubernetes cluster directly. Same binary, three backends (the managed cloud, a self-hosted server, or a raw Kubernetes target).
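The Terraform path might look something like the sketch below. Everything here is an illustrative assumption: the provider source address, the `kaphera_connector` resource type, and its attributes are invented for the example, not the provider's documented schema.

```hcl
# Hypothetical sketch -- resource and attribute names are assumptions,
# not the provider's documented schema.
terraform {
  required_providers {
    kaphera = {
      source = "kaphera/kaphera" # assumed registry address
    }
  }
}

# The provider points at one of the three backends: the managed cloud,
# a self-hosted Cloud Server, or a raw Kubernetes target.
provider "kaphera" {
  endpoint = "https://cloud.kaphera.com" # or a self-hosted Server URL
}

# Provision a connector declaratively, without touching the cluster.
resource "kaphera_connector" "lars" {
  name      = "lars-dev"
  dataspace = "catena-x"
}
```

The point of the sketch is the shape of the workflow, not the schema: a builder describes the connector as configuration, and the backend does the provisioning.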

Applications. Two components, both Elastic-licensed: the Cloud Server (REST API for all platform operations) and the Cloud Console (the web UI for engineers who do not live in the terminal). They are the self-hosted entry surface for BYOC and white-label deployments.

Managed services. Two components, both proprietary: the globally unified Managed Server and Managed Console running on cloud.kaphera.com. They are how a participant like Petra joins a dataspace in under a minute without provisioning anything.

Operators. Three Kubernetes operators, all written in Rust. The EDC Operator (Apache 2.0) manages the Eclipse Dataspace Components themselves: control plane, identity wallet, credential issuer, data plane. The EDC Enablement Operator (Apache 2.0) manages the supporting services around them. The Cloud Operator (source-available) manages the platform infrastructure layer: PostgreSQL, Vault, NATS, Keycloak, and the organisational model that ties everything together.
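As a sketch of the operator pattern, the EDC Operator would reconcile a custom resource along these lines. The `apiVersion`, `kind`, and every field below are illustrative assumptions, not the operator's actual CRD schema:

```yaml
# Hypothetical custom resource -- names and fields are assumptions,
# not the operator's actual CRD schema.
apiVersion: edc.kaphera.io/v1alpha1
kind: EdcConnector
metadata:
  name: lars-dev
spec:
  controlPlane:
    replicas: 1
  identityWallet:
    enabled: true
  credentialIssuer:
    endpoint: https://issuer.example.com   # placeholder
  dataPlane:
    enabled: true
```

The operator's job is the usual reconcile loop: observe the declared state above, compare it to what is running in the cluster, and converge the two.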

Enablement. Two GPL components: the Digital Twin Registry (multi-tenant and AAS Part 2-compliant, which is what Catena-X requires for product-level data exchange) and the DSP Data Plane (a data plane that speaks the Dataspace Protocol). This is the layer that makes a data exchange compliant rather than merely technically functional.
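For a flavour of what the registry holds, a minimal shell descriptor in the style of the AAS Part 2 API might look like the following. The IDs and endpoint are placeholders, and the field selection is trimmed to the essentials rather than the full descriptor schema:

```json
{
  "id": "urn:uuid:00000000-0000-0000-0000-000000000000",
  "idShort": "gearbox-42",
  "globalAssetId": "urn:example:asset:gearbox-42",
  "submodelDescriptors": [
    {
      "id": "urn:uuid:11111111-1111-1111-1111-111111111111",
      "endpoints": [
        {
          "interface": "SUBMODEL-3.0",
          "protocolInformation": {
            "href": "https://dataplane.example.com/submodel"
          }
        }
      ]
    }
  ]
}
```

The descriptor is the bridge between the two Enablement components: the registry answers "which twins exist and where are their submodels", and the endpoint it returns is served through the DSP Data Plane.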

Open source, source-available, and why the split is structural

Three licence buckets, each with a deliberate role. Apache 2.0 covers the EDC operators: maximally permissive, designed to be the reference implementation any team can run, inspect, and build on. GPL covers the Digital Twin Registry and DSP Data Plane: open, with a copyleft guarantee that derivative implementations stay open. Source-available (Elastic and proprietary) covers the platform layer that runs the operations: auditable by any organisation that needs to inspect what manages their infrastructure, but not forkable into a competing managed service. In short, the stack is open source where it should be (the operators that move data) and source-available where it has to be (the platform that runs them), so the infrastructure stays open whether you self-host or not. The commercial moat is the operations, not the code.


Previous: ← §2 Identity and policy Next: §4 Deployment and licensing →