FAQs
Consolidated questions about Kaphera Cloud: what it is, what it costs, who runs it, how the open-source model works, and how it fits the dataspace ecosystem. Pulled together from launch FAQs, internal GTM material, and the conversations these questions actually came up in.
What is Kaphera Cloud?
Kaphera Cloud is a managed platform for deploying and operating EDC Connectors. It is built from twelve purpose-built solutions spanning the full stack, from the Kubernetes operators that manage infrastructure, to the APIs and console that serve end users, to the developer tooling that lets engineers provision everything as code.
At the infrastructure layer, three operators divide the work:
| Name | Description | License |
|---|---|---|
| Kaphera EDC Operator | Manages the EDC Connectors themselves | Apache 2.0 |
| Kaphera EDC Enablement Operator | Manages EDC supporting services | Apache 2.0 |
| Kaphera Cloud Operator | Manages the platform infrastructure | Source-available — Elastic |
At the application layer, two components expose the platform to its users:
| Name | Description | License |
|---|---|---|
| Kaphera Cloud Server | The REST API that handles all platform operations | Source-available — Elastic |
| Kaphera Cloud Console | The UI for interacting with the platform | Source-available — Elastic |
Both the server and console have managed counterparts:
| Name | Description | License |
|---|---|---|
| Kaphera Cloud Managed Server | Globally managed REST API for platform operations | Source-available — proprietary |
| Kaphera Cloud Managed Console | Globally unified management UI | Source-available — proprietary |
The platform also includes two enablement solutions, written in Rust, that manage digital twin metadata and data signalling across data space participants:
| Name | Description | License |
|---|---|---|
| Kaphera Digital Twin Registry | Multi-tenant, AAS Part 2-compliant DTR | GPL |
| Kaphera DSP Data Plane | DSP-compliant data plane | GPL |
For engineers who work in code rather than consoles, three tools provide full programmatic access:
| Name | Description | License |
|---|---|---|
| kaphera CLI | For command-line operations | Apache 2.0 |
| Kaphera Cloud Terraform Provider | Declarative infrastructure management | Apache 2.0 |
| Kaphera Cloud Terraform Modules | For composable, opinionated deployments | Apache 2.0 |
Together, these solutions serve both sides of a data space. For organisations governing or operating a data space, the platform provides a complete governance interface: register data space profiles, configure identity and onboarding rules, and control whether those profiles are publicly discoverable to all Kaphera organisations or restricted to a selected group. For participants, it handles everything needed to join: create or join an organisation, establish its digital identity, and initiate onboarding into any available data space in a single step. The underlying infrastructure is fully managed.
Kaphera Cloud operates at the data exchange layer. It governs how data moves between organisations, under what policies, and with what auditability. It does not store or process the data itself; those responsibilities remain with the participant organisations and their existing infrastructure.
What problem does Kaphera Cloud actually solve?
The infrastructure for sovereign data exchange already exists. The Eclipse Dataspace Connector has been available for years. The protocols are specified. The standards bodies have done their work. The problem is not that organisations cannot share data. It is that the barrier to entry makes participation inaccessible to most of the organisations that actually need it.
Consider what participation looks like today across four real data spaces where Think-it operates.
Mobility Data Space (MDS). Germany’s national data space for the mobility sector connects over 150 participants (public transport operators, fleet companies, city administrations, and mobility startups) exchanging traffic, parking, charging, and logistics data. When MDS needed to move beyond its previous provider, whose restricted free tier (one user, one connector, two contracts) made even testing impractical and whose proprietary dependencies made independence impossible, the problem was not the protocol. It was that standing up a production connector required deep EDC expertise, a Kubernetes environment, and weeks of integration work. The organisations joining MDS, many of them public institutions and SMEs, do not have that capacity. MDS now runs on infrastructure built by Think-it, serving connectors to participants who could not have built or operated them independently.
Media Data Space (MeDaS). Part of Germany’s MISSION KI national AI initiative, funded by BMDV and coordinated by Acatech, MeDaS is building a prototype data space for the media industry to demonstrate how publishers, broadcasters, and content platforms can share data for AI model training, collaborative news authentication, privacy-preserving audience segmentation, and cross-platform metadata interoperability, all while each organisation retains sovereign control over its data. The challenge is not technical possibility; it is that every participating media organisation would need to independently deploy and operate connector infrastructure to exchange data under these conditions. For an industry already under margin pressure, that infrastructure cost is a non-starter without a platform that makes participation immediate.
GDSO Tyre Data Space. The Global Data Service Organisation connects tyre manufacturers (Continental, Michelin, Bridgestone), retreaders, and recyclers through the Tyre Lifecycle Data Service, a system for tracking tyre declarations from production through retreading to end-of-life. Each participant needs a connector to exchange lifecycle data through the Data Space Protocol. Today, Think-it is migrating the system from a legacy architecture (TIS/ONS resolution and Cognito authentication) to full EDC connector-to-connector communication, meaning every tyre manufacturer needs to deploy a Connector and Identity Hub stack. For a retreader with a dozen employees, or a recycler operating across three EU countries, standing up that infrastructure independently is not realistic. The EU Digital Product Passport requirements that will mandate this data sharing are approaching, and the participants who need to comply are precisely the ones least equipped to build the infrastructure from scratch.
Catena-X / Tractus-X. The automotive supply chain data space targets over 200 OEMs and tier-1 suppliers for quality management, carbon footprint tracking across multi-tier supply chains, and Digital Product Passport compliance. The EU Data Act enters enforcement for automotive in September 2026. When BMW needs component-level data from a tier-3 supplier in Pilsen to issue a precise quality recall rather than recalling an entire production run, the data exists and the willingness to share it exists. But that tier-3 supplier (85 employees, no IT team, no Kubernetes cluster) is being told by their customer that they need a dataspace connector within four months. The existing options either assume they have platform engineers on staff or cost more per month than their digital infrastructure budget can absorb.
The pattern across all four is the same. The organisations at the top of each supply chain (the OEMs, the consortium operators, the large publishers) can afford the engineering effort to participate. The organisations further down (the tier-3 suppliers, the SME retreaders, the small fleet operators, the regional broadcasters) cannot. And those are precisely the organisations whose participation makes the data space valuable. A supply chain traceability system that only reaches the top tier is not traceability. A tyre lifecycle system that does not include the recyclers is not a lifecycle. A media data space that only large publishers can afford to join does not represent the industry.
This is the problem Kaphera Cloud solves. Not the existence of data exchange; that is settled. The barrier to entry.
With Kaphera, joining a data space is immediate. An operations manager at a tier-3 supplier can create an organisation, browse available data spaces, and join with a single click. The platform handles identity establishment, credential issuance, and connector provisioning automatically. No Kubernetes cluster. No six-week integration project. No consultant. The infrastructure runs in the background, managed by someone else, and the participant only has to think about it if something goes wrong.
For the engineers building dataspace infrastructure (the platform engineers at systems integrators who are assembling connector stacks from scratch for each client), Kaphera replaces weeks of work with a deployment they can ship in a day and hand to a colleague without a knowledge transfer.
For the organisations governing data spaces (the Sophie Renards running the technical infrastructure of an MDS or a GDSO), Kaphera replaces a patchwork of components that were not designed to work together with a single, auditable governance interface.
And because the infrastructure layer is open source, what Kaphera builds is not another proprietary dependency to manage. It is infrastructure of the commons: sovereign, inspectable, and designed to outlast any single vendor relationship. The EDC Operator under Apache 2.0 and the Digital Twin Registry under GPL mean that the connectivity layer itself is never a lock-in vector. Organisations that outgrow the managed platform can self-host. Organisations that prefer managed operations get them. The infrastructure stays open either way.
The result is not just lower cost. It is a structural change in who can participate. When the barrier drops from “hire a platform engineering team” to “click join,” the data space reaches the organisations it was always meant to serve.
Who is Kaphera Cloud for?
Kaphera Cloud serves four customer profiles. Each one faces a different version of the barrier-to-entry problem and is served by a different combination of solutions. All four use the same underlying platform. The experiences are surfaced through feature flagging, so the operational infrastructure is shared while each profile sees only what is relevant to their role.
| Name | Personas | Description |
|---|---|---|
| Builder | 2 | Any engineer whose job is to make sovereign data exchange work, whether by deploying it for others or integrating it into their own product. They are reached through the open-source operator and developer experience. |
| Participant | 3 | An organisation joining a dataspace. They did not all arrive by choice. What unites them is that participating in a dataspace has, until now, required becoming an infrastructure operator, and that is not a role they have any interest in filling. Approximately 85% of the Catena-X supply chain is currently untapped, with cost and complexity as the primary barriers. |
| Governance Authority | 2 | The organisation responsible for operating or defining a dataspace: the entity that sets its rules, establishes the identity model, and controls who can participate. In practice this is an industry consortium (MDS, GDSO), a public institution, a regulatory body, or a large enterprise governing a shared data exchange network. Their problem is that standing up and operating a dataspace has historically required assembling capabilities from components that were not designed to work together. |
| White-label Partner | 1 | A systems integrator or platform company that has identified managed connector services as a product line they want to offer, but does not want to build the underlying platform from scratch. They have the client base, the brand presence, and the commercial relationships needed to sell managed infrastructure. What they lack is the time, the specialised expertise, and the capital to build a production-grade multi-tenant EDC operator. |
| Name | Type | Profile | Motivation |
|---|---|---|---|
| Lars Hoffmann | System Integrator | Senior platform engineer, Munich. Nine years in infrastructure. Works at a mid-size systems integrator serving automotive and industrial clients. | Lars has shipped Kubernetes infrastructure before. He understands cloud-native patterns, GitOps, and secrets management in production. Eighteen months ago his manager handed him a dataspace project with a six-week timeline and documentation that described what the components were but not how to run them. Six weeks later he had something in staging and had become the de facto EDC expert at his organisation, a title he did not seek and would happily give up. Each subsequent client project requires reassembling most of the same infrastructure from scratch, because nothing he built is cleanly portable. He does not want to be the EDC expert. He wants to ship a working connector in a day and move on to the actual integration work. |
| Leila Brandt | Application Builder | Backend engineer, Berlin. Works at a 40-person sustainability SaaS scale-up. One of four on the backend team. | A tier-1 automotive client told Leila’s company that PCF data exchange would need to happen through Catena-X. That client is 20% of ARR. It was not a feature request. It was a prerequisite. She expected an API integration task; she found an infrastructure and protocol problem. Getting her company’s carbon calculation API registered as a data asset, configuring access policies, and implementing the consumer-side flow took significantly longer than any comparable integration she had shipped. The deeper concern is maintenance: every EDC release carries breaking changes that pull her away from her product roadmap. She shipped the Catena-X integration. Now she needs it to stay shipped, and if the product expands to provisioning connectors for her customers’ suppliers, she needs that to be a programmatic API call, not an infrastructure project. |
Can Kaphera Cloud be used to operate a data space, not just participate in one?
Yes. Organisations governing or building a data space can use Kaphera Cloud to register their data space profile, define the identity model and onboarding rules participants must satisfy, and publish the data space for discovery. That discoverability is configurable: a profile can be made available to all organisations on the platform or limited to a specific group. The same three deployment options available to participants (managed shared, fully dedicated, and enterprise) apply equally to data space operators. This makes Kaphera Cloud a complete solution for both the organisations who govern a data space and those who join one.
Which data spaces does Kaphera Cloud support at launch?
Two connector profiles are available at general availability: MDS (Mobility Data Space) and Tractus-X (Catena-X). Each profile bundles the data space-specific credential specifications, trust anchors, and onboarding flows required to participate. Support for additional data spaces is on the roadmap.
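To make "profile" concrete, the sketch below models what a profile bundles as plain Rust data (Rust being the platform's stated implementation language). It is illustrative only: every type and field name is an assumption for this example, not Kaphera's actual profile schema.

```rust
#![allow(dead_code)]
// Illustrative only: these names are assumptions for the example,
// not Kaphera's actual profile schema.

struct ConnectorProfile {
    id: String,                      // e.g. "mds" or "tractus-x"
    credential_specs: Vec<String>,   // credential types the data space requires
    trust_anchors: Vec<TrustAnchor>, // issuers the connector will accept
    onboarding: OnboardingFlow,
}

struct TrustAnchor {
    issuer_did: String,
    public_key_pem: String,
}

enum OnboardingFlow {
    // Self-service: join completes as soon as credentials are issued.
    SelfService,
    // The data space authority must approve each participant first.
    AuthorityApproval { approval_endpoint: String },
}

fn main() {
    let profile = ConnectorProfile {
        id: "tractus-x".into(),
        credential_specs: vec!["MembershipCredential".into()],
        trust_anchors: vec![],
        onboarding: OnboardingFlow::SelfService,
    };
    println!("profile: {}", profile.id);
}
```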
How does the open-source model work?
The platform has two licensing tiers reflecting the distinction between the EDC layer and the platform layer.
The EDC operator is released under Apache 2.0: use it, run it, build on it, with no restrictions. The Kaphera Digital Twin Registry, a multi-tenant, AAS Part 2-compliant registry written in Rust, is released under GPL: the source is fully open, but any modifications you distribute must remain open. If you modify it and use it internally, you have no obligation to share anything; the copyleft obligation only applies to distributed versions. For data transfer, the EDC operator ships with data plane components based on upstream Eclipse EDC.
The Kaphera Cloud operator and the platform server are source-available: the code is readable and auditable, organisations can run it internally without restriction, but offering it as a commercial service to third parties requires a licence agreement with Kaphera. White-label partners who want to offer managed connector services under their own brand are the primary use case for this commercial licence.
The kaphera CLI and Terraform provider are open source. The managed cloud service is Kaphera’s commercial offering built on top of all of the above.
| Component | Licence | Self-hostable |
|---|---|---|
| EDC operator | Apache 2.0 | Yes, no restrictions |
| Kaphera Digital Twin Registry | GPL | Yes, modifications must stay open |
| Kaphera Cloud operator | Source-available — Elastic | Yes for internal use; commercial re-sale requires agreement |
| Platform server | Source-available — Elastic | Yes for internal use; commercial re-sale requires agreement |
| kaphera CLI | Open source | Yes |
| Terraform provider | Open source | Yes |
| Terraform modules | Open source | Yes |
| Managed cloud service | Commercial | n/a |
Why is Kaphera Cloud built in Rust?
Kaphera chose Rust for its operators and the Digital Twin Registry for reasons specific to what those components need to guarantee.
Memory safety is the most significant one. Between 66% and 75% of all documented security vulnerabilities in production systems stem from memory safety failures, a figure that has led the US White House Office of the National Cyber Director, the NSA, and CISA to jointly recommend a strategic transition to memory-safe languages for critical infrastructure. Rust enforces memory safety at compile time through its borrow checker, eliminating this class of vulnerability before code runs rather than patching it after. For infrastructure handling sensitive supply chain data under contractual data processing agreements, that compile-time guarantee carries real weight.
The operators require consistent behaviour under sustained load, continuously reconciling state across potentially hundreds of connector deployments. Rust has no garbage collector, so collection pauses are eliminated entirely, giving the operators predictable latency regardless of workload size.
The Digital Twin Registry's demands go further. The Kaphera DTR serves many participants from a single shared process, enforcing strict tenant isolation and fine-grained access control at the database and API levels. Rust's type system enforces the absence of data races at compile time, meaning concurrency correctness is a property of the code rather than a runtime assumption. The lower per-tenant memory footprint that Rust enables is also what makes shared-tier pricing economically viable: fewer resources per tenant means the cost structure works.
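A minimal, self-contained example of what that compile-time guarantee means in practice. This is not Kaphera code, just standard-library Rust: remove the `Mutex` below and the program stops compiling, because `Arc` alone only hands out shared references and the compiler rejects unsynchronised mutation of shared state.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // A toy shared counter, standing in for any state shared across
    // worker threads. Without the Mutex this would not compile:
    // a data race is a type error here, not a runtime bug.
    let requests_served = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&requests_served);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    // The lock scope is the only place mutation can happen.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    // Always prints 4000: no racing writes were possible.
    println!("{}", *requests_served.lock().unwrap());
}
```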
Why is Kaphera Cloud built on Kubernetes?
The same reasoning that makes Kaphera open source makes it Kubernetes-native: sovereign infrastructure should not depend on any single vendor’s runtime, and it should be built from composable, purpose-built components rather than monolithic systems.
Kubernetes is the open, vendor-neutral substrate for cloud infrastructure. It runs on every major cloud provider, on sovereign European clouds, and on bare metal. An organisation running Kaphera on AWS today can move to Scaleway, OVHcloud, or their own hardware without changing a line of configuration. That portability is not a convenience. For organisations choosing sovereign infrastructure specifically to avoid platform dependency, it is a requirement.
The operator pattern is where the philosophy becomes architecture. Each Kaphera controller manages exactly one resource kind, communicates with other controllers only through the Kubernetes API, and computes its desired state as a pure function of its inputs. The same principle applies as in UNIX tools: small, composable, purpose-built components communicating through a shared interface. EDC Connectors are multi-component systems (control plane, identity wallet, credential issuer, data plane, database, secrets engine, messaging), each with its own lifecycle and failure modes. The operator pattern is what makes it possible to manage hundreds of these deployments with the same reliability as one: the controller continuously reconciles actual state toward declared intent, detects drift, and corrects it.
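The sketch below strips that reconciliation principle to its core: desired state computed as a pure function of the declared spec, and a reconcile step that diffs actual against desired and proposes a correction. The type names are invented for the example; Kaphera's real controllers work against the Kubernetes API rather than in-memory structs.

```rust
// Schematic only: ConnectorSpec and DeploymentState are invented
// stand-ins for the declared intent and the observed cluster state.
#[derive(Clone, PartialEq, Debug)]
struct ConnectorSpec {
    replicas: u32,
    image: String,
}

#[derive(Clone, PartialEq, Debug)]
struct DeploymentState {
    replicas: u32,
    image: String,
}

// Desired state is a pure function of the spec: no hidden inputs,
// so reconciliation is deterministic and safely repeatable.
fn desired_state(spec: &ConnectorSpec) -> DeploymentState {
    DeploymentState {
        replicas: spec.replicas,
        image: spec.image.clone(),
    }
}

// One reconcile pass: observe, diff, correct. A real controller runs
// this continuously, triggered by watch events and periodic resyncs.
fn reconcile(spec: &ConnectorSpec, actual: &DeploymentState) -> Option<DeploymentState> {
    let desired = desired_state(spec);
    if *actual != desired {
        Some(desired) // drift detected: return the corrected state to apply
    } else {
        None // converged: nothing to do
    }
}

fn main() {
    let spec = ConnectorSpec { replicas: 2, image: "edc:0.7".into() };
    let drifted = DeploymentState { replicas: 1, image: "edc:0.7".into() };
    assert!(reconcile(&spec, &drifted).is_some()); // drift gets corrected
    assert!(reconcile(&spec, &desired_state(&spec)).is_none()); // steady state
    println!("reconciliation behaves as a pure diff-and-correct loop");
}
```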
Being cloud-native also means Kaphera composes with the infrastructure an organisation already operates. Observability, RBAC, network policy, GitOps, secrets management: these are capabilities of the platform, not features Kaphera needs to reinvent. An organisation running Kubernetes gets Kaphera as a natural extension of their existing infrastructure rather than a foreign system beside it. For multi-tenancy, Kubernetes provides the isolation primitives (namespace boundaries, resource quotas, network policies) that enforce tenant separation at the platform level, not just in application code.
What does it cost?
| Deployment | Price | Best for |
|---|---|---|
| Managed shared | €XX,XX / participation / month | Most economical entry point; completely isolated resources on Kaphera-operated European cloud infrastructure |
| Fully dedicated | €XXX,XX / month | Multiple participations or stronger resource and isolation requirements |
| Enterprise | Undisclosed; five-figure monthly range | Custom compliance, integration, or infrastructure requirements |
| Bring-your-own-cloud | €XXXXXX,XX / month | Platform managed by Kaphera on the client’s own infrastructure |
All four options are available to both data space operators and participants, with bring-your-own-cloud following the initial managed cloud release (see the roadmap question below). Contact the team for enterprise and BYOC arrangements.
Can I run the operators on my own infrastructure?
Yes. The EDC operator is Apache 2.0 licensed and runs on any Kubernetes cluster with no restrictions. The Kaphera Cloud operator and platform server are source-available: organisations can run them internally without restriction. For organisations that want Kaphera to manage operations on their own cloud account, bring-your-own-cloud (BYOC) is on the roadmap following the initial managed cloud release. BYOC organisations can move between Kaphera-managed and self-managed at any time. The source-available licence ensures they are always in control. See the roadmap question below for the full picture.
What does the setup process look like?
For participants, the experience is designed so that the platform side of the process, from login to data exchange, takes under a minute. Create or join an organisation, browse available data spaces, and join with a single click. The platform handles identity establishment, credential issuance, and connector provisioning automatically, with a real-time status tracker showing progress. Data space-specific onboarding validation requirements vary by data space and sit outside the platform’s control, but everything Kaphera is responsible for is immediate.
An organisation’s digital identity on Kaphera Cloud has two layers. The first is the connector’s cryptographic identity: a set of verifiable credentials that represent the organisation within the data space, establishing its membership, its permissions, and its authority to negotiate contracts and exchange data. Kaphera manages the issuance and full lifecycle of these credentials. The second layer is human access management: who within the organisation can configure the connector, view contracts, approve data offers, and operate the platform. Kaphera supports standard identity providers via OAuth 2.0 and SAML, allowing organisations to use their existing directory services rather than managing a separate set of platform credentials. Together, these two layers give both the connector and the people operating it a clear, governed identity within the ecosystem.
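One way to picture the two layers side by side, as data. The types below are purely illustrative (they are not Kaphera's API or schema); they only make the separation between machine identity and human access concrete.

```rust
#![allow(dead_code)]
// Illustrative types only, not Kaphera's actual data model.

// Layer 1: the connector's cryptographic identity within a data space.
struct ConnectorIdentity {
    did: String, // the connector's decentralised identifier
    membership_credential: VerifiableCredential,
    permission_credentials: Vec<VerifiableCredential>,
}

struct VerifiableCredential {
    issuer: String,          // e.g. the data space's credential issuer
    credential_type: String, // what this credential attests
    expires_at: u64,         // Unix timestamp; rotation is platform-managed
}

// Layer 2: human access, federated from the organisation's own IdP.
struct PlatformUser {
    subject: String, // OAuth 2.0 / SAML subject from the existing directory
    roles: Vec<Role>,
}

enum Role {
    ConfigureConnector,
    ViewContracts,
    ApproveDataOffers,
}

fn main() {}
```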
From there, choose a deployment type based on your requirements. The managed shared tier runs on Kaphera-operated European cloud infrastructure with completely isolated resources on a shared cluster. It is the most economical option and is priced per data space participation per month. The fully dedicated tier provisions a set of isolated, dedicated instances for organisations managing multiple participations or requiring stronger resource guarantees, billed per resource used per month. Enterprise customisations are available for organisations with specific compliance, integration, or infrastructure requirements.
On a managed deployment, a first connector is running within minutes of completing the join flow. The same flow is available through the web console, the kaphera CLI, or the Terraform provider.
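For a sense of what the programmatic path could look like, here is a hypothetical sketch using Rust and the `reqwest` crate. Every URL, field name, and environment variable in it is invented for illustration; the real interface is whatever the platform's API reference and Terraform provider document.

```rust
// Hypothetical sketch: the endpoint, payload fields, and token variable
// are illustrative assumptions, not Kaphera's documented API.
// Assumes reqwest (features: blocking, json) and serde_json as dependencies.
use reqwest::blocking::Client;
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let token = std::env::var("KAPHERA_TOKEN")?; // assumed API token variable
    let client = Client::new();

    // Request a new participation; the platform would then handle identity,
    // credential issuance, and connector provisioning as described above.
    let response = client
        .post("https://api.kaphera.example/v1/participations") // illustrative URL
        .bearer_auth(token)
        .json(&json!({
            "organisation": "acme-retreads",   // illustrative organisation slug
            "dataspace_profile": "tractus-x",  // one of the two GA profiles
            "deployment_tier": "managed-shared"
        }))
        .send()?
        .error_for_status()?;

    println!("participation created: {}", response.text()?);
    Ok(())
}
```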
Is Kaphera Cloud production-ready?
Yes. Kaphera has operated EDC Connectors in production through its role as operator of the Mobility Data Space, and the MDS connector profile has been running on the platform since the soft launch earlier in 2026, now serving over 150 connectors. Tractus-X GA is the platform’s first full public release, backed by that production track record.
Where to go next
- PR FAQ: the launch press release framed as Q&A; sibling material to this consolidated FAQ.
- Product brief: the master product description (scope, goals, deployment tiers, GA scope).
- Glossary: terminology that surfaces across these answers (DSP, EDC, AAS, BPN, Catena-X, MDS).
- By persona: the four customer profiles and eight named humans these answers describe.
- Customer journeys: the same questions, told as scenarios from the customer’s point of view.