PR/FAQ

The launch press release framed as Q&A: market context, competitive positioning, the four customer profiles, and the most common external and internal objections answered.

The first cloud-native platform purpose-built for engineers who need to deploy EDC Connectors without becoming EDC experts.


Berlin, Germany. 1st October 2026.

Today, Kaphera announces the general availability of Kaphera Cloud, a managed platform and open-source EDC operator for deploying and operating EDC Connectors at scale.


Kaphera Cloud launches as European digital sovereignty shifts from long-term ambition to immediate priority. At Davos in January 2026, European Commission Vice President Henna Virkkunen told a WEF audience that the continent’s dependence on foreign technology “can be weaponized against us”. France has since moved to replace Microsoft Teams across its civil service with a sovereign alternative. Germany’s state of Schleswig-Holstein has migrated 44,000 government inboxes to open-source alternatives, saving over €15 million in annual licensing costs. The pending Cloud and AI Development Act is a direct response to the fact that three US companies control 65% of Europe’s cloud services market. For industries built on sensitive supply chain data, including automotive, freight, and pharmaceuticals, the question of who controls data infrastructure is no longer abstract. Data spaces built on open standards and operated by European companies are the practical answer. On Kaphera’s internal estimate, the serviceable market for enterprise data sharing across automotive, manufacturing, health, and energy in the EU and US stands at €3 billion, within a total addressable market of €110 billion across target industries. Kaphera Cloud is the infrastructure layer that removes the technical barrier to joining one.

Organisations across regulated industries are now legally required to exchange sensitive data in auditable, policy-governed ways. The mandate exists across automotive, freight, and pharmaceuticals; the infrastructure to fulfil it at scale, for every participant in the supply chain, does not yet exist. Sharing data in a sovereign and compliant way has been technically complex and costly. The barrier is not a single thing: a team tasked with connecting to a data space must simultaneously navigate a complex Java codebase, Kubernetes operations, secrets management, database cluster management, identity infrastructure, and the credential requirements of their target data space. Each of those is a discipline on its own. Combined, they routinely stretch deployment timelines from weeks into months, and most of the work cannot be reused across projects or clients. For systems integrators running multiple concurrent implementations, and for enterprises being pulled into data spaces by regulatory deadlines, that cost is unsustainable.

The SME dimension makes this acute. On Kaphera’s internal estimate, approximately 85% of the Catena-X supply chain remains untapped because participation has been too complex and too costly for smaller organisations. Self-hosting a production-grade connector environment on standard cloud infrastructure today costs upwards of €300 per month per organisation in running infrastructure alone, before accounting for the engineering time to build and maintain it. For a supplier that needs to exchange a handful of data items per week to satisfy a supply chain mandate, that is not viable. This is not a future addressable market. It is a present, funded, regulatory problem that the ecosystem has not yet been able to solve at the right price point. And the gap has a structural consequence: every SME that cannot participate is a missing link in the supply chain transparency that OEMs and regulators are requiring. Making participation economically viable at every tier is what turns a data space from a network of large enterprises into a working ecosystem.

Kaphera Cloud serves four groups at this intersection. Builders are platform engineers, DevOps engineers, and solutions architects at systems integrators or enterprise IT teams, tasked with connecting clients to data spaces reliably without becoming EDC specialists. Governance Authorities are the organisations that operate or define a data space, responsible for registering profiles, setting identity rules, and controlling discoverability. Participants are the organisations joining one, typically pulled in by a supply chain mandate, a regulatory deadline, or a customer requirement, whether on managed shared infrastructure, a fully dedicated deployment, or their own cloud with Kaphera managing operations. White-label partners are systems integrators and platform companies that license the Kaphera Cloud operator and server to run under their own brand, building a managed connector offering without building the platform from scratch.

Kaphera Cloud eliminates this complexity. The platform is built on three Kubernetes operators, all written in Rust. The EDC operator (Apache 2.0) manages the full lifecycle of Eclipse Dataspace Components: control plane, identity wallet, credential issuer, and data plane. The EDC Enablement operator (Apache 2.0) manages the supporting services around them. The Kaphera Cloud operator (source-available) manages the platform infrastructure layer: PostgreSQL, Vault, NATS, Keycloak, and the organisational model that ties everything together. Together, they let teams work with a clean, high-level interface rather than assembling infrastructure from scratch. A team that previously spent six to eight weeks setting up a single connector environment can reach a functional, production-grade deployment in days.

Alongside the operators, Kaphera is releasing the Kaphera Digital Twin Registry as open source under the GPL. Written in Rust, it is a multi-tenant, AAS Part 2-compliant registry for managing digital twin metadata across data space participants, the infrastructure that Catena-X requires for product-level data exchange. Where existing implementations are single-tenant, the Kaphera DTR is built from the ground up to serve many participants from a shared process with strict tenant isolation, fine-grained access control, and Catena-X compliance. For data transfer, the EDC operator ships with components based on upstream Eclipse EDC, which provide production-grade data plane capabilities from day one.

The platform ships with the complete developer toolchain: kaphera, a purpose-built CLI for provisioning connectors, managing data space identities, and monitoring participant status without requiring direct cluster access; a Terraform provider and composable Terraform modules for teams who manage infrastructure as code; and a web console modelled on the DigitalOcean experience, clear and operational, built for engineers who are not EDC specialists.

Kaphera Cloud launches with two supported connector profiles. The MDS connector profile has been running in production since the platform’s soft launch earlier this year, now serving over 150 connectors across Mobility Data Space participants, fully validated and available to any team that needs to join MDS. The Tractus-X connector profile launches today, timed to Catena-X’s adoption of the latest EDC Connector version, the release that introduces production-grade multi-tenancy and makes Connector as a Service (CaaS) viable at acceptable unit economics for the first time. The platform’s multi-tenant architecture makes managed connector infrastructure economically viable at every tier of the supply chain. Every systems integrator working in the automotive supply chain can now offer their clients a managed EDC deployment, on Kaphera Cloud or on their own infrastructure, using the same open-source operator.

Kaphera Cloud is a complete solution for both sides of a data space. Organisations governing or operating a data space can register their data space profiles, define identity and onboarding rules, and control discoverability, making profiles available to all Kaphera organisations or to a selected group. For participants, the experience is designed for speed: from login to data exchange in under a minute. Create or join an organisation, browse available data spaces, and join with a single click. The platform handles the rest: identity establishment, credential issuance, and connector provisioning. Aside from onboarding validation requirements, which vary by data space, the platform side of the process is immediate.

Three deployment options are available to both data space operators and participants. The managed shared tier runs on Kaphera-operated European cloud infrastructure with completely isolated resources, priced per data space participation per month. The fully dedicated tier provides a set of isolated, dedicated instances for organisations managing multiple participations, billed per resource used. Enterprise customisations are available for organisations with specific infrastructure or compliance requirements. Bring-your-own-cloud deployments, where Kaphera manages the operators on the client’s own infrastructure, are coming next, followed by compute-to-data capabilities and interoperable applications built on sovereign infrastructure.

The EDC operator and the Digital Twin Registry are open source, forever. The Kaphera Cloud operator and the platform server are source-available: the code is readable and auditable, organisations can run it internally without restriction, but offering it as a commercial service to third parties requires a licence agreement with Kaphera. Kaphera lowers the barrier of entry for any engineer who wants to self-host, while building a business on the teams and organisations that would rather not.

Existing managed connector services from other providers are proprietary: their operational stacks are closed, which means adopting them introduces the same vendor dependency that sovereign data infrastructure is designed to eliminate. Kaphera Cloud is the only managed connector platform whose EDC operational layer is fully open source, and whose platform layer is source-available rather than closed. No other provider gives customers the ability to inspect, audit, and run the full stack independently.

Palantir built its business connecting sensitive data across government agencies and large enterprises, becoming one of the world’s most valuable software companies in the process. That model depends on centralised control: a single trusted entity through which all data must flow. “Palantir proved there is enormous value in connecting sensitive data across organisational boundaries”, said Mehemed Bougsea, CEO of Kaphera. “They built it as a closed, centralised platform you have to trust completely. We are building what that could have been: open infrastructure where every participant keeps control of their own data, and no single actor, including us, sits in the middle. That is what democratising sovereign data sharing actually means.”

“We run data space implementations for clients across the automotive and freight sectors”, said Ramy H’Cini, CTO of Think-it. “The time our teams used to spend on connector operations is now time we spend on actual integration work. That is the difference.”

Kaphera Cloud is available at kaphera.cloud. The EDC operator source code is published at github.com/kaphera/edc-operator under the Apache 2.0 license. The Kaphera Cloud operator and platform server source code are available at github.com/kaphera/kaphera-cloud. Teams can start with the managed shared tier immediately; dedicated, enterprise, and white-label deployments are available on request.

FAQs

External FAQs

What is Kaphera Cloud?

Kaphera Cloud is a managed platform built on three Kubernetes operators for deploying and operating EDC Connectors. The EDC operator (Apache 2.0) manages Eclipse Dataspace Components (control plane, identity wallet, credential issuer, and data plane). The EDC Enablement operator (Apache 2.0) manages the supporting services around them. The Kaphera Cloud operator (source-available) manages the platform infrastructure (PostgreSQL, Vault, NATS, Keycloak, and the organisational model). The platform also includes the Kaphera Digital Twin Registry, a multi-tenant, AAS Part 2-compliant registry released under GPL that manages digital twin metadata across data space participants.

Together, these components serve both sides of a data space. For organisations governing or operating a data space, the platform provides a complete governance interface: register data space profiles, configure identity and onboarding rules, and control whether those profiles are publicly discoverable to all Kaphera organisations or restricted to a selected group. For participants, it handles everything needed to join: create or join an organisation, establish its digital identity, and initiate onboarding into any available data space in a single step. The underlying infrastructure is fully managed.

Kaphera Cloud operates at the data exchange layer. It governs how data moves between organisations, under what policies, and with what auditability. It does not store or process the data itself; those responsibilities remain with the participant organisations and their existing infrastructure.

What problem does Kaphera Cloud actually solve?

The core problem is trust between organisations that need to share data but cannot do so through ad-hoc means without legal and compliance exposure. Email is not auditable. File transfers are not policy-governed. As European regulation increasingly mandates transparency across supply chains, the question of how to exchange data between organisations in a controlled, traceable, and consent-driven way is no longer optional.

A few concrete examples illustrate the scope. BMW and Bosch need to exchange component-level data to issue precise quality recalls rather than recalling entire production runs. Automotive manufacturers need to track the carbon footprint of individual parts across multi-tier supply chains to satisfy Digital Product Passport requirements. Fruit and vegetable exporters face container loads being rejected at European borders because there is no transparent way to verify growing conditions before produce arrives.

In each case, the data exists and the willingness to share it under the right conditions exists. What has been missing is infrastructure that makes the exchange safe, auditable, and operationally viable for every participant, not just large enterprises with dedicated technical teams. That is what Kaphera Cloud provides.

Who is Kaphera Cloud for?

Kaphera Cloud serves four customer profiles, each with distinct needs and a tailored set of solutions.

| Customer Profile | Who | What Kaphera provides |
| --- | --- | --- |
| builder | Platform engineers, DevOps, and solutions architects at systems integrators or enterprise IT | Open-source kaphera-edc-operator, [[kaphera-cli\|kaphera CLI]], kaphera-cloud-terraform-provider, and kaphera-cloud-terraform-modules. Ship sovereign data infrastructure without mastering every layer of the EDC stack |
| governance-authority | Organisations that operate or define a data space | Register data space profiles, configure identity and onboarding rules, manage credential issuance, and control discoverability through the kaphera-cloud-console and kaphera-cloud-server |
| participant | Organisations joining a data space, from SMEs on shared infrastructure to large enterprises on their own cloud | Data space participation across three deployment tiers (Managed, Dedicated, BYOC), with the kaphera-cloud-managed-console, kaphera-cloud-managed-server, and the kaphera-digital-twin-registry |
| white-label-partner | Systems integrators and platform companies building Connector-as-a-Service offerings | License the kaphera-cloud-operator and kaphera-cloud-server to run under their own brand. Build a managed connector business without building the platform from scratch |

All four profiles use the same underlying platform. The experiences are surfaced through feature flagging, so the operational infrastructure is shared while each profile sees only what is relevant to their role. The platform is designed to be accessible at every level of the supply chain, from the OEM to the tier-three supplier. Shared-tier pricing brings participation within reach for SMEs that have historically been priced out of it, which is how Kaphera enables large-scale data space adoption rather than serving only the large enterprises at the top of the supply chain.

Can Kaphera Cloud be used to operate a data space, not just participate in one?

Yes. Organisations governing or building a data space can use Kaphera Cloud to register their data space profile, define the identity model and onboarding rules participants must satisfy, and publish the data space for discovery. That discoverability is configurable: a profile can be made available to all organisations on the platform or limited to a specific group. The same three deployment options available to participants (managed shared, fully dedicated, and enterprise) apply equally to data space operators. This makes Kaphera Cloud a complete solution for both the organisations that govern a data space and those who join one.

Which data spaces does Kaphera Cloud support at launch?

Two connector profiles are available at general availability: MDS (Mobility Data Space) and Tractus-X (Catena-X). Each profile bundles the data space-specific credential specifications, trust anchors, and onboarding flows required to participate. Support for additional data spaces is on the roadmap.

How does the open-source model work?

The platform has two licensing tiers reflecting the distinction between the EDC layer and the platform layer.

The EDC operator is released under Apache 2.0: use it, run it, build on it, with no restrictions. The Kaphera Digital Twin Registry, a multi-tenant, AAS Part 2-compliant registry written in Rust, is released under GPL: the source is fully open, but any modifications you distribute must remain open. If you modify it and use it internally, you have no obligation to share anything; the copyleft obligation only applies to distributed versions. For data transfer, the EDC operator ships with data plane components based on upstream Eclipse EDC.

The Kaphera Cloud operator and the platform server are source-available: the code is readable and auditable, organisations can run it internally without restriction, but offering it as a commercial service to third parties requires a licence agreement with Kaphera. White-label partners who want to offer managed connector services under their own brand are the primary use case for this commercial licence.

The kaphera CLI and Terraform provider are open source. The managed cloud service is Kaphera’s commercial offering built on top of all of the above.

| Component | Licence | Self-hostable |
| --- | --- | --- |
| EDC operator | Apache 2.0 | Yes, no restrictions |
| Kaphera Digital Twin Registry | GPL | Yes, modifications must stay open |
| Kaphera Cloud operator | Source-available (Elastic) | Yes for internal use; commercial re-sale requires agreement |
| Platform server | Source-available (Elastic) | Yes for internal use; commercial re-sale requires agreement |
| kaphera CLI | Open source | Yes |
| Terraform provider | Open source | Yes |
| Terraform modules | Open source | Yes |
| Managed cloud service | Commercial | n/a |

Why is Kaphera Cloud built in Rust?

Kaphera chose Rust for its operators and the Digital Twin Registry for reasons specific to what those components need to guarantee.

Memory safety is the most significant one. Between 66% and 75% of all documented security vulnerabilities in production systems stem from memory safety failures, a figure that has led the US White House Office of the National Cyber Director, the NSA, and CISA to jointly recommend a strategic transition to memory-safe languages for critical infrastructure. Rust enforces memory safety at compile time through its borrow checker, eliminating this class of vulnerability before code runs rather than patching it after. For infrastructure handling sensitive supply chain data under contractual data processing agreements, that compile-time guarantee carries real weight.

Each operator must behave consistently under sustained load, continuously reconciling state across potentially hundreds of connector deployments. Because Rust has no garbage collector, there are no collection pauses, giving the operators predictable latency regardless of workload size.

The Digital Twin Registry's demands go further. The Kaphera DTR serves many participants from a single shared process, enforcing strict tenant isolation and fine-grained access control at the database and API levels. Rust's type system enforces the absence of data races at compile time, meaning concurrency correctness is a property of the code rather than a runtime assumption. The lower per-tenant memory footprint that Rust enables is also what makes shared-tier pricing economically viable: fewer resources per tenant means the cost structure works.
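The compile-time guarantee can be seen in miniature. In the sketch below, many threads write to shared per-tenant state; the program only compiles because that state is wrapped in thread-safe types (`Arc` and `Mutex`), so a data race cannot be written by accident. The tenant model and names here are illustrative, not the DTR's actual code:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

/// Spawn one writer thread per tenant; each performs `writes` increments
/// against a shared map (hypothetical model: tenant id -> twin count).
/// The Arc + Mutex wrapping is what makes this compile: remove it and the
/// compiler rejects the program instead of allowing a latent race.
fn run_tenants(tenants: usize, writes: u64) -> HashMap<String, u64> {
    let store: Arc<Mutex<HashMap<String, u64>>> = Arc::new(Mutex::new(HashMap::new()));
    let handles: Vec<_> = (0..tenants)
        .map(|t| {
            let store = Arc::clone(&store);
            thread::spawn(move || {
                let tenant = format!("tenant-{t}");
                for _ in 0..writes {
                    // Every mutation is scoped by the lock guard.
                    *store.lock().unwrap().entry(tenant.clone()).or_insert(0) += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // All thread-held clones are dropped after join, so unwrapping succeeds.
    Arc::try_unwrap(store).unwrap().into_inner().unwrap()
}

fn main() {
    let counts = run_tenants(4, 1000);
    assert_eq!(counts.len(), 4);
    assert!(counts.values().all(|&n| n == 1000)); // no lost updates
    println!("4 tenants, 1000 writes each, no data races");
}
```

The point is not the map itself but the failure mode that is absent: in a language without these guarantees, forgetting the lock compiles and races at runtime; here it does not compile at all.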

Why is Kaphera Cloud built on Kubernetes?

The same reasoning that makes Kaphera open source makes it Kubernetes-native: sovereign infrastructure should not depend on any single vendor’s runtime, and it should be built from composable, purpose-built components rather than monolithic systems.

Kubernetes is the open, vendor-neutral substrate for cloud infrastructure. It runs on every major cloud provider, on sovereign European clouds, and on bare metal. An organisation running Kaphera on AWS today can move to Scaleway, OVHcloud, or their own hardware without changing a line of configuration. That portability is not a convenience. For organisations choosing sovereign infrastructure specifically to avoid platform dependency, it is a requirement.

The operator pattern is where the philosophy becomes architecture. Each Kaphera controller manages exactly one resource kind, communicates with other controllers only through the Kubernetes API, and computes its desired state as a pure function of its inputs. The principle is the same as UNIX tools: small, composable, purpose-built components communicating through a shared interface. EDC Connectors are multi-component systems (control plane, identity wallet, credential issuer, data plane, database, secrets engine, messaging), each with its own lifecycle and failure modes. The operator pattern is what makes it possible to manage hundreds of these deployments with the same reliability as one: the controller continuously reconciles actual state toward declared intent, detects drift, and corrects it.
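Stripped of the Kubernetes machinery, one reconcile step reduces to a small pure computation: derive the desired state from the declared spec, compare it with what is observed, and emit the correction. A minimal sketch, where the types and field names are illustrative rather than Kaphera's actual operator code:

```rust
// Illustrative sketch of level-triggered reconciliation. `ConnectorSpec`
// and `ObservedState` are hypothetical stand-ins for a real custom resource.
#[derive(Debug, Clone, PartialEq)]
struct ConnectorSpec {
    replicas: u32,
    image: String,
}

#[derive(Debug, Clone, PartialEq)]
struct ObservedState {
    replicas: u32,
    image: String,
}

#[derive(Debug, PartialEq)]
enum Action {
    Nothing,
    Apply(ObservedState), // declare the target state; the platform converges to it
}

// Desired state is a pure function of the spec: no hidden inputs, no side
// effects, so the same spec always yields the same result.
fn desired(spec: &ConnectorSpec) -> ObservedState {
    ObservedState { replicas: spec.replicas, image: spec.image.clone() }
}

// One reconcile step: compare desired with observed and emit the correction.
fn reconcile(spec: &ConnectorSpec, observed: &ObservedState) -> Action {
    let want = desired(spec);
    if *observed == want { Action::Nothing } else { Action::Apply(want) }
}

fn main() {
    let spec = ConnectorSpec { replicas: 2, image: "edc:1.0".into() };
    // Drift: someone scaled the deployment down by hand.
    let observed = ObservedState { replicas: 1, image: "edc:1.0".into() };
    assert_eq!(
        reconcile(&spec, &observed),
        Action::Apply(ObservedState { replicas: 2, image: "edc:1.0".into() })
    );
    // Once converged, reconcile is a no-op.
    assert_eq!(reconcile(&spec, &desired(&spec)), Action::Nothing);
    println!("drift corrected");
}
```

Because the step is pure and idempotent, running it continuously is safe: whether drift comes from a crash, a manual change, or a failed apply, the next iteration converges back to the declared intent.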

Being cloud-native also means Kaphera composes with the infrastructure an organisation already operates. Observability, RBAC, network policy, GitOps, secrets management: these are capabilities of the platform, not features Kaphera needs to reinvent. An organisation running Kubernetes gets Kaphera as a natural extension of their existing infrastructure rather than a foreign system beside it. For multi-tenancy, Kubernetes provides the isolation primitives (namespace boundaries, resource quotas, network policies) that enforce tenant separation at the platform level, not just in application code.

What does it cost?

| Deployment | Price | Best for |
| --- | --- | --- |
| Managed shared | €XX,XX / participation / month | Most economical entry point; completely isolated resources on Kaphera-operated European cloud infrastructure |
| Fully dedicated | €XXX,XX / month | Multiple participations or stronger resource and isolation requirements |
| Enterprise | Undisclosed; five-figure monthly range | Custom compliance, integration, or infrastructure requirements |
| Bring-your-own-cloud | €XXXXXX,XX / month | Platform managed by Kaphera on the client's own infrastructure |

The managed shared, fully dedicated, and enterprise options are available today to both data space operators and participants; bring-your-own-cloud is on the roadmap following the initial managed release. Contact the team for enterprise and BYOC arrangements.

Can I run the operators on my own infrastructure?

Yes. The EDC operator is Apache 2.0 licensed and runs on any Kubernetes cluster with no restrictions. The Kaphera Cloud operator and platform server are source-available: organisations can run them internally without restriction. For organisations that want Kaphera to manage operations on their own cloud account, bring-your-own-cloud (BYOC) is on the roadmap following the initial managed cloud release. BYOC organisations can move between Kaphera-managed and self-managed at any time. The source-available licence ensures they are always in control. See the roadmap question below for the full picture.

What does the setup process look like?

For participants, the experience is designed so that the platform side of the process, from login to data exchange, takes under a minute. Create or join an organisation, browse available data spaces, and join with a single click. The platform handles identity establishment, credential issuance, and connector provisioning automatically, with a real-time status tracker showing progress. Data space-specific onboarding validation requirements vary by data space and sit outside the platform’s control, but everything Kaphera is responsible for is immediate.
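The sequence that the real-time status tracker walks through can be pictured as a simple state machine. The state names below are illustrative labels for the three platform-side steps, not the platform's actual API:

```rust
/// Hypothetical model of the onboarding status tracker: after the
/// one-click join, the platform advances a participant through these
/// states until the connector is ready.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Onboarding {
    IdentityEstablishment,
    CredentialIssuance,
    ConnectorProvisioning,
    Ready,
}

impl Onboarding {
    /// Each completed platform step advances to the next state.
    fn next(self) -> Onboarding {
        use Onboarding::*;
        match self {
            IdentityEstablishment => CredentialIssuance,
            CredentialIssuance => ConnectorProvisioning,
            ConnectorProvisioning => Ready,
            Ready => Ready, // terminal state
        }
    }
}

fn main() {
    let mut state = Onboarding::IdentityEstablishment;
    while state != Onboarding::Ready {
        println!("status: {state:?}");
        state = state.next();
    }
    assert_eq!(state, Onboarding::Ready);
    println!("status: {state:?}");
}
```

Data space-specific validation happens outside this machine, which is why the platform-side states complete in under a minute while external onboarding approval can take longer.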

An organisation’s digital identity on Kaphera Cloud has two layers. The first is the connector’s cryptographic identity: a set of verifiable credentials that represent the organisation within the data space, establishing its membership, its permissions, and its authority to negotiate contracts and exchange data. Kaphera manages the issuance and full lifecycle of these credentials. The second layer is human access management: who within the organisation can configure the connector, view contracts, approve data offers, and operate the platform. Kaphera supports standard identity providers via OAuth 2.0 and SAML, allowing organisations to use their existing directory services rather than managing a separate set of platform credentials. Together, these two layers give both the connector and the people operating it a clear, governed identity within the ecosystem.

From there, choose a deployment type based on your requirements. The managed shared tier runs on Kaphera-operated European cloud infrastructure with completely isolated resources on a shared cluster. It is the most economical option and is priced per data space participation per month. The fully dedicated tier provisions a set of isolated, dedicated instances for organisations managing multiple participations or requiring stronger resource guarantees, billed per resource used per month. Enterprise customisations are available for organisations with specific compliance, integration, or infrastructure requirements.

On a managed deployment, a first connector is running within minutes of completing the join flow. The same flow is available through the web console, the kaphera CLI, or the Terraform provider.

Is Kaphera Cloud production-ready?

Yes. Kaphera has operated EDC Connectors in production through its role as operator of the Mobility Data Space, and the MDS connector profile has been running on the platform since the soft launch earlier in 2026, now serving over 150 connectors. Tractus-X GA is the platform’s first full public release, backed by that production track record.


Internal FAQs

Customer Needs and Total Addressable Market

Who are the four personas, and what does each of them need?

Kaphera Cloud is built around four distinct personas: one Builder persona reached through the open-source EDC operator and developer experience; two platform personas, Governance Authority and Participant, reached through the managed platform; and White-label Partners who license the Kaphera Cloud operator and server to offer managed connector services under their own brand.

The Builder is a platform engineer at a systems integrator like Think-it, Metaform, or Capgemini. They are good at their job. They have shipped Kubernetes infrastructure before, they understand cloud-native patterns, and they know how to operate databases and secrets management in production. Their manager has handed them a new project: get a client connected to Catena-X, or MDS, or another IDSA-compatible data space, and make it work reliably.

They start reading. The EDC Connector is a complex Java codebase with documentation that explains what things are, but rarely how to actually run them end-to-end. The identity stack requires wiring together IdentityHub and CredentialIssuer, each with their own configuration surface. Vault needs to be set up and integrated. The data space they are targeting has its own credential specifications and trust anchors that are not well documented outside of closed consortium forums. Each of these pieces is a discipline on its own, and none of them come pre-integrated.

Six weeks later, they have something running in a staging environment. It has taken far longer than the estimate they gave their manager. They are now the de facto EDC expert at their company, which was never the plan. The next client project will require most of this work again from scratch, because nothing they built is easily portable. And somewhere in the background, the regulatory deadline that drove the project in the first place is getting closer.

Their job is to ship sovereign data infrastructure reliably, not to become an expert in every layer of the stack to do it. They need something they can deploy in a day, operate without a manual, and hand off to a colleague without a two-hour knowledge transfer. When they find it, they will use it on every subsequent project. And if they work at a systems integrator, that means every client they onboard after them.

The Governance Authority is the organisation serving as the governing entity within a data space: it sets the rules, defines the identity model, and controls who can participate. They need to register their data space profile, set the identity model and onboarding rules that participants must satisfy, manage credential issuance, and control who can discover and join. Regulatory deadlines, consortium commitments, or industry mandates are typically the forcing function. They care about compliance posture, auditability, and operational stability above all else.

The Participant is an organisation joining a data space, often pulled in by a customer, a supply chain mandate, or a regulatory deadline. They want the onboarding process to be fast and the operational overhead to be minimal. They are not infrastructure teams; they want to participate, not operate. Within this persona there are three deployment profiles that reflect different organisational realities:

  • Managed (Shared): SMEs and mid-tier suppliers who need data space participation at a fraction of self-hosting cost, with zero infrastructure expertise. The most economical option, priced per participation per month.
  • Dedicated: Mid-market enterprises managing multiple participations or requiring full resource isolation, dedicated SLAs, and predictable performance.
  • BYOC (Bring Your Own Cloud): Large enterprises with internal teams that have been trying to make EDC work. Kaphera replaces that engineering effort with a proven platform running on their own infrastructure. Because the Kaphera Cloud operator is source-available, they can move between Kaphera-managed and self-managed at any time, always in control.

The White-label Partner is a systems integrator or platform company that wants to offer managed connector services under their own brand. They license the Kaphera Cloud operator and server rather than building a platform from scratch. Their customers see their brand; Kaphera provides the infrastructure and operational foundation underneath.

Governance Authorities, Participants, and White-label Partners use the same underlying platform. The experiences are surfaced through feature flagging, so the operational infrastructure is shared while each persona sees only what is relevant to their role. This means Kaphera’s cost base scales efficiently across all sides of any data space it serves.

How large is the market?

The market is large and accelerating. The global data governance and AI readiness market was €5.52 billion in 2025 and is growing at 26% CAGR, reaching an estimated €11.25 billion by 2028 (Global Growth Insights). Within that, the total addressable market across EU and US target industries is €110 billion. The serviceable addressable market, focused on enterprise data management and sharing in automotive, manufacturing, health, and energy, is €3 billion. Kaphera’s serviceable obtainable market, targeting its ICP in EU automotive and manufacturing, is €450 million, representing approximately 15% of that serviceable segment.

The near-term addressable market is the community of systems integrators and enterprises actively participating in or onboarding into IDSA-compatible data spaces. Catena-X alone targets over 200 OEMs and tier-1 suppliers in the automotive supply chain, most of which will need at least one connector deployment. MDS participants span mobility sector companies, public institutions, and fleet operators across Germany and, increasingly, Europe. The EU Data Act, generally applicable since 12 September 2025 with the cross-sector access-by-design milestone arriving in September 2026, is the forcing function that converts latent demand into funded projects. Each connector deployed by a systems integrator on behalf of a client is a deployment Kaphera can serve, and integrators working across multiple clients represent multiplied volume.

X,XXX connector deployments is a meaningful near-term milestone, not a ceiling. Every new data space, every new regulated sector facing Digital Product Passport requirements, every enterprise that joins Catena-X represents additional demand. The platform strategy is explicitly designed to scale with network effects: each deployed connector increases the value of the network, and each new data space profile Kaphera supports multiplies the addressable base. At X,XXX deployments, the platform generates approximately €XXX,XXX in monthly revenue against approximately €XX,XXX in cloud infrastructure costs, producing a platform surplus of roughly €XXX,XXX per month before HR costs. With a current monthly HR cost of €XX,XXX, total monthly surplus at that scale is approximately €XX,XXX. The model shows profitability is reachable well before X,XXX deployments, given the low shared-infrastructure cost per connector.

What filters the total addressable market down to realistic near-term targets?

Organisations with no current driver to join a data space are not near-term customers. The initial focus is on systems integrators already executing data space projects, as they have immediate deployment needs and multiplied reach across their own client portfolios. Direct enterprise deals follow as organisations facing Data Act deadlines begin procurement. The MDS soft launch, with over 150 connectors already running, provides the reference footprint that makes those conversations credible.


Go-to-Market Strategy and Risk

What is the go-to-market strategy?

The GTM runs three parallel channels, which together cover the four personas: Builders through open source, Governors and Participants through enterprise sales, and White-label Partners through partnerships.

The Builder channel is driven by the open-source release and developer experience. The EDC operator under Apache 2.0, the Digital Twin Registry under GPL, the public documentation, and the kaphera CLI are the acquisition mechanism. Builders find Kaphera through the open-source community, evaluate it by running it, and convert to the managed platform when they want someone else to operate it. Think-it naturally amplifies this channel: by using Kaphera in its own implementations and recommending it to clients, Think-it’s engineers become a direct source of Builder adoption across the systems integrator ecosystem.

The enterprise channel is Mehemed’s direct responsibility. Target organisations are enterprises facing EU Data Act deadlines, Catena-X supply chain mandates, or other regulatory forcing functions that make data space participation a funded priority rather than an exploratory one. Beyond existing public data spaces, the channel will also investigate emerging adjacent markets, including Ireland given Kaphera’s incorporation there, where nascent data space ecosystems represent early-mover opportunities. Private organisations looking to govern internal data spaces, such as consumer electronics companies managing supply chain transparency, are a parallel target. These deals are larger, slower, and relationship-driven. The Governor and Participant platform experiences, combined with the dedicated and enterprise deployment tiers, are the primary commercial offering for this channel.

The partnerships channel sits with Malte Gasseling at Think-it. This covers ecosystem relationships with Catena-X directly, co-selling arrangements with systems integrators, and engagement with adjacent players in the data space market, including Orbiter, Nexyo, and Aruba. The goal is to establish Kaphera as the reference infrastructure layer across the ecosystem rather than winning deals in isolation. Partnerships that unlock distribution or certification pathways are prioritised over those that are purely commercial.

What are the primary risks for each channel, and how are they managed?

The Builder channel’s main risk is conversion: high open-source adoption does not automatically translate into paying customers. Builders who self-host successfully have no reason to upgrade. The mitigation is product: the managed platform needs to be demonstrably easier to operate than the self-hosted alternative, and the pricing needs to sit well below the cost of doing it yourself. The current unit economics support this, and the gap widens as platform maturity increases.

The enterprise channel’s main risk is sales cycle length. Enterprise deals in regulated industries, particularly automotive, move slowly. A single deal lost to timeline can shift quarterly projections significantly at current scale. The mitigation is pipeline diversity: Mehemed’s focus on multiple concurrent conversations across different sectors reduces dependence on any single deal. The MDS production track record and 150-plus live connectors are the credibility anchor that shortens evaluation cycles.

The partnerships channel’s main risk is misaligned incentives. Adjacent players like Orbiter, Nexyo, and Aruba are potential partners in some contexts and potential competitors in others. Managing those relationships requires clarity on where Kaphera’s scope ends and theirs begins. The open-source strategy helps here: Kaphera’s infrastructure layer is explicitly not a lock-in vector, which makes co-existence and co-selling easier to frame. Malte’s position at Think-it, which has existing relationships across the ecosystem, gives Kaphera indirect access to those conversations without requiring Kaphera to carry them directly at this stage of the company.

Who are the main competitors and how is Kaphera positioned against them?

The managed EDC connector market includes both specialist players and large established providers.

Among specialists, Sovity is the most established, having certified their Connector-as-a-Service as an Enablement Service Provider in Catena-X. Orbiter operated MDS before Kaphera and was the prior incumbent. Nexyo is active in the space with a focus on application-layer tooling. DASS-X, a BASF Group spin-off with TISAX Level 3 certification, targets the Catena-X and Chem-X application layer with an end-to-end suite covering onboarding, use-case execution, and Manufacturing-X integration; their differentiation is application depth rather than infrastructure flexibility.

Among large established players, T-Systems (via Deutsche Telekom’s Data Intelligence Hub) holds the first IDSA-certified connector and became the largest Catena-X Enablement Service Provider as of April 2025, qualified across four roles in the ecosystem. Their offering runs as a proprietary managed service. NTT Data brings global systems integration depth and active EDC research, particularly in large enterprise and telecom segments, and is building toward global data exchange infrastructure in partnership with T-Systems.

Kaphera’s differentiation is structural and goes beyond any single dimension.

No competitor has open-sourced their EDC operational layer. By releasing the EDC operator under Apache 2.0 and the Digital Twin Registry under GPL, Kaphera becomes the reference implementation for the ecosystem, one that any systems integrator can adopt, audit, and deploy independently. The platform layer is source-available rather than closed, giving customers full visibility without creating the conditions for a competing proprietary fork. The managed platform earns its business through operational excellence, not lock-in. This is a moat that is difficult to replicate without undoing a proprietary business model.

Production track record and ecosystem proximity compound this. Kaphera and Think-it have operated MDS connectors in production across more than 150 active deployments, giving the team operational knowledge that a new entrant cannot replicate quickly. Think-it holds EDC core committer status in the Eclipse project, meaning the team understands and contributes to where the specification goes rather than reacting to it after the fact.

The governance structure is the third differentiator, and for sovereignty-sensitive customers it is the most durable. Kaphera’s steward-ownership model makes a structural promise that large competitors backed by major telcos or multinationals cannot make: the platform cannot be acquired by a hyperscaler or third party and repurposed. For organisations choosing sovereign infrastructure specifically to avoid that outcome, the governance model is a material consideration, not just a positioning statement.


Economics and P&L

What are the per-connector unit economics?

Shared tier (70% of projected deployments): the shared infrastructure serving XXX tenants costs approximately €XXX/month in total, a blended per-tenant infrastructure cost well under €X/month. At €XX/month revenue per shared deployment (control plane + data plane + database + secrets), margins on the shared tier at scale are very high.

Dedicated tier (30% of projected deployments): infrastructure cost per dedicated connector is approximately €XXX/month for a full stack (control plane, data plane, database, secrets management). Revenue on the same full stack is €XXX/month. Gross margin on the dedicated tier is approximately XX%.

These figures use an average of AWS, DigitalOcean, and Scaleway pricing. Scaleway offers the lowest infrastructure cost and is the preferred primary provider for European deployments on both sovereignty and cost grounds.

What is the rationale for the shared-tier price point of €20/connector/month for the control plane?

Early feedback from Builders was clear: ~€100/connector/month was difficult to justify per project. The shared tier is structured to break that barrier. At €20 for the control plane, a Builder deploying connectors for several clients is looking at a total platform cost per client that is defensible as an operational line item rather than a capital investment. The margin is preserved because shared infrastructure cost per tenant is negligible at any meaningful volume.

What is the break-even deployment count?

The fixed shared infrastructure cost is approximately €XXX/month. At a blended revenue per shared deployment of approximately €XX/month, break-even on infrastructure alone is reached at approximately XX shared deployments. Covering HR costs in full (approximately €XX,XXX/month for the current team of six) requires approximately XXX billable deployments at blended average revenue. The platform is not expected to cover the full HR base at launch; current runway covers the period while the platform scales toward that threshold.
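The break-even arithmetic above is a straightforward division. A minimal sketch follows; note that the euro figures in this document are redacted placeholders, so the inputs below are invented purely to show the shape of the calculation, not Kaphera's actual numbers.

```python
# Hedged illustration of the break-even formula described above.
# All euro values here are invented; the real figures are redacted.
import math

def break_even_deployments(fixed_monthly_cost: float,
                           revenue_per_deployment: float) -> int:
    """Smallest deployment count whose revenue covers a fixed monthly cost."""
    return math.ceil(fixed_monthly_cost / revenue_per_deployment)

# Example with hypothetical inputs:
infra_breakeven = break_even_deployments(
    fixed_monthly_cost=500,      # hypothetical shared infrastructure, €/month
    revenue_per_deployment=25,   # hypothetical blended revenue, €/month
)
print(infra_breakeven)  # 20 deployments with these invented inputs
```

The same function applies to the HR threshold: substitute the monthly HR cost as the fixed cost and the blended average revenue per deployment.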

What is the pricing rationale for the dedicated tier?

Dedicated connectors are priced to reflect the isolation and operational guarantee they carry. They are the right tier for organisations with data sensitivity requirements, regulatory audit expectations, or SLA commitments that make shared infrastructure inappropriate. At €XXX/month for a full dedicated stack, the price is consistent with what enterprise teams currently pay for comparable managed infrastructure, and well below the cost of running a dedicated connector environment internally.


Licensing and Open-Source Strategy

Why open-source the EDC operator under Apache 2.0?

The EDC operator is the hardest and most time-consuming piece for any Builder to produce from scratch: wiring up control plane, identity wallet, credential issuer, and data plane into a coherent lifecycle. Open-sourcing it under the most permissive licence removes the entry barrier completely and is consistent with Kaphera’s belief that the dataspace connectivity layer should not itself be a lock-in vector. The commercial moat is not the EDC operator code; it is the managed platform, the operational expertise built through production deployments, and the data space-specific profiles and integrations that take months to build correctly.

Why open-source the Kaphera Digital Twin Registry under GPL?

The Kaphera DTR is architecturally novel: a multi-tenant, Rust-native, AAS Part 2-compliant registry built for the data space ecosystem, designed from the ground up to serve many participants from a shared process with strict tenant isolation and fine-grained access control. GPL is chosen over Apache 2.0 deliberately: it requires distributed modifications to remain open, which keeps derivative implementations in the community rather than disappearing into proprietary products. Anyone can run it; they just cannot quietly close it.
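The multi-tenant design described above can be illustrated conceptually. The sketch below is in Python for brevity (the actual DTR is Rust-native), and every name in it is hypothetical; it shows only the core idea of serving many participants from one shared process while scoping every access path to the caller's tenant.

```python
# Conceptual sketch of a multi-tenant registry: one shared process and
# store, with strict per-tenant isolation enforced on every lookup.
# All class, method, and field names here are hypothetical.

class DigitalTwinRegistry:
    def __init__(self):
        # One store keyed by tenant: shared infrastructure, isolated data.
        self._shells: dict[str, dict[str, dict]] = {}

    def register(self, tenant_id: str, shell_id: str, descriptor: dict) -> None:
        self._shells.setdefault(tenant_id, {})[shell_id] = descriptor

    def lookup(self, tenant_id: str, shell_id: str) -> dict:
        # Tenant scoping is enforced on the access path itself, not bolted
        # on afterwards: a caller can only ever see its own tenant's shells.
        tenant_shells = self._shells.get(tenant_id, {})
        if shell_id not in tenant_shells:
            raise KeyError("shell not found for this tenant")
        return tenant_shells[shell_id]

registry = DigitalTwinRegistry()
registry.register("tenant-a", "shell-1", {"asset": "gearbox"})
print(registry.lookup("tenant-a", "shell-1"))  # {'asset': 'gearbox'}
# registry.lookup("tenant-b", "shell-1") raises KeyError: isolation holds.
```

Fine-grained access control in the real registry layers on top of this tenant scoping; the sketch shows only the isolation boundary itself.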

Why source-available for the Kaphera Cloud operator and platform server?

Source-available licensing lets any organisation read, audit, and run the platform internally. It prevents a cloud provider or competitor from forking the platform and offering it as a competing managed service without a commercial agreement. This strikes the balance sovereignty-sensitive customers need: full visibility into the code that manages their infrastructure, without creating the conditions for a proprietary fork that would undermine the ecosystem. Organisations that want to offer managed connector services commercially (the white-label use case) can do so under a licence agreement with Kaphera.

Does open-sourcing the EDC operator and Digital Twin Registry risk commoditising Kaphera’s own service?

The open components lower the barrier for self-hosting the EDC and registry layers. They do not eliminate the reasons most organisations will choose the managed platform: they do not want to operate Kubernetes clusters, manage highly available secrets infrastructure, maintain certificate chains, handle data space-specific credential updates as specifications evolve, or carry operational risk. The Kaphera Cloud operator and server (the platform layer that ties all of this together) are source-available, not open source, which preserves Kaphera’s commercial position while keeping the code auditable. Systems integrators working under client deadlines want deployment speed, not operational burden, and the EDC operator and DTR serve that goal alongside the managed service.


Think-it Partnership and Krypton Transition

What is Krypton, and how does it relate to Kaphera’s reported production deployments?

Krypton is the existing managed connector platform built and operated by Think-it, currently running MDS CaaS in production across 70-plus deployed connectors. It is the direct predecessor to Kaphera Cloud and the source of the production track record cited in investor reporting. To reflect the operational reality accurately, AWS account ownership for the Krypton infrastructure is being transferred to Kaphera. This means Kaphera formally owns and can report on the system, while Think-it continues to provide day-to-day operational support during the transition period.

What does the Think-it and Kaphera agreement cover during the transition?

The arrangement between Think-it and Kaphera covers two parallel workstreams. The first is platform migration: AWS admin access is granted to Kaphera, and the MDS CaaS infrastructure, including wallet and issuance flow, is progressively migrated to DCP-compatible architecture. The second is continued operations: Think-it’s engineering team, led by Mostafa as Engineering Manager with platform engineers Khalil and Bacem from March onwards, maintains operational responsibility for MDS CaaS and Future Insights accounts through to the Kaphera v1.0 migration. This is a part-time arrangement running from February to June 2026.

What is the transition timeline from Krypton to Kaphera Cloud?

The strategy is deliberate rather than rushed. MDS migrates to Kaphera v1.0 as soon as the platform is solid, at which point Krypton goes dark in early 2027. The sequencing is intentional: MDS is a critical client and a reputational anchor, and the migration should happen when Kaphera Cloud is genuinely ready to serve it without disruption, not on an artificially compressed timeline. The AWS ownership transfer is the first step, giving Kaphera operational standing and investor reportability now while the platform matures. The full cutover follows once MDS-specific credential profiles, trust anchor integration, and the onboarding flow are production-grade. Kaphera Cloud v1.0 supports MDS first, Tractus-X second.

Why maintain Krypton at all rather than shutting it down immediately?

Krypton serves a live, paying client base in production. Shutting it down before Kaphera Cloud is ready to absorb those deployments would mean either disrupting MDS participants or rushing the Kaphera v1.0 release into a state it is not ready for. Neither is acceptable. Krypton buys the team the space to build the platform correctly, validate it thoroughly, and migrate clients on a schedule that protects the relationship rather than straining it. It also preserves the production track record that underpins investor reporting and commercial credibility during the fundraising period. Krypton goes dark when Kaphera Cloud is ready, not before.

What are the risks in this transition and how are they managed?

The primary risk is operational continuity for MDS during the migration window. The mitigation is the parallel-run approach: Think-it’s team maintains the Krypton environment until Kaphera Cloud is fully validated, with a DNS-based cutover and a rollback plan preserving the ability to revert to Think-it infrastructure within minutes if needed. The secondary risk is IP and repository ownership of the Krypton codebase. Since Krypton is intended to be open-sourced into the Kaphera operator anyway, the IP transition is treated as a deferred but non-contentious item, with the krypton-* GitHub repositories remaining under Think-it access in the interim.


Dependencies

What certifications are required to operate in Catena-X?

Two distinct requirements gate Catena-X operations, and they split naturally across Kaphera and Think-it.

TISAX Level 2 is mandatory for all certified Enablement Service Providers in Catena-X, effective July 2025 with a 12-month grace period. It requires a third-party audit of organisational security posture and substantially overlaps with ISO 27001, where work is already underway. This is Kaphera’s obligation to fulfil at company level.

CX-0018 is the Dataspace Connectivity conformance standard. For an Enablement Service Provider offering a Connector-as-a-Service, it requires a formal audit against DSP protocol compliance, BPN validation, credential service integration, and policy constraint enforcement as defined in CX-0152. The three accredited conformity assessment bodies are TÜV SÜD, Deloitte, and TÜV Rheinland. Kaphera’s EDC deployments are technically close to conformance, but the audit is formal and non-trivial. Given that CX-0018 conformance requires deep EDC implementation knowledge rather than infrastructure operations, Think-it holds the CX-0018 path, consistent with the broader division of responsibility between the two organisations.

When does Catena-X support the latest EDC Connector version with multi-tenancy?

The Tractus-X connector GA is gated on Catena-X formally supporting the latest EDC Connector version in its production environment, expected in autumn 2026. Kaphera’s discovery work runs ahead of that gate to ensure the Tractus-X profile is ready to deploy the moment it opens. This is a hard external dependency; Kaphera cannot accelerate it, but the timing is what drives the press release launch window.

What happens if Catena-X’s EDC adoption slips?

MDS GA is not gated on Catena-X and launched first, in June 2026. The platform, operator, toolchain, and open-source releases are all in place regardless of the Catena-X timeline. A slip delays the Tractus-X connector profile specifically, not the platform itself.

What are the critical third-party components the platform depends on?

The platform builds on upstream Eclipse Foundation components for the EDC Connector control plane, identity management, and credential issuance. These are not Kaphera forks; they are upstream dependencies managed by the operator. Any breaking change in an upstream Eclipse component could affect the platform. Kaphera’s mitigation is to track upstream closely, contribute fixes where possible, and maintain the operator’s abstraction layer so that component upgrades do not surface to end users.

How dependent is growth on Think-it as a distribution channel?

Think-it is the primary near-term channel for Builder adoption. As a systems integrator with active data space projects, Think-it’s engineers are the prototype Builder persona. However, the platform is designed to be self-serve for any Builder at any organisation. The open-source operator and public documentation are the mechanism for reaching Builders at Metaform, CMW, Capgemini, PwC, and others. Think-it provides the initial reference deployments and credibility; the GTM beyond Think-it relies on the open-source community and direct outreach to the systems integrator ecosystem.


Feasibility

What are the hardest engineering problems between now and launch?

The core platform, covering the full EDC Connector stack, identity management, messaging, secrets, and database, was targeted for completion by end of Q1 2026. MDS-specific credential profiles, trust anchor integration, and the onboarding flow are the primary work for MDS GA in June 2026. The Tractus-X profile follows in autumn 2026. The engineering challenges are well-scoped: the upstream components exist, the architecture is defined, and Kaphera has direct operational experience with the EDC Connector in production. Timeline compression is the main risk.

What is the team size and cost?

The current team is six people: Federico Dionisi as CPO, Nina Juresic as Design Director, and one Platform Engineer on the product and engineering side; Mehemed Bougsea as CEO focused on enterprise sales, and one Business Developer focused on Builder acquisition and developer community growth on the growth side; Felix Kreimer as Finance Ops. Malte Gasseling at Think-it covers partnerships, including ecosystem relationships with Catena-X and adjacent players in the data space market. Total monthly HR cost is approximately €XX,XXX. This is a deliberately lean team operating with systematic leverage: automation, open-source tooling, and the SaaS infrastructure model. Headcount will expand post-funding; the €1M raise in progress is sized to extend runway and add engineering capacity for the Tractus-X connector and platform hardening.

What does the roadmap beyond autumn 2026 look like?

Bring-your-own-cloud (BYOC) is the next major capability. Enterprises or large integrators who want Kaphera’s managed service running on their own cloud infrastructure can host the platform themselves while Kaphera handles system operations and manages the operators remotely. Because the Kaphera Cloud operator is source-available, BYOC organisations can move between Kaphera-managed and self-managed at any time. It gives organisations the sovereignty guarantees of self-hosting with the operational simplicity of a managed service, without lock-in.

Following BYOC, the platform roadmap covers additional infrastructure capabilities and the application layer. Compute-to-data will allow algorithms to be executed against data in place, without the data leaving the participant’s environment, which is the foundation for privacy-preserving analytics across organisational boundaries. A custom multi-tenant data plane is planned as the platform scales and expands to additional frameworks beyond EDC, such as FIWARE.

The application layer is the longer-term horizon. Starting from the standard applications defined by Catena-X, end users will be able to access purpose-built applications that exchange data through Kaphera’s sovereign infrastructure without any awareness of the connector mechanics underneath. The platform strategy is the foundation; every subsequent initiative builds on it.

Why build this now, specifically?

Multi-tenancy in the new EDC Connector version is the technical unlock that makes CaaS economically viable. Before this release, offering EDC as a managed service required running isolated infrastructure per tenant, pushing per-connector costs to levels that were hard to justify in most projects. The new version changes that calculus. Every cloud provider can theoretically offer EDC as a service now. Kaphera’s move is to open-source the EDC operational layer and release the platform layer as source-available, establish the reference implementation in the market, and let the managed platform earn its business through operational excellence rather than proprietary lock-in.