Builder
The customer-segment archetype for engineers whose job is to make sovereign data exchange work for someone else, either as a system integrator deploying it for clients, or as an application developer integrating it into their own product. Both sub-types are reached through the same open-source operator and developer experience, and neither wants to become an EDC specialist.
Who is the customer?
The Builder is any engineer whose job is to make sovereign data exchange work, whether by deploying it for others or integrating it into their own product. Two distinct sub-types share this profile.
The SI Builder is a platform engineer, DevOps engineer, or solutions architect working at a systems integrator (firms like Think-it, Metaform, or Capgemini) or embedded inside an enterprise IT team tasked with delivering data infrastructure for a specific project or mandate. They are technically competent and experienced: they have shipped Kubernetes infrastructure before, they understand cloud-native patterns, and they know how to operate secrets management, database clusters, and identity systems in production. They read changelogs. They care about how things actually work, not just what they are supposed to do.
They are typically mid-to-senior level, working under a project manager or engineering lead who has handed them a scoped deliverable: connect the company to Catena-X, or MDS, or another IDSA-compatible dataspace, and make it work reliably before a regulatory or client deadline. The deadline is real. The scope of the task, as they discover it, is not what they were told.
They are not EDC specialists. They were not hired to be EDC specialists, and they have no particular desire to become one. Their professional identity is in platform engineering and reliable infrastructure delivery, not in mastering the idiosyncrasies of a specific Java middleware stack.
The Application Builder is a backend or full-stack engineer at a product company, a funded scale-up or established SaaS business building in a domain such as sustainability, compliance, traceability, circular economy, battery analytics, or supply chain intelligence. Their company has decided to integrate with a dataspace as a product feature or business requirement. They are not deploying EDC for clients; they are integrating EDC connectivity into a product their company owns.
Two distinct integration depths characterise the Application Builder. The first is participating directly: the company needs to publish their own data assets (carbon footprints, compliance certificates, material passports, lifecycle events) or consume other participants’ data to enrich their product. One connector. One integration. Maintained as a product feature that should require minimal ongoing attention.
The second is provisioning for their own customers: the company’s product serves organisations that each need to be a dataspace participant. They provision connectors programmatically for their customer base via the control plane management API: each newly onboarded customer gets a connector, their existing API registered as a data asset, and the ability to exchange data with their supply chain partners. Kaphera’s programmatic provisioning is what makes this viable at any meaningful scale.
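As a sketch of what that per-customer flow looks like from the product’s backend, the snippet below builds the two calls each onboarding triggers: provision a connector, then register the customer’s existing API as a data asset. The endpoint paths and JSON field names here are illustrative assumptions, not Kaphera’s documented API surface.

```python
# Hypothetical sketch of per-customer connector provisioning against a
# control plane management API. All endpoint paths and field names are
# assumptions for illustration; check the real API reference before use.
import json
import urllib.request


def connector_payload(customer_id: str, profile: str = "mds") -> dict:
    """Request body to provision one managed connector for one end customer."""
    return {
        "name": f"connector-{customer_id}",  # one connector per customer
        "dataspaceProfile": profile,         # e.g. "mds" or "tractus-x"
    }


def asset_payload(customer_id: str, api_base_url: str) -> dict:
    """Request body registering the customer's existing API as a data asset."""
    return {
        "assetId": f"{customer_id}-primary-api",
        "dataAddress": {
            "type": "HttpData",
            "baseUrl": api_base_url,         # the API the customer already runs
        },
    }


def _post(url: str, token: str, body: dict) -> dict:
    """Minimal JSON POST helper (stdlib only)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def onboard_customer(control_plane: str, token: str,
                     customer_id: str, api_base_url: str) -> str:
    """Provision a connector, then register the customer's API as an asset."""
    connector = _post(f"{control_plane}/connectors", token,
                      connector_payload(customer_id))
    _post(f"{control_plane}/connectors/{connector['id']}/assets", token,
          asset_payload(customer_id, api_base_url))
    return connector["id"]
```

Keeping the payload builders pure makes the onboarding logic testable without a live control plane, which matters once this runs inside a product’s signup workflow.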
The Application Builder is comfortable with REST APIs, can navigate OpenAPI documentation, and writes integration code. They do not run Kubernetes clusters and have no desire to become EDC specialists. Their professional identity is in the domain product they build, not in the infrastructure underneath it.
What is their problem?
For the SI Builder, the problem begins the moment they start reading the EDC Connector documentation. The documentation describes what components exist and what they are for, but rarely how to run them end-to-end in a production-grade configuration. The identity stack requires wiring together IdentityHub and CredentialIssuer, each with their own configuration surface and their own failure modes. Vault needs to be provisioned and integrated separately. The target dataspace has its own credential specifications and trust anchors that are either underdocumented or documented only in closed consortium forums the Builder may not have access to. None of these pieces come pre-integrated, and each one represents a discipline in its own right.
Six weeks later, sometimes eight or ten, the Builder has something running in a staging environment. It has taken significantly longer than the estimate they gave at the start of the project. They have become the de facto EDC expert at their organisation, which was not the plan and creates a new problem: they now own this expertise indefinitely, because no one else has it. The next client project that involves a dataspace connector will route back to them, and they will have to rebuild most of the same infrastructure from scratch, because nothing they assembled is cleanly portable. Each implementation is bespoke. None of it scales.
In the background, the regulatory deadline or client commitment that drove the project in the first place is not pausing while they debug identity configuration.
For the Application Builder, the problem starts when the engineer opens the EDC documentation and finds that what they assumed was an API integration task is actually an infrastructure and protocol problem. The documentation describes components, not how to build a working integration from the perspective of a backend developer. Registering an existing API as a data asset in a dataspace, configuring the right access policies, and implementing the consumer-side pattern (catalog request, contract negotiation, transfer initiation, EDR acquisition) all carry EDC-specific semantics that take time to understand and get right.
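The consumer-side pattern reduces to a fixed sequence of calls against the connector’s management API. The sketch below shows the request bodies involved; the paths and JSON-LD shapes follow recent Eclipse EDC management API versions (v3) and change between releases, so verify them against the connector version you target.

```python
# Sketch of the EDC consumer-side flow: catalog request, contract
# negotiation, transfer initiation, EDR acquisition. Endpoint paths and
# JSON-LD shapes follow recent Eclipse EDC management API versions and
# differ between releases; treat as illustrative, not a pinned contract.
import json
import urllib.request

EDC_NS = "https://w3id.org/edc/connector/management/v0.0.1"
DSP = "dataspace-protocol-http"


def catalog_request(provider_dsp_url: str) -> dict:
    """Body for POST {management}/v3/catalog/request: discover offers."""
    return {"@context": {"@vocab": EDC_NS},
            "counterPartyAddress": provider_dsp_url,
            "protocol": DSP}


def negotiation_request(provider_id: str, provider_dsp_url: str,
                        offer: dict) -> dict:
    """Body for POST {management}/v3/contractnegotiations, echoing an
    offer taken from the catalog response."""
    return {"@context": {"@vocab": EDC_NS},
            "counterPartyAddress": provider_dsp_url,
            "protocol": DSP,
            "policy": {**offer, "assigner": provider_id}}


def transfer_request(agreement_id: str, provider_dsp_url: str) -> dict:
    """Body for POST {management}/v3/transferprocesses; a PULL transfer
    makes an EDR (endpoint data reference) available to the consumer."""
    return {"@context": {"@vocab": EDC_NS},
            "counterPartyAddress": provider_dsp_url,
            "contractId": agreement_id,
            "protocol": DSP,
            "transferType": "HttpData-PULL"}


def post(url: str, body: dict) -> dict:
    """Minimal JSON POST helper (stdlib only)."""
    req = urllib.request.Request(
        url, data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Full flow, elided: poll GET /v3/contractnegotiations/{id} until the
# negotiation reaches FINALIZED, take its contractAgreementId, start the
# transfer, then fetch the EDR to call the provider's API directly.
```

Each step is a plain REST call, but the asynchronous state machines behind negotiation and transfer are exactly the EDC-specific semantics that take time to get right.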
For the provisioner sub-type, the problem compounds: they need to automate connector provisioning per end customer, manage credential lifecycle across a growing participant base, and handle contract negotiation programmatically as part of their product’s workflow. Self-hosting this at any meaningful scale is not a product problem they should be solving.
The second problem, which surfaces later, is maintenance. Dataspaces evolve. EDC releases breaking changes. A Jupiter-to-Saturn transition requires integration updates, retesting, and in some cases recertification. For a company whose primary product is carbon management or tyre lifecycle tracking or supplier qualification, that maintenance cost is invisible until it lands, and when it does, it pulls a developer away from their product roadmap.
What is the most important customer benefit?
For the SI Builder, the most important benefit is time recovery. A deployment that previously consumed six to eight weeks of focused engineering effort can reach production in a day. That is not a marginal improvement in developer experience; it is a structural change in the economics of taking on dataspace projects. When the infrastructure work compresses from weeks to hours, the Builder spends the rest of the engagement doing what they are actually good at (integration, architecture, client-specific configuration) rather than becoming an accidental specialist in a stack they did not choose.
For a Builder at a systems integrator, the compounding effect matters as much as the per-project saving. Every subsequent client project that requires an EDC deployment benefits from the same foundation. The operator is portable. The knowledge transfers. The next client does not require starting from scratch.
The other dimension is trust without lock-in. Because the EDC operator is Apache 2.0 licensed and the full stack is auditable, the Builder is not trading a Java codebase they do not control for a managed platform they cannot inspect.
For the Application Builder, the most important benefit is the ability to ship dataspace connectivity as a product feature in one sprint and not think about it again. Sign up for a managed connector, read the API documentation, register an existing API as a data asset, implement the consumer pattern in application code, and ship. The connector mechanics, the identity infrastructure, the credential lifecycle, the trust anchor management, none of it is their problem.
For the provisioner sub-type, the programmatic connector provisioning API is what makes their product’s dataspace feature viable for more than one or two customers. Each new customer onboarded via the API is a new managed connector Kaphera operates, not a new infrastructure deployment their team must manage.
The downstream benefit that matters most operationally is version insulation. When EDC releases a breaking change, Kaphera absorbs it. The Application Builder’s integration keeps working. That promise, maintained quietly, never requiring a page, is worth more to them than any individual feature. They can run the operator on their own infrastructure if they need to. The managed platform earns its place by being demonstrably easier to operate, not by being impossible to leave.
How does the company know what the customer wants or needs?
Kaphera’s understanding of the Builder comes from direct experience, not survey data. Think-it, Kaphera’s founding integration partner, has operated EDC Connectors in production for clients across automotive and freight, including running the Krypton managed connector platform as the prior operator of the Mobility Data Space. The team building Kaphera is the team that encountered every configuration failure, every upstream breaking change, every gap between what the documentation describes and what production requires.
The “six-to-eight weeks to staging” figure is not a research finding; it is a description of what Think-it’s own engineers lived through before building the operator to make it unnecessary. The feedback that ~€100/connector/month was difficult to justify per client project came directly from early Builder conversations during GTM validation and drove the structural decision to price the shared tier at €20 for the control plane. Andrea, an EDC core committer within Think-it’s engineering team, provides ongoing visibility into where the upstream Eclipse EDC specification is heading, which grounds product decisions in protocol reality rather than lagging documentation.
What is the customer journey?
The Builder’s journey with Kaphera begins before they have heard of Kaphera. It begins with a project requirement landing on their desk and the familiar experience of EDC documentation that describes what components exist but not how to run them end-to-end. Discovery happens through the community channels the Builder already frequents: a project debrief on the EDC community Discord, a presentation at the IDSA Ecosystem Building Call, or a reference from a colleague who has shipped something similar. The signal that registers is not a product pitch; it is someone describing a real project and the infrastructure time it actually took. Once the name sticks, the Builder goes to GitHub to read the source and form their own view. The two-operator architecture is the first thing they examine: the EDC operator under Apache 2.0, managing the full connector lifecycle; the Kaphera Cloud operator, source-available, managing the platform infrastructure layer. They can read both in full before committing to anything.
If the architecture holds up under that scrutiny, adoption on the current project follows. At this stage they may be self-hosting, running the operator on infrastructure they manage, which is a fully supported path. The managed platform enters the picture when the operational overhead of self-hosting becomes the constraint: multiple client deployments to manage, certificate lifecycles to track, credential updates to handle as dataspace specifications evolve. The inflection point is different for every Builder, but for most it comes when the cost of the managed tier is visibly less than the coordination overhead of doing it themselves.
Onboarding to the managed platform is designed to be fast. Through the web console, the kaphera CLI, or the Terraform provider, the Builder provisions a connector, selects a profile (MDS or Tractus-X at GA), and the platform handles identity establishment, credential issuance, and connector provisioning automatically. For a Builder who manages connectors on behalf of multiple clients, the organisational model lets them operate distinct environments without context-switching overhead.
Expansion is natural. Each subsequent client project that requires an EDC deployment is a new participation on the same platform rather than a new infrastructure build. For a Builder at a systems integrator, this is the point at which Kaphera stops being a project tool and starts being a standard part of their delivery stack.
Sub-types compared
```mermaid
flowchart LR
    subgraph SI["System Integrator (Lars)"]
        SI1["deploys for clients"]
        SI2["self-host EDC + Cloud operators"]
        SI3["CLI + Terraform + ArgoCD"]
        SI4["multi-client portfolio"]
    end
    subgraph APP["Application Builder (Leila)"]
        A1["integrates into own product"]
        A2["managed connector subscription"]
        A3["control plane API + REST"]
        A4["one or many connectors per customer"]
    end
    EOP["kaphera-edc-operator (Apache 2.0)"] --> SI
    MS["managed-server (cloud backend)"] --> APP
```
Related
- lars-hoffmann, System Integrator sub-type; senior platform engineer at a Munich SI
- leila-brandt, Application Builder sub-type; backend engineer at a sustainability SaaS scale-up
- builder-playbook, sales playbook for the builder archetype
- from-workshop-to-working-dataspace, Lars’s journey
- dataspace-connectivity-as-a-product-feature, Leila’s journey
- kaphera-edc-operator, the Apache 2.0 operator the archetype reaches via open-source
- kaphera-cli, the developer tool that fits the archetype’s delivery toolchain