# Lars · Six-participant pilot from one Terraform workspace

The lead platform engineer at a systems integrator inherits a six-participant industrial dataspace pilot covering an OEM with a strict on-premise policy, a tier-1 with a dedicated managed instance, three SMEs needing handholding, a logistics provider wanting full IaC, and a governance authority that needs help configuring policies. He builds it as one Terraform workspace with a fifteen-minute `make demo` target, hands off cleanly, and demonstrates everything to the steering group on schedule.

## Scenario

Lars is the lead platform engineer at his systems integrator, assigned to a nascent industrial dataspace pilot that has been in workshop mode for four months. The governance authority, a mid-size industry consortium, and six pilot participants have agreed on three use cases: carbon footprint data exchange across the supply chain, quality recall traceability at component level, and Digital Product Passport attestation for tier-1 to OEM handoff. The workshops are done. The use cases are documented. Now Lars has to build the infrastructure that makes them run.

The participant landscape is complicated. One participant is a large OEM with a strict on-premise policy: their legal team will not approve connector infrastructure running on a third-party cloud. A tier-1 supplier wants a dedicated managed instance: they have a small IT team, multiple participations in scope, and a security review that requires resource isolation. Three tier-2 and tier-3 SME suppliers need complete handholding; they have no infrastructure capability and will not touch a CLI. The sixth participant is a logistics provider who is technically capable but wants everything defined as code so their own DevOps team can take it over after the pilot.

Lars is also the de facto technical advisor to the governance authority, which needs help configuring its dataspace profile, defining the access policies for each use case, and customising the contract negotiation flow to enforce the data quality and usage constraints that came out of the workshop process.

He needs to be able to demonstrate the whole thing working, all participants, all use cases, to the pilot steering group at four weeks, and again at eight weeks with production-grade infrastructure in place. He has done an EDC project before. He knows exactly what is coming.


---

## Storyline

### Step 1: Recognition: this again

**Action**

Three days after the project lands on his desk, Lars is in the EDC community Discord. Someone is running through a debrief of a recent Catena-X implementation: not a product pitch, a project retrospective. Midway through, they mention the Kaphera operator as the reason the infrastructure setup took two days instead of six weeks. Lars notes the name. That evening he is on GitHub reading the source: the EDC operator (Apache 2.0) manages the full connector lifecycle — wait, no em-dashes — the EDC operator (Apache 2.0) manages the full connector lifecycle: control plane, identity wallet, credential issuer, data plane. The Kaphera Cloud operator, source-available, manages the platform infrastructure layer: PostgreSQL, Vault, NATS, Keycloak. Two operators, clean separation of concerns, the full stack abstracted behind a consistent CRD interface. He pulls the repository and tries the EDC operator in a staging cluster the following morning. It works the way the code suggests it should.

**Thoughts**

"I have done this before. I know how long it takes and what breaks. If this operator actually does what it looks like it does, I am not rebuilding that stack from scratch for a third time. And the source-available licence on the Cloud operator means I can read everything before I commit to anything."

**Emotions**

Cautiously optimistic, and a little sceptical in the way that experienced engineers are sceptical of anything that claims to solve a genuinely hard problem. The source code is what settles it. He trusts what he can read.


---

### Step 2: Mapping the infrastructure landscape

**Action**

Lars opens the workshop output documentation and maps each participant to an infrastructure requirement. The OEM: on-premise deployment of both operators on their private Kubernetes cluster in Frankfurt, with Lars's SI acting as the service provider throughout the pilot. The tier-1: dedicated managed instance on Kaphera Cloud. The three SMEs: managed shared, onboarded through the console with his help. The logistics provider: everything defined in Terraform, handed off as a module the provider's DevOps team can run independently.

```mermaid
flowchart LR
  TF["Single Terraform workspace<br/>(make demo)"]
  GA["Governance Authority<br/>(profiles + policies)"]
  OEM["OEM<br/>on-premise"]
  T1["Tier-1<br/>dedicated managed"]
  SME["3 SMEs<br/>managed shared"]
  LOG["Logistics provider<br/>IaC handoff"]
  TF --> GA
  TF --> OEM
  TF --> T1
  TF --> SME
  TF --> LOG
  GA -.profiles.-> OEM
  GA -.profiles.-> T1
  GA -.profiles.-> SME
  GA -.profiles.-> LOG
```

He maps the governance authority's requirements separately: dataspace profile registration, three use-case-specific policy sets, and a negotiation flow that enforces a data quality attestation step before any contract is finalised. He writes this as a dependency graph before touching any tooling.

**Thoughts**

"Six participants, four infrastructure models, three use cases, one governance authority who needs hand-holding on policy configuration. I need a single IaC foundation that produces all four participant configurations from the same base, otherwise I am maintaining six divergent setups indefinitely."

**Emotions**

Focused and methodical. The complexity is real but legible. What concerns him most is not the infrastructure variety but the governance authority's policy requirements, which are the least specified and the most likely to change as the steering group sees the first demo.

---

### Step 3: Establishing the IaC foundation

**Action**

Lars initialises a Terraform workspace for the pilot using the Kaphera Terraform modules as the base. He structures the workspace into four layers: the dataspace profile and governance authority configuration, the OEM on-premise deployment, the dedicated managed instance for the tier-1, and the shared managed participant configurations for the SMEs. Each layer references shared variables (participant identifiers, dataspace profile name, trust anchor references) so that changes propagate consistently across all participant deployments. The Kaphera Terraform modules provide a consistent interface across all four deployment types, whether they land on managed cloud infrastructure or the OEM's private cluster; the difference is configuration, not module structure. He writes a `make demo` target that provisions a full end-to-end demonstrator environment, seeded with test data and pre-configured with all three use cases, in under fifteen minutes. He settles on a deliberate tool split for the project: Terraform for anything that needs to be owned by someone else after he leaves; the `kaphera` CLI for day-to-day operations and automation; the web console for every conversation where a non-technical stakeholder needs to see and understand what has been configured.
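A sketch of how such a layered workspace might be structured. The module paths, variable names, and the shape of the Kaphera module interface shown here are assumptions for illustration, not the actual module API:

```hcl
# Shared values every layer references, so a change propagates once.
locals {
  dataspace_profile = "industrial-pilot"            # illustrative profile name
  trust_anchor      = "did:web:authority.example"   # illustrative trust anchor
  sme_participants  = ["sme-a", "sme-b", "sme-c"]
}

# Layer 1: dataspace profile and governance authority configuration.
module "governance" {
  source            = "./modules/governance"   # hypothetical local wrapper
  dataspace_profile = local.dataspace_profile
  trust_anchor      = local.trust_anchor
}

# Layer 2: OEM on-premise deployment, both operators on their own cluster.
module "oem_onprem" {
  source            = "./modules/participant"
  deployment_model  = "on-premise"
  kubeconfig_path   = var.oem_kubeconfig        # VPN-scoped service account
  dataspace_profile = local.dataspace_profile
}

# Layer 3: dedicated managed instance for the tier-1 supplier.
module "tier1" {
  source            = "./modules/participant"
  deployment_model  = "dedicated"
  dataspace_profile = local.dataspace_profile
}

# Layer 4: the three SMEs on the shared managed tier, one block for all three.
module "sme" {
  source            = "./modules/participant"
  for_each          = toset(local.sme_participants)
  deployment_model  = "shared"
  participant_id    = each.key
  dataspace_profile = local.dataspace_profile
}
```

With this shape, the `make demo` target can be little more than `terraform apply -var-file=demo.tfvars` followed by a data-seeding step.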

**Thoughts**

"The `make demo` target is the most important thing I build in week one. Every steering group meeting goes better if I can spin up a clean demonstrator in the time it takes them to get settled. And the module structure needs to be clean enough that the logistics provider's DevOps lead can read it without calling me."

**Emotions**

In his element. Structuring a complex multi-participant infrastructure as a coherent Terraform workspace is exactly the kind of problem Lars enjoys.

---

### Step 4: Configuring the governance authority: dataspace profile and identity model

**Action**

Lars sits with the consortium's technical lead for half a day. Together they register the dataspace profile on Kaphera Cloud and define the identity model: the credential types participants must hold, the trust anchors the profile recognises, and the onboarding validation steps each participant must pass before their connector is active. Lars runs the configuration through the `kaphera` CLI, then walks through the result in the web console with the consortium's technical lead, who is not comfortable reading YAML. He commits the final configuration to the Terraform workspace so it is version-controlled and reproducible.
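The committed configuration might look along these lines, assuming a hypothetical `kaphera_dataspace_profile` resource; the attribute names are illustrative, not the real provider schema:

```hcl
resource "kaphera_dataspace_profile" "pilot" {   # hypothetical resource type
  name = "industrial-pilot"

  # Credential types each participant must hold before their connector
  # is activated in the dataspace.
  required_credential_types = [
    "MembershipCredential",
    "DataExchangeGovernanceCredential",
  ]

  # Trust anchors the profile recognises when verifying those credentials.
  trust_anchors = ["did:web:authority.example"]   # illustrative DID

  # Onboarding validation steps run before a connector goes active.
  onboarding_validation = ["credential-check", "connector-health-probe"]
}
```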

**Thoughts**

"Using the CLI for the actual configuration and the console for the review conversation is the right split. She needs to understand what she is approving, not how to write it."

**Emotions**

Patient and deliberate. This step is as much about building the consortium's confidence as it is about the technical configuration. A governance authority that does not understand what it has approved will create problems the moment policy questions come up in front of the steering group.

---

### Step 5: Defining use-case policies and negotiation flow customisation

**Action**

Lars translates the three use cases from the workshop documentation into ODRL policy sets on the platform. The carbon footprint use case requires a data quality attestation constraint: data can only be offered under contract if the offering participant has passed a third-party quality check in the last twelve months. The quality recall use case requires a purpose-limitation constraint preventing data use outside recall investigations. The Digital Product Passport use case requires a chain-of-custody constraint that records every access in the audit log. He configures a custom negotiation step on the carbon footprint use case that the governance authority can trigger manually during the pilot period. He tests each policy against a pair of test connectors before touching participant deployments.
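The carbon footprint constraint might be expressed along these lines. The `kaphera_policy` resource and the `QualityAttestationAge` left operand are illustrative names, though the payload follows the standard ODRL JSON-LD shape:

```hcl
resource "kaphera_policy" "carbon_footprint_offer" {   # hypothetical resource
  dataspace_profile = "industrial-pilot"
  name              = "carbon-footprint-offer"

  # ODRL Set policy: data may only be offered under contract if the
  # provider's third-party quality check is at most twelve months old.
  odrl = jsonencode({
    "@context" = "http://www.w3.org/ns/odrl.jsonld"
    "@type"    = "Set"
    permission = [{
      action = "use"
      constraint = [{
        # Custom left operand; real dataspaces register their own vocabulary.
        leftOperand  = "QualityAttestationAge"
        operator     = "lteq"
        rightOperand = "P12M"   # ISO 8601 duration: twelve months
      }]
    }]
  })
}
```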

**Thoughts**

"The attestation step in the negotiation flow will get the most scrutiny in the demo. I need to have run this enough times that I can narrate what is happening while it is happening."

**Emotions**

Careful and slightly tense. Policy configuration is the least forgiving part of this work: a misconfigured constraint fails silently until a test contract negotiation exposes it.

---

### Step 6: On-premise deployment for the OEM

**Action**

Lars arranges access to the OEM's private Kubernetes cluster in Frankfurt through a VPN connection and a minimum-permission service account. As the SI acting as service provider for the OEM throughout the pilot, he deploys both the EDC operator and the Kaphera Cloud operator from the Terraform workspace's OEM layer. The Kaphera Cloud operator is pointed at the OEM's existing Vault instance and internal PostgreSQL cluster. The EDC operator provisions the connector with the pilot dataspace profile and the OEM's participant credentials issued by the governance authority. Lars verifies contract negotiation against a test counterpart and documents the deployment configuration in the Terraform workspace; the OEM's infrastructure team can see the full state, but Lars's SI retains operational responsibility for the pilot duration.
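In the OEM layer, pointing the Kaphera Cloud operator at existing infrastructure rather than provisioning new comes down to a few overrides. A sketch, with assumed attribute names and illustrative hostnames:

```hcl
module "oem_onprem" {
  source           = "./modules/participant"   # hypothetical wrapper module
  deployment_model = "on-premise"

  # Reuse the OEM's existing Vault and PostgreSQL instead of letting the
  # Kaphera Cloud operator provision its own.
  vault_address  = "https://vault.oem.internal:8200"
  vault_kv_mount = "kv-dataspace/"   # the OEM mounts KV at a non-default path,
                                     # so the module default must be overridden

  postgres_host     = "pg.oem.internal"
  dataspace_profile = "industrial-pilot"
}
```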

**Thoughts**

"Their Vault instance has a different secret engine path from the module default. Fifteen minutes to find it, five minutes to fix it. On-premise deployments always have a tail of this; you never know what the local infrastructure looks like until you are inside it."

**Emotions**

Relieved when it works. The Terraform module abstracted most of the difference and kept the surface area for surprises smaller than he expected.

---

### Step 7: Dedicated managed instance for the tier-1 supplier

**Action**

Lars provisions a dedicated managed instance for the tier-1 supplier through Kaphera Cloud, then recreates the same configuration using the Terraform provider to ensure it is captured in the workspace. He sets up the tier-1's organisational context with two participants (two separate legal entities requiring separate connector identities) and configures dataspace profile onboarding for both. He adds the tier-1's RBAC requirements to the managed instance configuration so their security review can proceed without changes to the deployment. He walks the tier-1's IT lead through the multi-organisation view in the console: they can monitor both connector identities' status, active contracts, and audit logs in a single interface without switching contexts or calling Lars.

**Thoughts**

"Two legal entities, two connector identities, one dedicated instance. The dedicated tier makes the security conversation straightforward. And because they can see both connectors in one view from day one, I am not their monitoring system."

**Emotions**

Efficient and systematic. The dual-entity structure is the main complexity, and the platform's organisational model handles it without requiring a workaround.

---

### Step 8: Managed shared onboarding for the three SME participants

**Action**

Lars onboards the three SME participants through the Kaphera Cloud console. For each, he creates the organisation, establishes the digital identity, and initiates dataspace profile onboarding on their behalf. He configures role-scoped access so each organisation's designated contact can see their own connector's status, active contracts, and audit log in plain language without being able to touch configuration. He prepares a one-page guide for each contact explaining what the console shows and what to do if a contract negotiation fails.

**Thoughts**

"If the default view shows Kubernetes resource states or YAML, they will call me every day. If it shows contract status and connection health in plain language, they will be self-sufficient."

**Emotions**

Pragmatic and a little protective. The SME participants' confidence in the pilot depends almost entirely on their first experience of the console.

---

### Step 9: IaC handoff to the logistics provider

**Action**

Lars shares the Terraform workspace with the logistics provider's DevOps lead and walks them through the module structure in a two-hour session. The DevOps lead runs the `make demo` target themselves with Lars watching and asks two questions, both of which Lars addresses with inline documentation comments. He adds a `participant_module` README covering the three most common configuration changes the logistics provider is likely to need during the pilot. The logistics provider migrates the state to their own remote backend before the session ends.
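The state handover itself is standard Terraform: the provider re-points the backend at storage they control. Bucket, key, and region here are illustrative:

```hcl
terraform {
  backend "s3" {
    bucket = "logistics-pilot-tfstate"      # bucket owned by the provider
    key    = "dataspace/terraform.tfstate"
    region = "eu-central-1"
  }
}
```

After committing the new backend block, running `terraform init -migrate-state` copies the existing state into the provider's backend, after which Lars's SI no longer holds it.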

**Thoughts**

"If it takes longer than a half-day to hand off a Terraform module to a competent DevOps engineer, the module is not clean enough."

**Emotions**

Satisfied. The logistics provider is the participant who will tell other engineers about this tool.

---

### Step 10: API integration into the application layer

**Action**

The tier-1 supplier's IT lead raises a request Lars had anticipated: their internal procurement portal should surface contract negotiation status directly, procurement staff should not need to log into Kaphera Cloud to check whether a data contract has been agreed for an active order. Lars points them at the control plane management API. The tier-1's developer has a working integration the same afternoon: contract status, negotiation triggers, and audit log entries surfaced inside the portal their team already uses, with no awareness of the EDC stack underneath. Lars adds the API authentication configuration to the Terraform workspace. The connector mechanics are invisible. What is visible is a procurement workflow.

**Thoughts**

"This is what the platform is for. The moment the connector mechanics disappear from the user experience and what is left is the business workflow, that is when it has actually worked."

**Emotions**

Genuinely satisfied. This is the proof that the infrastructure is not the product; the product is what the infrastructure makes possible.

---

### Step 11: Four-week demonstrator: steering group review

**Action**

Lars runs `make demo` twenty minutes before the steering group meeting. The environment is up in twelve minutes. He walks the group through all three use cases using the web console for the narrative and the `kaphera` CLI for the parts that benefit from showing the underlying mechanics to the more technical members. The governance authority's technical lead presents the policy configuration herself, navigating the console independently. One steering group member asks whether the negotiation flow can require two-party approval before finalisation. Lars notes it as a follow-up.

**Thoughts**

"The two-party approval question will come up again at the eight-week review. I need to know whether the platform supports it before then, not after."

**Emotions**

Confident during the demo, already thinking ahead. The demonstrator worked. The governance authority presented their own configuration. The use cases were legible to non-technical steering group members.

---

### Step 12: Multi-organisation monitoring and proactive operations

**Action**

Lars runs a single `kaphera` CLI status command to get a health overview across all six participant connectors simultaneously, managed cloud and on-premise alike. He identifies that one SME participant has a credential configuration issue that prevented their connector from completing dataspace profile validation; they had not initiated any exchanges since onboarding and had not noticed. He resolves it through the console without contacting the participant. He also reviews the audit log for the OEM's on-premise connector remotely and confirms it is producing the chain-of-custody records required by the Digital Product Passport use case. He adds a monitoring check to the Terraform workspace that surfaces credential validation status across all participants in a shared dashboard.
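The committed check might be sketched like this, assuming a hypothetical `kaphera_monitor` resource and check identifiers:

```hcl
locals {
  participant_ids = ["oem", "tier1", "sme-a", "sme-b", "sme-c", "logistics"]
}

resource "kaphera_monitor" "credential_validation" {   # hypothetical resource
  for_each = toset(local.participant_ids)

  check     = "credential-validation"   # assumed check identifier
  target    = each.key                  # one check per participant connector
  dashboard = "pilot-shared"            # surfaces status in a shared dashboard
}
```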

**Thoughts**

"The credential issue would have been invisible without the multi-org status command. Finding it proactively and fixing it silently is the kind of operational overhead I want to absorb without making it into a support event."

**Emotions**

Operationally comfortable. A single command gives Lars visibility across six connectors running on four different infrastructure models. The pilot feels manageable.

---

## Key features

🔍 **Community-sourced discovery**: a project debrief in a trusted channel, not a product pitch; GitHub source validation before any commitment

🏗 **Terraform modules with consistent interface across all deployment types**: on-premise, dedicated managed, and shared managed all expressed in the same module structure, differing only in configuration

⚡ **Rapid demonstrator provisioning (`make demo`)**: IaC-driven full pilot environment in under fifteen minutes, seeded with use-case data and pre-configured participant identities

🖥 **Console as governance interface**: visual dataspace profile configuration, policy review, and onboarding status legible to non-technical contacts without engineering assistance

⌨️ **CLI for operations and automation**: single command to view credential health, connector status, and contract activity across all participant organisations simultaneously, regardless of deployment type

🔌 **Control plane management API**: integrate dataspace use cases directly into existing application workflows; connector mechanics invisible to end users from day one

🗂 **Multi-organisation view**: all participant organisations visible in a single console interface, with role-scoped access ensuring each participant sees only their own data

🔐 **Role-scoped console access per organisation**: participants see their own connector status, active contracts, and audit logs in plain language; configuration surfaces hidden unless the user holds the appropriate role

📋 **Use-case policy authoring and negotiation flow customisation**: ODRL policy sets per use case with configurable negotiation steps, attestation gates, and purpose-limitation constraints

🏠 **On-premise operator deployment**: both EDC operator (Apache 2.0) and Kaphera Cloud operator (source-available) deployable to private Kubernetes clusters via Terraform, with full visibility into the source for both

📦 **Self-contained Terraform module handoff**: sufficient for a competent DevOps engineer to take over independently after a two-hour session

📊 **Proactive credential validation monitoring across participants**: platform-level visibility into failures before they surface as participant support requests

---

## Related

- **[[lars-hoffmann]]**: the protagonist: lead platform engineer at a systems integrator, the SI Builder sub-archetype
- **[[builder]]**: the archetype: systems integrators and application builders standing up dataspaces and integrations on behalf of their customers
- **[[kaphera-cloud-terraform-modules]]**: the IaC foundation for his single-workspace, four-deployment-model pilot
- **[[kaphera-cli]]**: the CLI he uses for day-to-day operations and the multi-organisation status command
- **[[builder-playbook]]**: sales motion for the builder archetype