Leila · Catena-X as a product feature, not an integration
A backend engineer at a Berlin Scope-3 SaaS company has four months and no EDC experience to ship Catena-X PCF data exchange for the company’s largest customer. She uses Kaphera’s management API and sandbox to ship a working integration in three weeks, and later turns the same API into a connector provisioner that scales the product to her customer’s tier-2 supplier network.
Scenario
Leila is a backend engineer at a Berlin-based SaaS company that helps enterprises manage and report their Scope 3 carbon emissions. The product has forty-odd enterprise clients, a growing API integration layer, and a backlog Leila has never quite caught up with. Three weeks ago, their largest automotive client notified the account team that from Q1 next year, PCF (product carbon footprint) data exchange must happen through Catena-X. The client represents 20% of ARR. This is not a feature request.
Leila owns integrations. The task lands on her. She has four months, no EDC experience, and a sprint already committed to something else.
The scenario unfolds across two phases. In phase one, Leila ships the integration for her company as a single Catena-X participant, publishing their carbon calculation API as a data asset and implementing the consumer-side pattern for incoming supplier data. In phase two, that client asks whether Leila’s company can extend connectivity to the client’s own tier-2 suppliers as part of their service, turning the product into a connector provisioner for a participant network.
Storyline
Step 1: Reading the docs and recognising the problem
Action
Leila spends a day reading EDC documentation. She finds a connector architecture, a protocol specification, and a Java codebase. None of it maps onto the REST integration work she was expecting. She searches for “EDC API integration Python” and lands on a GitHub issue thread. Someone in the EDC community Discord links to a Kaphera developer guide that shows how to register a data asset and initiate a contract negotiation using the control plane management API, without operating a connector yourself. She reads it twice. She opens the Kaphera documentation.
Thoughts
“This looks like what I actually need. An API I can call, not infrastructure I have to run. If the sandbox works the way the docs suggest, I can have something running before end of week.”
Emotions
Relieved after a day of increasing unease. She was expecting an API integration and found a protocol engineering problem. The managed connector path restores the frame she started with.
Step 2: Signing up and hitting the sandbox
Action
Leila creates an organisation on Kaphera Cloud, selects the Tractus-X connector profile, and initiates onboarding. The platform handles identity establishment and credential issuance. Within the hour she has a connector endpoint and a control plane management API key. She opens her terminal and runs a curl request against the catalog endpoint. She gets a valid response. She spends the rest of the afternoon reading the API reference and sketching out the integration architecture: an asset registration endpoint for their carbon calculation API, a policy configuration, and a consumer module that handles catalog lookup, contract negotiation, and EDR acquisition for incoming supplier data requests.
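In Python, that first smoke test is a single request plus a payload. A minimal sketch, assuming Kaphera's management API mirrors the Tractus-X EDC catalog request shape; the base URL, API-key header name, and counterparty address are placeholders, not confirmed Kaphera values:

```python
# First smoke test against the catalog endpoint, the Python equivalent of
# Leila's curl request. All URLs and the header name are illustrative.
import requests

KAPHERA_MGMT = "https://example.kaphera.cloud/management"  # hypothetical base URL
HEADERS = {"X-Api-Key": "<api-key>"}  # key issued during onboarding

catalog_request = {
    "@context": {"@vocab": "https://w3id.org/edc/v0.0.1/ns/"},
    "@type": "CatalogRequest",
    "counterPartyAddress": "https://partner.example/api/v1/dsp",  # placeholder partner
    "protocol": "dataspace-protocol-http",
}

resp = requests.post(
    f"{KAPHERA_MGMT}/v3/catalog/request",
    json=catalog_request,
    headers=HEADERS,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # a dcat:Catalog document listing the partner's offers
```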
Thoughts
“The sandbox works. The API is clean. The catalog-to-EDR flow is four steps and they’re all documented. I can implement this.”
Emotions
Focused and confident. The first working API call is the inflection point: once she can see the response, she knows she can ship the integration.
Step 3: Registering the carbon calculation API as a data asset
Action
Leila registers her company’s carbon calculation API as an EDC data asset using the control plane management API. She configures an HttpData asset pointing to the existing API endpoint, sets a usage policy that restricts access to her client’s BPN (business partner number), and publishes the asset to the catalog. She verifies that a test contract negotiation from the client’s connector succeeds and that the data plane forwards the request correctly to her API. No changes to the carbon calculation API itself are needed; the connector acts as a transparent proxy.
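Registration reduces to three calls: asset, policy, contract definition. A sketch under the assumption that Kaphera follows the Tractus-X EDC management API shapes; every identifier, URL, and the client BPN below is illustrative:

```python
# Three registration calls: HttpData asset, BPN-restricted ODRL policy,
# contract definition publishing the asset under that policy.
import requests

KAPHERA_MGMT = "https://example.kaphera.cloud/management"  # hypothetical
HEADERS = {"X-Api-Key": "<api-key>"}
CTX = {"@vocab": "https://w3id.org/edc/v0.0.1/ns/"}
CLIENT_BPN = "BPNL000000000001"  # illustrative client business partner number

# 1. Register the existing carbon calculation API as an HttpData asset.
requests.post(f"{KAPHERA_MGMT}/v3/assets", headers=HEADERS, timeout=30, json={
    "@context": CTX,
    "@id": "carbon-calculation-api",
    "properties": {"description": "PCF calculation results per component"},
    "dataAddress": {
        "type": "HttpData",
        "baseUrl": "https://api.example-saas.com/v1/pcf",  # unchanged upstream API
        "proxyPath": "true",  # forward sub-paths through the data plane
    },
}).raise_for_status()

# 2. Define an ODRL policy that permits access only to the client's BPN.
requests.post(f"{KAPHERA_MGMT}/v3/policydefinitions", headers=HEADERS, timeout=30, json={
    "@context": {**CTX, "odrl": "http://www.w3.org/ns/odrl/2/"},
    "@id": "client-only-policy",
    "policy": {
        "@type": "odrl:Set",
        "odrl:permission": [{
            "odrl:action": {"odrl:type": "USE"},
            "odrl:constraint": {
                "odrl:leftOperand": "BusinessPartnerNumber",
                "odrl:operator": {"@id": "odrl:eq"},
                "odrl:rightOperand": CLIENT_BPN,
            },
        }],
    },
}).raise_for_status()

# 3. Publish the asset to the catalog under that policy.
requests.post(f"{KAPHERA_MGMT}/v3/contractdefinitions", headers=HEADERS, timeout=30, json={
    "@context": CTX,
    "@id": "client-contract-def",
    "accessPolicyId": "client-only-policy",
    "contractPolicyId": "client-only-policy",
    "assetsSelector": [{
        "operandLeft": "https://w3id.org/edc/v0.0.1/ns/id",
        "operator": "=",
        "operandRight": "carbon-calculation-api",
    }],
}).raise_for_status()
```

The dataAddress is the only place the existing API appears; the policy and contract definition control who may negotiate for it.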
Thoughts
“The policy configuration took a few attempts to get right; the ODRL syntax is not intuitive if you’ve never seen it. But once I had a working example from the documentation, it was straightforward to adapt.”
Emotions
Methodical. Policy configuration is the part of the integration that requires the most careful reading, and she gives it the time it needs rather than rushing through it.
Step 4: Implementing the consumer pattern in the product
Action
Leila implements the consumer-side integration in the product’s backend: a module that calls the management API to look up supplier connectors in the catalog, negotiates contracts on behalf of the platform, and acquires EDRs to pull supplier PCF data on demand. She adds caching for EDRs with proactive refresh before expiry to avoid re-negotiating on every request. She writes integration tests against a second connector she spins up in Kaphera’s sandbox environment. The test suite covers the full flow from catalog request to data transfer. She wires the module into the product’s existing supplier data ingestion pipeline; from the product’s perspective, the Catena-X supplier data arrives the same way any other supplier data does.
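Condensed, the consumer module might look like the sketch below. It assumes Tractus-X EDC management API paths and field names for the catalog, negotiation, transfer, and EDR calls, plus a fixed token lifetime; `select_pcf_offer` is a hypothetical helper that picks the right dataset and policy out of the catalog response, and error handling and timeouts are omitted:

```python
# The four-step consumer flow (catalog -> negotiate -> transfer -> EDR) plus
# the cache that avoids re-negotiating on every request. Simplified sketch.
import time
import requests

KAPHERA_MGMT = "https://example.kaphera.cloud/management"  # hypothetical
HEADERS = {"X-Api-Key": "<api-key>"}
CTX = {"@vocab": "https://w3id.org/edc/v0.0.1/ns/"}
REFRESH_MARGIN = 60  # refresh EDRs this many seconds before expiry

_edr_cache: dict[str, dict] = {}  # supplier BPN -> {"edr": ..., "expires_at": ...}

def _post(path: str, body: dict) -> dict:
    resp = requests.post(f"{KAPHERA_MGMT}{path}", json=body, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

def _get(path: str) -> dict:
    resp = requests.get(f"{KAPHERA_MGMT}{path}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

def _wait_for(path: str, target_state: str) -> dict:
    """Poll a negotiation or transfer until it reaches target_state (no timeout here)."""
    while True:
        state = _get(path)
        if state.get("state") == target_state:
            return state
        time.sleep(1)

def get_supplier_edr(supplier: dict) -> dict:
    """Return a valid EDR for the supplier, negotiating only when needed."""
    cached = _edr_cache.get(supplier["bpn"])
    if cached and cached["expires_at"] - time.time() > REFRESH_MARGIN:
        return cached["edr"]

    # 1. Catalog: fetch the supplier's offers and pick the PCF dataset.
    catalog = _post("/v3/catalog/request", {
        "@context": CTX,
        "@type": "CatalogRequest",
        "counterPartyAddress": supplier["dsp_address"],
        "protocol": "dataspace-protocol-http",
    })
    offer = select_pcf_offer(catalog)  # hypothetical helper, not shown

    # 2. Negotiate a contract for that offer, then poll until FINALIZED.
    negotiation = _post("/v3/contractnegotiations", {
        "@context": CTX,
        "counterPartyAddress": supplier["dsp_address"],
        "protocol": "dataspace-protocol-http",
        "policy": offer["policy"],
    })
    agreement = _wait_for(f"/v3/contractnegotiations/{negotiation['@id']}", "FINALIZED")

    # 3. Initiate a pull transfer under the finalised agreement.
    transfer = _post("/v3/transferprocesses", {
        "@context": CTX,
        "counterPartyAddress": supplier["dsp_address"],
        "protocol": "dataspace-protocol-http",
        "contractId": agreement["contractAgreementId"],
        "transferType": "HttpData-PULL",
    })
    _wait_for(f"/v3/transferprocesses/{transfer['@id']}", "STARTED")

    # 4. Acquire the EDR: data plane endpoint plus short-lived auth token.
    edr = _get(f"/v3/edrs/{transfer['@id']}/dataaddress")
    _edr_cache[supplier["bpn"]] = {
        "edr": edr,
        "expires_at": time.time() + 300,  # assumed token lifetime
    }
    return edr

def pull_pcf_data(supplier: dict, path: str) -> dict:
    """Pull PCF data through the data plane using the cached EDR."""
    edr = get_supplier_edr(supplier)
    resp = requests.get(
        f"{edr['endpoint']}/{path}",
        headers={"Authorization": edr["authorization"]},  # assumed EDR fields
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

The refresh margin is the design choice that matters: EDRs are short-lived, and refreshing them before expiry keeps contract negotiation latency out of the request path.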
Thoughts
“The consumer pattern is four steps but they’re linear. Catalog, negotiate, initiate transfer, acquire EDR. The only non-obvious part is EDR caching: without it, contract negotiation latency shows up on every request. Once I understood that, the implementation was clean.”
Emotions
In her element. This is the integration engineering work she knows how to do. The EDC semantics required learning, but the implementation pattern is familiar.
Step 5: Shipping and handing off internally
Action
Leila ships the integration in week three of the four-month window. She writes an internal runbook covering the asset registration process, policy configuration, and the consumer module’s behaviour. She sets up a monitoring alert on the Kaphera connector status endpoint so the on-call team can detect connectivity issues without understanding EDC. She documents the Catena-X credential renewal process, notes that Kaphera handles it automatically, and marks that as a non-issue for the on-call runbook. She closes the sprint with one week to spare.
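The monitoring hook itself is small. A sketch, assuming Kaphera exposes a status endpoint for the managed connector; the path and the response field are assumptions:

```python
# Health check behind the on-call alert: green when the managed connector is
# reachable and reports healthy. Path and "state" field are illustrative.
import requests

def connector_is_healthy(base_url: str, api_key: str) -> bool:
    resp = requests.get(
        f"{base_url}/status",  # hypothetical status endpoint
        headers={"X-Api-Key": api_key},
        timeout=10,
    )
    return resp.ok and resp.json().get("state") == "HEALTHY"  # assumed field
```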
Thoughts
“The runbook needs to explain what the consumer module does without requiring the on-call engineer to understand contract negotiation. If I do this right, they can respond to an alert without calling me.”
Emotions
Satisfied. Shipping early is unusual. The integration is in production, the client is happy, and she has handed off cleanly enough that she is not the only person who can operate it.
Step 6: The version change arrives
Action
Six months later, Kaphera sends a notification that the Catena-X Saturn release will introduce a change to the contract negotiation callback format. The notification includes the specific API field that changes, the expected date, and a migration guide. Leila reads it in ten minutes, updates one field in her consumer module, deploys the change to staging, and confirms it works against Kaphera’s Saturn sandbox environment. The change goes to production the day before the Saturn go-live. The client’s data exchange continues without interruption.
Thoughts
“This is exactly what I needed when I shipped this: advance notice with enough detail to act. If I’d found out about this the day it broke, I’d have been debugging in production.”
Emotions
Relieved and grateful for the process. This is the moment that validates the managed platform decision over self-hosting.
Step 7: The provisioner request
Action
The automotive client’s procurement team asks whether Leila’s company can extend Catena-X connectivity to their tier-2 suppliers as part of the carbon management service. Each supplier would need their own connector and their own data asset registration. The client has 60 tier-2 suppliers in scope. Leila checks Kaphera’s provisioning API documentation. Provisioning a connector for a new participant organisation is a single POST request to the management API. She writes a provisioning workflow in the product that creates a Kaphera organisation for each supplier, provisions their connector, registers their supplier data submission API as a data asset, and configures the access policy so only the client’s connector can consume it. The first five suppliers are onboarded in an afternoon. She presents the workflow to the account team as a new product capability.
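The workflow reduces to a loop. A sketch of the per-supplier pass, with endpoint paths, payload fields, and the supplier record shape all assumptions about Kaphera's provisioning API; the last two steps reuse the phase-one asset and policy calls, folded into hypothetical helpers:

```python
# One onboarding pass per tier-2 supplier: organisation, connector, asset,
# access policy. All endpoints and fields are illustrative.
import requests

KAPHERA_MGMT = "https://example.kaphera.cloud/management"  # hypothetical
HEADERS = {"X-Api-Key": "<api-key>"}
CLIENT_BPN = "BPNL000000000001"  # the automotive client's connector identity

def onboard_supplier(supplier: dict) -> None:
    # 1. Create a Kaphera organisation for the supplier.
    org = requests.post(f"{KAPHERA_MGMT}/organisations", headers=HEADERS, timeout=30,
                        json={"name": supplier["name"], "bpn": supplier["bpn"]})
    org.raise_for_status()

    # 2. Provision their managed connector: the single POST from the docs.
    conn = requests.post(
        f"{KAPHERA_MGMT}/organisations/{org.json()['id']}/connectors",
        headers=HEADERS,
        json={"profile": "tractus-x"},
        timeout=30,
    )
    conn.raise_for_status()

    # 3. Register the supplier's data submission API as an EDC asset, and
    # 4. restrict consumption to the client's BPN: the same asset, policy,
    #    and contract-definition calls as phase one (hypothetical helpers).
    register_submission_asset(conn.json(), supplier)
    restrict_access_to(conn.json(), CLIENT_BPN)

tier2_suppliers: list[dict] = []  # loaded from the client's in-scope supplier list
for supplier in tier2_suppliers:  # the 60 suppliers in scope
    onboard_supplier(supplier)
```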
Thoughts
“60 connectors. If I’d had to self-host each one, this would have been a three-month infrastructure project. As a Kaphera API call in a provisioning loop, it’s an afternoon.”
Emotions
Genuinely excited. This is the moment the integration becomes a product differentiator rather than a compliance checkbox. The provisioner capability changes what the product can offer.
Key features
📖 Self-serve API documentation with sandbox access: a working curl request against the catalog endpoint within an hour of sign-up, before writing a line of integration code
🔗 Managed connector with no infrastructure to operate: identity establishment, credential issuance, and connector provisioning handled by the platform; the API key is the entire operational surface
📦 Asset registration via management API: register an existing API as an EDC data asset with an HttpData configuration; no changes to the upstream API
📜 Policy configuration with documented ODRL examples: access restriction by BPN, usage constraints, and purpose limitations configurable through the API with working examples in the documentation
🔄 Consumer pattern implementation support: catalog lookup, contract negotiation, transfer initiation, and EDR acquisition documented as a linear four-step flow with a reference implementation
⏰ Advance version change notifications: structured notification with specific field changes, migration guide, and sandbox environment for validation before the production go-live date
🏭 Programmatic connector provisioning: a single API call provisions a connector for a new participant organisation; provisioning 60 suppliers is a loop, not an infrastructure project
Related
- leila-brandt: the protagonist: backend engineer at a Berlin Scope-3 carbon SaaS, application builder rather than infrastructure operator
- builder: the archetype: companies embedding sovereign data exchange into their own product as a feature, not running connectors as a service
- kaphera-cloud-server: the control plane management API she uses to register data assets, configure policies, and provision connectors for tier-2 suppliers
- kaphera-cloud-managed-server: the managed connector profile (Tractus-X) that ran her Phase-1 integration without her operating infrastructure
- builder-playbook: sales motion for the builder archetype