Leila Brandt
What she needs: A managed connector with a well-documented API and advance notice of breaking changes.
What she will not do: Operate Kubernetes infrastructure or track upstream EDC releases.
Why she buys: A tier-1 automotive customer (20% of ARR) made Catena-X a prerequisite.
Backend engineer at a sustainability SaaS scale-up, owning the integrations layer for 40-something enterprise clients, pulled into EDC because a tier-1 automotive customer made Catena-X a prerequisite. She wants a managed connector with a well-documented API, advance notice of breaking changes, and, if the product expands, a programmatic provisioning call rather than an infrastructure project.
Role: Backend Engineer, sustainability SaaS scale-up (40–80 employees)
Background
Leila studied computer science at TU Munich and spent two years at a logistics company building internal APIs before joining her current employer: a Berlin-based SaaS company that helps enterprises manage and report their Scope 3 carbon emissions. The product has been growing steadily: 40-something enterprise clients, a Series A closed eighteen months ago, and a roadmap that keeps expanding. She is one of four backend engineers, and she owns the integrations layer: the connectors to ERP systems, sustainability data providers, and, increasingly, dataspaces.
She came to EDC because a tier-1 automotive client told her company that from next year, PCF data exchange would need to happen through Catena-X. That client is 20% of ARR. It was not a feature request. It was a prerequisite.
Responsibilities
Leila designs and maintains the integration layer of the product: inbound and outbound data flows between the company’s platform and its clients’ systems, third-party data providers, and now, dataspaces. She reads OpenAPI documentation for breakfast, has opinions about REST contract design, and can write a Kafka consumer or a webhook handler without reaching for a tutorial. Kubernetes is something she encounters at the edges of her work: the platform runs on it, but she is not the one operating it.
Challenges
The EDC documentation described an infrastructure and protocol problem when Leila was expecting an API integration task. Getting her company’s carbon calculation API registered as a data asset behind a connector, configuring access policies so that only their automotive client can reach it, and understanding the consumer-side flow (catalog request, contract negotiation, transfer initiation, EDR acquisition) were all EDC-specific tasks that took her significantly longer than any comparable API integration she had shipped before. She is now the person in her company who understands how this works, which was not the plan.
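The consumer-side flow she had to internalize can be sketched as a four-step sequence. This is a minimal illustration, not an actual EDC or Kaphera API: the client below is a stub, and the method names, payloads, and URLs are invented; a real implementation would make HTTP calls to a connector's management API and poll negotiation and transfer state until completion.

```python
# Hypothetical sketch of the four-step consumer pattern: catalog request,
# contract negotiation, transfer initiation, EDR acquisition.
# The stub stands in for HTTP calls to a connector management API.

class StubConnectorClient:
    """Placeholder for a real management-API client (all values invented)."""

    def request_catalog(self, provider_url: str) -> list[dict]:
        # Real flow: ask the provider's connector which assets it offers.
        return [{"assetId": "pcf-data", "offerId": "offer-1"}]

    def negotiate_contract(self, offer_id: str) -> str:
        # Real flow: start a negotiation, then poll until it is finalized.
        return "agreement-123"

    def initiate_transfer(self, agreement_id: str) -> str:
        # Real flow: start a transfer process under the agreement.
        return "transfer-456"

    def acquire_edr(self, transfer_id: str) -> dict:
        # Real flow: fetch the Endpoint Data Reference (data URL + token).
        return {"endpoint": "https://provider/api/data", "authorization": "tok"}

def consume(client: StubConnectorClient, provider_url: str) -> dict:
    """Catalog -> negotiate -> transfer -> EDR, in order."""
    offers = client.request_catalog(provider_url)
    agreement = client.negotiate_contract(offers[0]["offerId"])
    transfer = client.initiate_transfer(agreement)
    return client.acquire_edr(transfer)

edr = consume(StubConnectorClient(), "https://provider/dsp")
```

The point of the sketch is the shape of the flow, not the calls themselves: each step depends on an identifier returned by the previous one, which is why this pattern cannot be learned from a single endpoint's documentation.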
The deeper concern is maintenance. EDC releases breaking changes on a cadence she cannot predict. The Jupiter to Saturn transition required her to re-examine her consumer pattern implementation and rerun their certification tests. That was three days she had not planned for and a sprint she had to renegotiate. Every future EDC release carries the same risk.
The provisioner dimension adds a longer-term concern: as the product grows, clients are starting to ask whether Leila’s company can provision Catena-X connectivity for their own tier-2 suppliers as part of the service. That would mean a connector per supplier per client, a topology she cannot manage with a self-hosted approach; the infrastructure overhead would consume her team.
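What "provisioning as an API call" could look like, as a hedged sketch: the function, endpoint, and field names below are invented stand-ins for a managed-service provisioning API, but the fan-out makes the topology concern concrete: one connector per supplier per client multiplies quickly.

```python
# Hypothetical provisioning sketch. provision_connector stands in for a
# single managed-service API call; all names and URLs are invented.

def provision_connector(client_id: str, supplier_id: str) -> dict:
    """Placeholder for something like POST /connectors on a managed API."""
    connector_id = f"{client_id}-{supplier_id}"
    return {
        "connectorId": connector_id,
        "managementUrl": f"https://api.example/connectors/{connector_id}",
        "status": "PROVISIONING",
    }

# Two clients, three tier-2 suppliers total: already three connectors to run.
suppliers = {"client-a": ["tier2-x", "tier2-y"], "client-b": ["tier2-z"]}
connectors = [
    provision_connector(client, supplier)
    for client, supplier_list in suppliers.items()
    for supplier in supplier_list
]
```

Self-hosted, each entry in `connectors` would be a deployment her team operates; behind a managed API, each is one request and a webhook when it reaches a ready state.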
Goals
Leila wants to ship the Catena-X integration, maintain it with minimal ongoing attention, and move on to the parts of the product that differentiate her company. She wants an API she can call to provision a connector, register a data asset, configure a policy, and initiate or respond to a contract negotiation, without managing what is underneath it. She wants to know in advance when something in the dataspace is changing that will affect her integration, not discover it when a test fails. If the product expands into connector provisioning for clients’ suppliers, she needs that to be a programmatic API call rather than an infrastructure project.
Technology use
Leila works primarily in Python and TypeScript. Her stack is REST APIs, async queues, and PostgreSQL. She uses Docker for local development and is comfortable reading Kubernetes manifests when she needs to, but cluster operations are not her domain. She evaluates integration options by reading API documentation and running against a sandbox environment; the faster she can get to a working request-response, the more likely she is to commit. She follows OpenAPI tooling communities and occasionally reads EDC GitHub issues when something breaks, which is the extent of her involvement in the EDC community.
Needs from Kaphera Cloud
Leila needs a managed connector she can subscribe to without running infrastructure. She needs the control plane management API to be well-documented and immediately testable: a sandbox she can hit with curl before writing a line of integration code. She needs to register her company’s existing carbon calculation API as a data asset with a policy that allows only her client’s connector to access it. She needs to implement the consumer-side pattern (catalog lookup, contract negotiation, EDR acquisition) in her application code using documented API calls, not bespoke protocol knowledge. She needs advance notice, ideally structured and webhook-driven, when a CX version change will affect her integration. And if the product expands to provisioning connectors for clients’ suppliers, she needs that to be a single programmatic API call that Kaphera handles end-to-end.
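The provider-side setup she describes (one asset, one partner-restricted policy) can be sketched as two payloads. The shapes are loosely modeled on how EDC-style connectors express assets and usage policies, but the field names, BPN, and URLs here are illustrative placeholders, not a documented Kaphera schema.

```python
# Hedged sketch of the provider-side setup: register an existing API as a
# data asset, then attach a policy limiting use to one business partner.
# All identifiers and URLs are invented for illustration.

def build_asset(asset_id: str, base_url: str) -> dict:
    """An asset payload pointing the connector at an existing HTTP API."""
    return {
        "@id": asset_id,
        "dataAddress": {"type": "HttpData", "baseUrl": base_url},
    }

def build_policy(policy_id: str, partner_bpn: str) -> dict:
    """A usage policy: only the connector presenting this Business
    Partner Number may use the asset."""
    return {
        "@id": policy_id,
        "policy": {
            "permission": [{
                "action": "use",
                "constraint": [{
                    "leftOperand": "BusinessPartnerNumber",
                    "operator": "eq",
                    "rightOperand": partner_bpn,
                }],
            }],
        },
    }

asset = build_asset("carbon-calc-api", "https://api.example/carbon")
policy = build_policy("only-tier1-client", "BPNL000000000001")
```

In a managed setup these two dictionaries become the bodies of two management-API calls; the point for Leila is that the whole provider side reduces to payloads she can version-control and test, not infrastructure she operates.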
Quote
“I shipped the Catena-X integration. Now I need it to stay shipped.”
Related
- builder, the archetype Leila grounds (Application Builder sub-type)
- dataspace-connectivity-as-a-product-feature, Leila’s two-phase journey, ship then provision
- builder-playbook, sales playbook for the builder archetype
- kaphera-cloud-managed-server, the managed control-plane API Leila integrates against
- kaphera-cloud-server, the self-hosted control plane for the provisioner phase
- lars-hoffmann, same archetype, infrastructure depth rather than integration depth