Lars Hoffmann

What he needs: A portable connector stack he can deploy for clients in a day and hand off without a knowledge transfer. What he will not do: Become a permanent EDC specialist or rebuild the same stack from scratch each engagement. Why he buys: Faster client delivery and escape from the EDC maintenance trap.

Senior platform engineer at a systems integrator, nine years of Kubernetes and DevOps experience, made the de facto EDC expert at his firm by an unrequested project eighteen months ago. He wants to ship a working connector in a day, hand a portable foundation to colleagues, and stop reassembling the same EDC stack from scratch on every client engagement.

Role: Senior Platform Engineer, mid-size systems integrator (automotive and industrial clients)


Background

Lars studied computer science at TU Berlin and has spent nine years in platform and infrastructure engineering, the last four at a systems integrator focused on automotive and industrial clients. He came up through a DevOps background and built his core skills running production Kubernetes clusters before managed services made that straightforward. He is comfortable with Go, knows enough Rust to read and review it, and has strong opinions about GitOps, operator-pattern architecture, and secrets management in production. Eighteen months ago he was handed his first dataspace project with a six-week timeline and documentation that described what the components were but not how to run them, an experience that made him the de facto EDC expert at his organisation, a title he did not seek and would happily give up.

Responsibilities

Lars designs and delivers the infrastructure layer for client projects involving data exchange, connectors, identity systems, API integrations, and the Kubernetes environments they run on. He owns the technical architecture of each project from discovery through to handoff, writes the runbooks his colleagues need to operate what he builds, and is the person his manager calls when a client’s dataspace deadline is approaching and nothing is working yet. He also informally evaluates new tooling for his team, maintaining a short list of components and operators he trusts enough to reach for first on a new project.

Challenges

Each dataspace project requires Lars to reassemble most of the same infrastructure from scratch, because nothing he builds is cleanly portable across clients or reusable by colleagues without a significant knowledge transfer. The EDC Connector stack (control plane, identity, credentials, secrets, database) is complex enough that every implementation surfaces new edge cases, and the upstream documentation rarely covers what production actually requires. He has become the person at his organisation who knows how this works, which means he owns that expertise indefinitely. He is also working against external deadlines (regulatory timelines, client commitments, consortium requirements) that do not adjust to account for how long the infrastructure layer takes to build correctly.

Goals

Lars wants to ship reliable dataspace infrastructure for clients without becoming a permanent EDC specialist. He wants deployments that are portable, something he can hand to a colleague without a two-hour knowledge transfer, and foundations he can reuse across projects rather than rebuilding from scratch each time. He wants to be confident that what he puts in front of a client is production-grade and auditable: something he can stand behind when a client asks how it works or whether it can be inspected. In the longer term, he wants to establish a standard delivery pattern for dataspace projects at his organisation that makes the next project faster than the last.

Technology use

Lars works in a Kubernetes-native environment and manages infrastructure as code using Terraform. He uses ArgoCD for GitOps-based deployment and relies on Vault for secrets management. He evaluates open-source tooling by reading the source before committing to it on a client project; if he cannot inspect it, he does not trust it. He follows the Eclipse EDC project on GitHub and participates in the EDC community Discord, where he tracks upstream breaking changes and evaluates new tooling through community discussion. He occasionally attends the IDSA Ecosystem Building Call, a weekly session of roughly 50 to 60 people across the dataspace ecosystem, and follows Tractus-X GitHub Discussions for anything specific to the Catena-X profile. These are not places he goes looking for products; they are where he first hears a name mentioned by someone whose work he already respects, which is what drives him to GitHub to form his own view.

Needs from Kaphera Cloud

Lars needs an operator that abstracts the full EDC stack (control plane, identity, credentials, secrets, database) without hiding how it works. He needs to be able to run it on a client’s own infrastructure if required, which means Apache 2.0 licensing is not a nice-to-have; it is a prerequisite for some clients. He needs the managed platform to be demonstrably easier to operate than self-hosting, so that upgrading is a rational decision rather than a forced one. He needs connector profiles (MDS, Tractus-X) that are pre-built and validated, so that the dataspace-specific credential and trust anchor work does not fall on him to figure out from consortium documentation. He needs a CLI and Terraform provider that fit into his existing delivery toolchain without requiring him to context-switch into a web console. And he needs confidence that what he adopts today will track upstream EDC changes without requiring him to manage the upgrade path manually on every client deployment.


Quote

“I don’t want to be the EDC expert. I want to be the engineer who shipped a working connector in a day and moved on to the actual integration work.”