Let me be upfront about one thing before we get into the architecture: the role ended because the business unit was closed in a strategic restructuring. Not because Mo broke anything. The systems were running fine. Fintech is weird like that — you can ship excellent infrastructure and then watch the org chart fold under you anyway. Anyway. On to the actually interesting part.

What “event-driven credit assessment” means when you stop being polite about it

Here is the version that shows up on every fintech job description: “high-volume transaction processing in an event-driven architecture.” Here is what it actually means at a loan marketplace doing serious volume:

A borrower submits an application. That application is not a row in a database that a cron job will pick up eventually. It is an event, immediately, and a cluster of downstream services is waiting for it: the credit-scoring pipeline, the fraud-detection layer, the regulatory reporting hooks, the partner lender routing logic. Each of those is its own domain, each has its own failure mode, and the requirement is that none of them block any of the others. The happy path has to work. The unhappy paths — a network timeout on a credit bureau call, a partner lender API that starts throttling — have to be handled without the borrower knowing that anything happened at all.
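
The fan-out above can be sketched with a toy in-memory log. This is a simulation, not real Kafka: the event name, consumer group names, and the `Topic` class are all illustrative. The point it demonstrates is that every consumer tracks its own position, so a slow or failing consumer blocks nobody else.

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    """Toy append-only log: one event, many independent consumers."""
    events: list = field(default_factory=list)
    offsets: dict = field(default_factory=dict)  # consumer group -> next offset

    def publish(self, event):
        self.events.append(event)

    def poll(self, group):
        """Each group advances its own offset; a slow group blocks nobody."""
        pos = self.offsets.get(group, 0)
        if pos >= len(self.events):
            return None
        self.offsets[group] = pos + 1
        return self.events[pos]

applications = Topic()
applications.publish({"type": "application_submitted", "borrower_id": "b-123"})

# Scoring, fraud, reporting, and routing all read the same event independently.
for group in ("credit-scoring", "fraud-detection", "regulatory-reporting", "lender-routing"):
    event = applications.poll(group)
    assert event["type"] == "application_submitted"
```

Note that publishing happened exactly once; the independence lives entirely on the consumer side.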

Kafka is the backbone here because you need durability, replay, and independent consumer progress. The credit bureau call fails? Your consumer retries from the committed offset with a clean state. You need to replay three days of events because a downstream service had a bug? You rewind the consumer. You need to onboard a new partner lender that needs to process events that already happened? Replay again. This is why teams in this space reach for Kafka and not just a message queue: queues forget, Kafka remembers.
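
The retry-from-committed-offset and replay behaviors can be shown with the same kind of toy simulation. Again, this is a sketch of the semantics, not a Kafka client; `ConsumerGroup` and its methods are invented for illustration.

```python
class ConsumerGroup:
    """Toy committed-offset tracking: retry resumes, replay rewinds."""
    def __init__(self):
        self.committed = 0

    def process(self, log, handler):
        # Resume from the last committed offset, not from wherever we crashed.
        for offset in range(self.committed, len(log)):
            handler(log[offset])
            self.committed = offset + 1

    def rewind_to(self, offset):
        # Replay: reset the committed offset and reprocess history from the log.
        self.committed = offset

log = ["evt-1", "evt-2", "evt-3"]
seen = []
group = ConsumerGroup()
group.process(log, seen.append)   # first pass over three days of events
group.rewind_to(0)                # downstream bug fixed: replay everything
group.process(log, seen.append)   # events are re-delivered from the durable log
assert seen == ["evt-1", "evt-2", "evt-3"] * 2
```

A plain message queue cannot do the second pass: once a message is acknowledged, it is gone. The durable log is what makes rewind possible.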

The regulatory constraint shapes everything downstream

Fintech infrastructure is not just an API that calls external services. Every decision — every loan offer, every credit limit, every rejection — has to be explainable. This is not optional; it is a legal requirement in Germany and across the EU: GDPR, BaFin supervision, the EU Consumer Credit Directive. The audit trail is not a nice-to-have; it is the product.

This changes how you design the event pipeline. You cannot just process-and-forget. Events need to be immutable, attributed, and retrievable. The schema for an event in a credit workflow carries more metadata than payload in some cases — who initiated the decision, which model version was consulted, which rule fired, what the input data looked like at the time the decision was made. All of that has to survive for years.
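
A sketch of what such an event shape can look like, with the audit metadata sitting next to the payload. Every field name here is illustrative, not the real schema; the structural point is that the metadata often outweighs the decision itself.

```python
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)  # frozen: events are immutable once recorded
class CreditDecisionEvent:
    # Payload: the decision itself.
    decision: str                 # e.g. "offer", "reject"
    amount_eur: int
    # Audit metadata: often larger than the payload, retained for years.
    event_id: str
    occurred_at: str              # ISO-8601 timestamp
    initiated_by: str             # service or user that triggered the decision
    model_version: str            # which scoring model was consulted
    rule_fired: str               # which rule produced the outcome
    input_snapshot: dict = None   # input data as it looked at decision time

event = CreditDecisionEvent(
    decision="offer", amount_eur=15000,
    event_id="evt-9f2", occurred_at="2024-01-12T09:30:00Z",
    initiated_by="scoring-service", model_version="score-v42",
    rule_fired="dti_under_threshold",
    input_snapshot={"net_income_eur": 3200, "existing_debt_eur": 400},
)
record = json.dumps(asdict(event))  # what actually lands in long-term storage
assert "score-v42" in record
```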

On Kubernetes this means you’re also thinking about exactly-once semantics at the consumer side, idempotent handlers, and dead-letter queues that are actually monitored instead of the kind that accumulate silently until someone notices the alerts were muted. Ask me how many dead-letter queues I’ve seen that were treated as write-only storage.
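
The idempotent-handler-plus-DLQ pattern, reduced to its skeleton. The in-memory set and list stand in for a durable dedup store and a monitored DLQ topic; the `poison` flag is a contrived stand-in for any unprocessable event.

```python
processed = set()   # in production: a durable store keyed by event id
dead_letters = []   # in production: a monitored DLQ topic, not write-only storage

def handle(event):
    if event["id"] in processed:
        return "skipped"            # redelivery after a crash: apply nothing twice
    try:
        if event.get("poison"):
            raise ValueError("unprocessable event")
        # ... real side effects would go here ...
        processed.add(event["id"])
        return "ok"
    except ValueError:
        dead_letters.append(event)  # park it where alerting can actually see it
        return "dead-lettered"

assert handle({"id": "e1"}) == "ok"
assert handle({"id": "e1"}) == "skipped"           # at-least-once delivery, applied once
assert handle({"id": "e2", "poison": True}) == "dead-lettered"
assert len(dead_letters) == 1
```

Exactly-once processing on top of at-least-once delivery is exactly this: dedup on the consumer side, keyed by an identifier the producer guarantees is stable.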

Shipping AI/LLM automation in a regulated backend is a different problem

Here is the thing about AI integration in financial services that is missing from most conference talks: the model is not the hard part. Getting LLM automation past compliance review, inside a production pipeline, in a regulated financial environment — that is the hard part.

When I was doing this work at auxmoney, the challenge was not capability. Modern LLMs are genuinely useful for document understanding, classification, extracting structured data from unstructured inputs. The challenge was governance. How do you explain a decision that an LLM assisted in? How do you version the model such that a decision made in January can be reproduced and explained in October when a regulator asks? How do you demonstrate whether the model's output is used as a signal in a human-in-the-loop flow or in a fully automated one, and how is that documented?

The answers involve a lot of engineering that is invisible to the user. A model call in a credit workflow is not a raw API call; it is a versioned, logged, auditable invocation with a stable prompt template pinned to a specific model version, with output stored alongside the decision record. The LLM becomes a component in an auditable pipeline, not a black box you call and forget. This is more conservative than what most teams do in consumer apps, and it should be. The stakes are different.
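
A minimal sketch of what such a wrapper can look like. Everything here is an assumption for illustration: the function name, the stubbed model client, the template, and the audit-log shape. What it demonstrates is the invariant: template hash, pinned model version, inputs, and output all land in the audit record together.

```python
import hashlib, json
from datetime import datetime, timezone

AUDIT_LOG = []  # in production: immutable storage next to the decision record

def audited_model_call(prompt_template, variables, model_version, call_model):
    """Wrap a model call so the invocation is reproducible and explainable.
    `call_model` is the raw client; everything around it is pinned and logged."""
    prompt = prompt_template.format(**variables)
    output = call_model(prompt)
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # pinned explicitly, never "latest"
        "prompt_template_hash": hashlib.sha256(prompt_template.encode()).hexdigest(),
        "inputs": variables,              # inputs as seen at decision time
        "output": output,
    })
    return output

# A stub stands in for the real client; template and version are illustrative.
fake_model = lambda prompt: "document_type=payslip"
result = audited_model_call(
    "Classify this document: {text}", {"text": "..."},
    model_version="doc-classifier-2024-01", call_model=fake_model,
)
assert result == "document_type=payslip"
assert AUDIT_LOG[0]["model_version"] == "doc-classifier-2024-01"
```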

Playing Captain in a legacy PHP + Java codebase

“Highly hands-on in the codebase as a Playing Captain” is corporate for: I was a principal architect who also wrote code, reviewed code, and fixed things when they were on fire. In a fintech codebase that has been running for a decade, this means navigating the actual archaeology of what exists. PHP services that predate the Kubernetes migration sitting next to Java microservices that predate the Kafka adoption sitting next to fresh Go utilities that someone wrote last quarter.

The architecture work is inseparable from the existing code’s reality. You cannot design a beautiful new event-driven credit pipeline without accounting for the PHP service that is the current source of truth for borrower state and is not being rewritten this quarter. So you build adapters, you publish domain events from the boundaries of the legacy systems, you define the contract at the Kafka topic level and let each service own its own implementation. This is not glamorous work. It is the only work that actually ships.
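
The adapter pattern at that boundary can be sketched like this. The legacy field names (including the German ones) and state values are invented for the example; the real point is the shape: legacy row in, topic-contract event out, with a schema version the consumers can rely on.

```python
def borrower_state_to_event(legacy_row):
    """Anti-corruption adapter: translate the legacy service's row shape
    into the domain event the Kafka topic contract expects."""
    return {
        "type": "borrower_state_changed",
        "schema_version": 1,   # the contract lives at the topic level
        "borrower_id": str(legacy_row["kunde_id"]),
        "state": {"AKTIV": "active", "GESPERRT": "blocked"}[legacy_row["status"]],
    }

event = borrower_state_to_event({"kunde_id": 42, "status": "AKTIV"})
assert event == {
    "type": "borrower_state_changed",
    "schema_version": 1,
    "borrower_id": "42",
    "state": "active",
}
```

The legacy service never learns about Kafka, and the consumers never learn about the legacy schema; the adapter is the only code that knows both.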

AWS underneath all of it: EKS for the Kubernetes workloads, MSK for the managed Kafka, RDS for the relational state that needs to be relational. The infrastructure is not exotic. The problems are in the domain, not the stack.

What this kind of work actually leaves you with

Lending infrastructure done right is one of the most technically demanding domains I've worked in. The combination of high transaction volume, hard latency requirements, regulatory explainability, and the financial consequences of getting it wrong produces an engineering discipline I haven't seen replicated in many other environments.

I’ve shipped healthcare systems at hospital scale (EHR, pharmacy dispensing, clinical workflows), I’ve built consumer apps, I’ve done the enterprise integration archaeology. Fintech credit infrastructure sits in a specific category: it punishes sloppiness at every layer, and when you get it right, you’ve built something that is genuinely robust. The business unit being closed does not change what the system was doing before the org chart changed.

I’m currently between W-2 seats, doing open-source work on Fulcrum — a local-first agent control plane — and consulting. If you’re building event-driven financial infrastructure and want to talk architecture, you know where to find me.