Over the past three years, I’ve been working with a major European energy company (28,000+ employees) to rebuild their gas trading platform. Here’s an anonymized look at the project.

The Challenge

The existing system had grown organically over 15 years. It worked, but barely:

  • Manual processes everywhere. Gas nominations, the daily process of declaring how much gas you’ll deliver, required hours of spreadsheet work.
  • Batch-based pricing. Position and P&L calculations ran overnight. Traders couldn’t see real-time exposure.
  • Siloed data. Market data, positions, and trades lived in different systems that didn’t talk to each other.
  • Scaling limits. The architecture couldn’t handle increasing market volatility and transaction volumes.

The business goal: real-time visibility into positions and risk, automated nominations, and a platform that could scale with market growth.

The Solution

We didn’t do a big-bang rewrite. Instead, we built new components alongside the legacy system and migrated incrementally.

Architecture

                    +------------------+
                    |   Market Data    |
                    |   (External)     |
                    +--------+---------+
                             |
                             v
+----------------+  +--------+---------+  +----------------+
|   Gas          |  |   Ingestion      |  |   Price        |
|   Scheduling   |<-|   Service        |->|   Engine       |
+----------------+  +------------------+  +----------------+
        |                   |                     |
        v                   v                     v
+-------+-------------------+---------------------+-------+
|                          Kafka                          |
+-------+-------------------+---------------------+-------+
        |                   |                     |
        v                   v                     v
+----------------+  +------------------+  +----------------+
|   Nomination   |  |   Position       |  |   Risk         |
|   Service      |  |   Calculator     |  |   Service      |
+----------------+  +------------------+  +----------------+
        |                   |                     |
        +-------------------+---------------------+
                            |
                            v
                    +-------+--------+
                    |   Trading UI   |
                    |   (Angular)    |
                    +----------------+

Kafka sits at the center. Every price update, every trade, every position change flows through it. Services consume what they need and produce what they calculate.
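I can’t share the actual event schemas, but a toy version in plain Java records looks roughly like this. All names and fields here are illustrative, not the project’s real topics or payloads; the point is that every event carries the identifiers downstream services key on:

```java
import java.math.BigDecimal;
import java.time.Instant;

public class EventFlow {
    // Published by the Ingestion Service (e.g. to a "market-data" topic).
    public record PriceUpdate(String instrument, BigDecimal price, Instant ts) {}

    // Published by trade capture; consumed by the Position Calculator and Risk Service.
    public record TradeEvent(String tradeId, String instrument, String portfolio,
                             String trader, BigDecimal quantity, BigDecimal price,
                             Instant ts) {}

    // Published by the Position Calculator; consumed by the UI and Nomination Service.
    public record PositionUpdate(String instrument, String portfolio,
                                 BigDecimal netQuantity, Instant ts) {}

    public static void main(String[] args) {
        var trade = new TradeEvent("T-1", "TTF-DA", "book-a", "trader-1",
                new BigDecimal("100"), new BigDecimal("32.50"), Instant.now());
        System.out.println(trade.instrument());
    }
}
```

Keying events by instrument and portfolio is what lets each service subscribe narrowly instead of filtering the full firehose.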

Key Components

Ingestion Service

Connects to external market data providers (ICE, European Energy Exchange) and normalizes data into a common format. Handles connection failures, data validation, and deduplication.
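The deduplication step matters more than it sounds: provider feeds redeliver ticks after a reconnect. A minimal sketch of the idea, assuming each normalized update carries a provider-assigned sequence number (an assumption; real feeds vary):

```java
import java.math.BigDecimal;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

public class Deduplicator {
    public record Tick(String provider, long sequence, String instrument,
                       BigDecimal price, Instant ts) {}

    // Highest sequence processed per provider; redeliveries arrive with
    // an equal or lower sequence and are filtered out.
    private final Map<String, Long> lastSeen = new HashMap<>();

    /** Returns true if the tick is new and should be published downstream. */
    public boolean accept(Tick tick) {
        Long prev = lastSeen.get(tick.provider());
        if (prev != null && tick.sequence() <= prev) {
            return false; // duplicate or out-of-order redelivery
        }
        lastSeen.put(tick.provider(), tick.sequence());
        return true;
    }
}
```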

Position Calculator

Consumes trades and market data. Calculates real-time positions per instrument, per portfolio, per trader. Publishes position updates to Kafka. The UI subscribes to these updates via WebSocket.
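At its core the roll-up is a keyed running sum. A stripped-down sketch, assuming signed trade quantities (buys positive, sells negative) and illustrative field names:

```java
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;

public class PositionCalculator {
    public record Trade(String instrument, String portfolio, String trader,
                        BigDecimal quantity) {}
    public record PositionKey(String instrument, String portfolio, String trader) {}

    // Net position per (instrument, portfolio, trader).
    private final Map<PositionKey, BigDecimal> positions = new HashMap<>();

    /** Applies a trade and returns the updated net position for its key. */
    public BigDecimal apply(Trade trade) {
        var key = new PositionKey(trade.instrument(), trade.portfolio(), trade.trader());
        return positions.merge(key, trade.quantity(), BigDecimal::add);
    }
}
```

In the real service this state is rebuilt from Kafka on startup, so the calculator itself stays stateless from an operational point of view.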

Nomination Service

Automates gas nominations. Takes position data, applies scheduling rules, generates nomination files, and submits to grid operators. What used to take hours of manual work now runs automatically.
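To give a flavor of one scheduling rule: grid operators typically want whole units per hour, so a daily net position has to be spread over 24 hours without losing the rounding remainder. This is a hedged illustration, not an actual grid-operator format or the service’s real rule set:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.ArrayList;
import java.util.List;

public class NominationBuilder {
    /** Splits a daily net quantity (MWh) into 24 whole-unit hourly values. */
    public static List<BigDecimal> hourlySchedule(BigDecimal dailyQuantity) {
        BigDecimal hourly = dailyQuantity.divide(
                BigDecimal.valueOf(24), 0, RoundingMode.DOWN);
        BigDecimal remainder = dailyQuantity.subtract(
                hourly.multiply(BigDecimal.valueOf(24)));
        List<BigDecimal> schedule = new ArrayList<>();
        for (int h = 0; h < 24; h++) {
            // Fold the rounding remainder into hour 0 so the day still balances.
            schedule.add(h == 0 ? hourly.add(remainder) : hourly);
        }
        return schedule;
    }
}
```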

Trading UI

Angular frontend with real-time updates. Traders see positions, P&L, and market data updating live. No more waiting for batch runs.
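On the server side, the live updates boil down to a fan-out from Kafka consumers to connected WebSocket sessions. A toy, framework-free version of that fan-out (in the real system the `Consumer<String>` would be a WebSocket session; all names are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class PositionBroadcaster {
    // Clients subscribed per portfolio; a real implementation would also
    // handle unsubscribe and concurrent access.
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    public void subscribe(String portfolio, Consumer<String> client) {
        subscribers.computeIfAbsent(portfolio, k -> new ArrayList<>()).add(client);
    }

    /** Pushes a position update (e.g. JSON) to every subscriber of the portfolio. */
    public void publish(String portfolio, String updateJson) {
        subscribers.getOrDefault(portfolio, List.of())
                   .forEach(client -> client.accept(updateJson));
    }
}
```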

Tech Stack

Layer       Technology
---------   ------------------------
Backend     Java 17, Spring Boot 3.x
Messaging   Apache Kafka (Confluent)
Frontend    Angular 16
Database    PostgreSQL, Redis
Cloud       AWS (EKS, RDS, MSK)
CI/CD       GitLab, ArgoCD

Results

After 18 months of incremental migration:

Real-time positions. Traders now see position updates within seconds of a trade, not the next morning.

Automated nominations. 80% of nominations are now fully automated. The remaining 20% require human review for edge cases.

Faster onboarding. New instruments can be added in days, not months. The old system required significant development work for each new product.

Improved reliability. Zero unplanned downtime during peak trading hours in the past year. The event-driven architecture handles spikes gracefully.

Regulatory compliance. Full audit trail for every trade and position change. Regulators can request transaction history and get answers in minutes.

Lessons Learned

Incremental migration works. We ran old and new systems in parallel for months. This reduced risk and let us validate the new system with real data before cutting over.

Invest in observability early. We built dashboards and alerting from day one. When issues arose (and they did), we could diagnose them quickly.

Domain knowledge matters. Understanding gas trading (nominations, balancing, imbalance charges) was as important as technical skills. I spent the first month just learning the business.

Event sourcing is powerful but complex. Storing all events lets you rebuild any historical state. It also means you need robust tooling for replaying events and debugging.
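The replay idea itself is simple: the state at any point in time is a left fold over the events recorded up to that point. A toy illustration (the real system replays from Kafka, and the names here are made up for the example):

```java
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Replay {
    public record TradeEvent(long sequence, String instrument, BigDecimal quantity) {}

    /** Rebuilds net positions per instrument from events up to maxSequence. */
    public static Map<String, BigDecimal> positionsAt(
            List<TradeEvent> log, long maxSequence) {
        Map<String, BigDecimal> positions = new HashMap<>();
        for (TradeEvent e : log) {
            if (e.sequence() > maxSequence) break; // log is ordered by sequence
            positions.merge(e.instrument(), e.quantity(), BigDecimal::add);
        }
        return positions;
    }
}
```

The complexity isn’t in this fold; it’s in everything around it: schema evolution, replay speed on years of events, and tooling to diff a replayed state against production.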

My Role

I joined as a senior full-stack engineer and gradually took on architecture responsibilities:

  • Designed the Kafka topology and event schemas
  • Built the position calculation engine
  • Led the migration from legacy batch processing
  • Mentored junior developers on event-driven architecture
  • Coordinated with trading desk on requirements

Tech Deep Dives

I’ve written more detailed posts on specific aspects:


Interested in similar work? Reach out: [email protected]