Scaling Order Tracking with Kafka, Domain Events, and Auto-Ingestion

I’ve been thinking about how to keep evolving my order-tracking project, and once the basic functional records were in place, the next natural step was to add a minimal yet solid scalability layer. After all, what’s the point of a clean domain model if it can’t handle growth? And what better way to achieve that than by moving to an event-driven architecture with Kafka? That shift let me decouple ingestion from processing, improve throughput, and prepare the service for future enhancements without locking myself into synchronous bottlenecks.

The problem we were solving

Originally, the service exposed a REST endpoint that synchronously inserted tracking events into the database (one or many per POST). That ties request latency to DB work and caps horizontal scalability.

The goal: decouple ingestion from processing so the REST layer stays thin and we can scale processing independently. Enter domain events + Kafka.

Why Redpanda for Kafka?

I wanted the Kafka API without the operational overhead for local/dev and small deployments. Redpanda speaks the Kafka protocol (drop-in compatible with Kafka clients) and ships as a single binary, which makes Docker Compose setups trivial and fast. That let me focus on the app’s design instead of babysitting infra.

Same Kafka clients, simpler ops for local/dev; we can move to managed Kafka later with zero code changes.

The design (DDD/Hexagonal)

At a high level:

  sequenceDiagram
    participant Client
    participant REST as REST API
    participant Domain as Domain Layer
    participant Kafka as Kafka (Redpanda)
    participant Consumer as Tracking Processor
    participant DB as Database
    Client->>REST: POST /order/tracking (batch)
    REST->>Domain: publish(TrackingEvent) x N
    Domain->>Kafka: send to "order-tracking.events" (key = orderId)
    Kafka-->>Consumer: messages (ordered per key)
    Consumer->>Consumer: map to domain
    Consumer->>DB: update timeline + append event

Key ideas:

- Domain events (TrackingEvent) are raised in the application layer and published through a port, so the domain never touches Kafka directly.
- Messages are keyed by orderId, which preserves per-order ordering within a partition.
- The REST layer stays thin: it validates, publishes, and returns 202 Accepted; the real work happens asynchronously in the consumer.
- The consumer reuses the existing domain logic unchanged.

Code walk-through

1) Domain event

JAVA
// app/src/main/java/com/egobb/orders/domain/event/TrackingEvent.java
public record TrackingEvent(String orderId, Status status, Instant eventTs)
    implements DomainEvent {}

Simple, immutable, and serializable.
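
The record leans on a marker interface and a status enum. A minimal sketch of what those supporting types might look like (the real ones live in the project’s domain package and likely define more statuses):

JAVA
// Hypothetical sketch of the supporting types assumed by TrackingEvent.
public interface DomainEvent {} // marker interface, no behaviour

public enum Status {
  PLACED,
  SHIPPED; // only the statuses used in the examples below

  public static Status fromString(String value) {
    return Status.valueOf(value.trim().toUpperCase());
  }
}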

2) Publishing via a domain event handler (infra)

JAVA
// application port
public interface PublishDomainEventPort {
  void publish(DomainEvent event);
}

…and implement it in infrastructure, listening to domain events and publishing to Kafka:

JAVA
@Component
@RequiredArgsConstructor
public class TrackingEventDomainHandler implements PublishDomainEventPort {
  private final TrackingEventPublisher publisher;

  @Override
  public void publish(DomainEvent event) { this.onDomainEvent(event); }

  @EventListener
  public void onDomainEvent(DomainEvent event) {
    if (event instanceof TrackingEvent ev) {
      final var msg = TrackingEventMapper.toMsg(ev);
      this.publisher.publish(ev.orderId(), msg); // key = orderId
    }
  }
}

This keeps the domain clean and places Kafka concerns in infrastructure. Using orderId as the message key guarantees per-order ordering within a partition.
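
On the other side of the port, the ingestion use case only needs PublishDomainEventPort. A minimal sketch of the application layer (the class name is illustrative, not the project’s):

JAVA
// Hypothetical application-layer service behind ProcessTrackingUseCase.
// It raises one domain event per tracking entry; Kafka stays hidden behind the port.
@Service
@RequiredArgsConstructor
public class IngestTrackingService implements ProcessTrackingUseCase {
  private final PublishDomainEventPort publishPort;

  @Override
  public void ingestBatch(List<TrackingEvent> events) {
    events.forEach(this.publishPort::publish);
  }
}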

3) Message mapping

JAVA
public class TrackingEventMapper {
  public record TrackingEventMsg(String orderId, String status, Instant eventTs, String schemaVersion) {}
  public static TrackingEventMsg toMsg(TrackingEvent ev) {
    return new TrackingEventMsg(ev.orderId(), ev.status().toString(), ev.eventTs(), "v1");
  }
  public static TrackingEvent toDomain(TrackingEventMsg msg) {
    return new TrackingEvent(msg.orderId(), Status.fromString(msg.status()), msg.eventTs());
  }
}

A tiny wire model decoupled from domain and versioned (schemaVersion = "v1").
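
As a quick sanity check, the mapping round-trips cleanly; a small usage sketch with values borrowed from the curl example further down:

JAVA
// Round-trip sketch: domain -> wire model -> domain.
final var ev   = new TrackingEvent("A-1", Status.PLACED, Instant.parse("2025-09-22T08:00:00Z"));
final var msg  = TrackingEventMapper.toMsg(ev);     // orderId="A-1", status="PLACED", schemaVersion="v1"
final var back = TrackingEventMapper.toDomain(msg); // equal to ev; the schemaVersion tag is dropped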

4) The Kafka producer

JAVA
@Component
@RequiredArgsConstructor
public class TrackingEventPublisher {
  private final KafkaTemplate<String, TrackingEventMapper.TrackingEventMsg> template;

  public void publish(String key, TrackingEventMapper.TrackingEventMsg msg) {
    this.template.send(KafkaTopics.TRACKING_EVENTS, key, msg);
  }
}

Both the producer and the consumer reference a shared topic constant.
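
It isn’t shown in the post’s snippets, so here is a minimal sketch, assuming the topic name from the sequence diagram above:

JAVA
// Hypothetical constants holder; producer and consumer must agree on the topic name.
public final class KafkaTopics {
  public static final String TRACKING_EVENTS = "order-tracking.events";

  private KafkaTopics() {}
}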

5) The Kafka consumer

JAVA
@Slf4j
@Component
@RequiredArgsConstructor
public class TrackingEventListener {
  private final TrackingProcessor processor;

  @KafkaListener(
    topics = KafkaTopics.TRACKING_EVENTS,
    groupId = "order-tracking-processor",
    concurrency = "6")
  public void onMessage(TrackingEventMapper.TrackingEventMsg msg) {
    log.info(">>> Consumed from Kafka: {}", msg);
    final var domain = TrackingEventMapper.toDomain(msg);
    this.processor.processOne(domain);
  }
}
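
One caveat: concurrency = "6" only buys parallelism if the topic has at least six partitions. A minimal sketch of declaring the topic with Spring’s TopicBuilder (partition and replica counts are illustrative, not taken from the project):

JAVA
// Hypothetical topic declaration; Spring Boot’s KafkaAdmin creates it on startup if missing.
@Configuration
public class KafkaTopicConfig {

  @Bean
  public NewTopic trackingEventsTopic() {
    return TopicBuilder.name(KafkaTopics.TRACKING_EVENTS)
        .partitions(6) // match (or exceed) the listener concurrency
        .replicas(1)   // single broker in local dev
        .build();
  }
}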

6) The REST endpoint (now asynchronous)

JAVA
@RestController
@RequestMapping(path = "/order/tracking")
@RequiredArgsConstructor
public class TrackingController {
  private final ProcessTrackingUseCase useCase;

  @PostMapping(consumes = {MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
  public ResponseEntity<Void> ingest(@Valid @RequestBody TrackingEventsDTO body) {
    final var events = body.event().stream().map(this::toDomain).toList();
    this.useCase.ingestBatch(events);
    return ResponseEntity.accepted().build(); // 202
  }
}

Why 202? Because actual processing is asynchronous behind Kafka.
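
For completeness, the request body maps onto a small pair of DTOs along these lines (inferred from body.event() and the curl payload below; the project’s actual classes may differ):

JAVA
// Hypothetical request DTOs matching the JSON shape used in the curl example.
public record TrackingEventsDTO(@NotEmpty @Valid List<TrackingEventDTO> event) {}

public record TrackingEventDTO(
    @NotBlank String orderId,
    @NotBlank String status,
    @NotNull Instant eventTs) {}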

7) Processing logic is unchanged (by design)

JAVA
public void processOne(TrackingEvent e) {
  final var maybe = repository.findById(e.orderId());
  final var tl = maybe.orElse(new OrderTimeline(e.orderId(), null, null));
  if (!sm.canTransition(tl.lastStatus(), e.status())) return;
  tl.apply(e.status(), e.eventTs());
  repository.save(tl);
  appender.append(e);
}

The consumer reuses the exact same domain logic you had before; we only changed how events get there.
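
The sm.canTransition guard is what shields the timeline from out-of-order or duplicate events. A minimal sketch of such a state machine, using only the statuses that appear in the examples (the real transition table is the project’s own):

JAVA
// Hypothetical transition rules; the real state machine may cover more statuses.
@Component
public class StatusStateMachine {

  public boolean canTransition(Status from, Status to) {
    if (from == null) {
      return to == Status.PLACED; // first event ever seen for this order
    }
    return from == Status.PLACED && to == Status.SHIPPED;
  }
}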

Kafka/Redpanda configuration

YAML
spring:
  kafka:
    bootstrap-servers: localhost:19092
    producer:
      acks: all
      retries: 10
      properties:
        enable.idempotence: true
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    consumer:
      group-id: order-tracking-processor
      auto-offset-reset: latest
      properties:
        isolation.level: read_committed
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
    listener:
      ack-mode: RECORD

Redpanda’s Docker Compose quickstarts make exposing a single broker plus the Console straightforward; this project uses that style of layout for local dev.

Running it locally

  1. Start infra:
BASH
make up   # boots Postgres + Adminer + Redpanda (Kafka API)
  2. POST a batch:
BASH
curl -X POST http://localhost:8080/order/tracking \
  -H 'Content-Type: application/json' \
  -d '{
        "event": [
          {"orderId":"A-1","status":"PLACED","eventTs":"2025-09-22T08:00:00Z"},
          {"orderId":"A-1","status":"SHIPPED","eventTs":"2025-09-22T10:00:00Z"},
          {"orderId":"B-7","status":"PLACED","eventTs":"2025-09-22T09:00:00Z"}
        ]
      }'
  3. Watch the consumer logs and inspect the DB/Adminer.

(Tip: document the Redpanda Console URL in your README if you expose it in Compose.)

Integration testing

What to test (ideas):

- End to end: POST a batch, assert the 202, then check that the consumer updates the timeline and appends the events in the DB (a sketch follows below).
- Ordering: events for the same orderId are consumed in order thanks to the orderId key.
- Guard rails: invalid status transitions are ignored and leave the timeline untouched.
- Mapping: TrackingEventMapper round-trips between the wire model and the domain event.
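
A skeleton for the end-to-end case, assuming spring-kafka-test and Awaitility on the test classpath (repository and class names are illustrative):

JAVA
// Hypothetical end-to-end test: POST a batch, expect 202, then await the asynchronous DB update.
@SpringBootTest(
    webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT,
    properties = "spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}")
@EmbeddedKafka(partitions = 1, topics = "order-tracking.events")
class TrackingIngestionIT {

  @Autowired TestRestTemplate rest;
  @Autowired OrderTimelineRepository repository; // illustrative name for the timeline repository

  @Test
  void acceptedBatchEventuallyUpdatesTimeline() {
    final var headers = new HttpHeaders();
    headers.setContentType(MediaType.APPLICATION_JSON);
    final var body = """
        {"event":[{"orderId":"A-1","status":"PLACED","eventTs":"2025-09-22T08:00:00Z"}]}
        """;

    final var response =
        rest.postForEntity("/order/tracking", new HttpEntity<>(body, headers), Void.class);
    assertThat(response.getStatusCode()).isEqualTo(HttpStatus.ACCEPTED);

    // Processing runs behind Kafka, so poll until the consumer has written the timeline.
    await().atMost(Duration.ofSeconds(10))
        .untilAsserted(() -> assertThat(repository.findById("A-1")).isPresent());
  }
}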

Why the domain event handler pattern?

Why a private Kafka topic?

Next steps


Copyright Notice

Author: Enrique Goberna

Link: https://enriquegoberna.com/posts/scaling-order-tracking-with-kafka-domain-events-and-auto-ingestion/

License: CC BY-NC-SA 4.0

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Please attribute the source, use non-commercially, and maintain the same license.
