I used Kafka at work. Producing messages, consuming them, checking monitoring dashboards — that was the extent of it. I had never configured a cluster from scratch or made decisions ranging from topic design to consumer group strategy.
Hexagonal Architecture was similar. I understood the concept and followed port/adapter patterns in existing code, but I had never structured layers from an empty project.
I wanted to build it myself. So I decided to create a chat system.
Why Chat
Chat aligns naturally with Kafka’s pub/sub model. Publishing messages and delivering them to subscribers mirrors the core behavior of a chat system.
Real-time communication over WebSocket, event-driven architecture, message synchronization across multiple instances — I decided a single project could cover all three.
Technology Choices
Go + Java
I built the chat service in Go. Lightweight goroutine-based concurrency suited a WebSocket server well. The user authentication service used Java with Spring WebFlux. The Spring Security ecosystem provided solid OAuth2 + JWT support, and I was already familiar with the framework.
The API Gateway used Kotlin with Spring Cloud Gateway. It ran on the same reactive stack as the user-service, maintaining consistency within the JVM ecosystem.
MongoDB
Chat messages fit naturally into a document structure. Rooms and messages were loosely structured, and I expected frequent schema changes.
I started with Redis. It worked well for quick prototyping, but I switched to MongoDB when message persistence became necessary.
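A message document like the one above maps cleanly onto a Go struct with BSON tags. This is a minimal sketch — the field names and types are my assumptions, not the project's actual schema:

```go
package main

import (
	"fmt"
	"time"
)

// Message is a hypothetical document shape for a chat message.
// bson tags drive the MongoDB field names; json tags serve the API.
type Message struct {
	ID        string    `bson:"_id,omitempty" json:"id"`
	RoomID    string    `bson:"roomId" json:"roomId"`
	SenderID  string    `bson:"senderId" json:"senderId"`
	Content   string    `bson:"content" json:"content"`
	CreatedAt time.Time `bson:"createdAt" json:"createdAt"`
}

func main() {
	m := Message{RoomID: "room-1", SenderID: "user-42", Content: "hello", CreatedAt: time.Now()}
	fmt.Println(m.RoomID, m.Content)
}
```

Adding a field later (reactions, edit history) only touches this struct, which is what makes a document store forgiving for a schema that is still settling.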
Kafka KRaft
I configured Kafka in KRaft mode — Kafka managing its own metadata without depending on ZooKeeper. No need to operate a separate ZooKeeper cluster, which simplified the infrastructure.
I set up a 3-node cluster using Docker Compose, with each node serving as both controller and broker.
Architecture Evolution
The project was not designed all at once. It evolved incrementally through pull requests.
Starting Point
I started with two services: user-service (Java) and chat-service (Go). The chat-service handled WebSocket connections, room management, message storage, and broadcasting. Redis served as the data store.
Redis → MongoDB
Messages needed persistent storage. Redis was not suitable due to its in-memory nature, so I replaced it with MongoDB. During this process, I experienced the benefit of only needing to swap the repository layer — a direct advantage of Hexagonal Architecture.
Hexagonal Architecture Cleanup
I restructured the user-service first. Packages that had been loosely organized were rearranged into domain/entity, port/driving, port/driven, adapter/driving, and adapter/driven. I then applied the same structure to the chat-service.
Kafka Integration
I implemented the Kafka producer first, then added the consumer. This is when I encountered concurrency issues.
Race conditions occurred when users joined or left chat rooms while messages were being broadcast simultaneously. I introduced a two-level lock strategy in the RoomManager: an RWMutex at the RoomManager level for room list access, and a separate RWMutex per LiveRoom for participant access. This reduced contention.
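The two-level strategy can be sketched as follows. This is a simplified illustration under my own naming assumptions, not the project's actual RoomManager:

```go
package main

import (
	"fmt"
	"sync"
)

// LiveRoom guards its own participant set, so a broadcast in one
// room does not block joins or leaves in another.
type LiveRoom struct {
	mu      sync.RWMutex
	members map[string]bool
}

func (r *LiveRoom) Join(id string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.members[id] = true
}

func (r *LiveRoom) Broadcast(msg string) int {
	r.mu.RLock() // readers (broadcasts) proceed concurrently
	defer r.mu.RUnlock()
	n := 0
	for range r.members {
		n++ // a real broadcast would write msg to each connection
	}
	return n
}

// RoomManager's lock protects only the room map, not participants.
type RoomManager struct {
	mu    sync.RWMutex
	rooms map[string]*LiveRoom
}

func (m *RoomManager) Room(id string) *LiveRoom {
	m.mu.RLock()
	room, ok := m.rooms[id]
	m.mu.RUnlock()
	if ok {
		return room
	}
	m.mu.Lock()
	defer m.mu.Unlock()
	if room, ok := m.rooms[id]; ok { // re-check after upgrading the lock
		return room
	}
	room = &LiveRoom{members: map[string]bool{}}
	m.rooms[id] = room
	return room
}

func main() {
	mgr := &RoomManager{rooms: map[string]*LiveRoom{}}
	room := mgr.Room("room-1")
	room.Join("alice")
	room.Join("bob")
	fmt.Println(room.Broadcast("hi")) // 2 deliveries
}
```

The key point is lock granularity: the outer lock is held only long enough to look up a room, so a slow broadcast inside one LiveRoom never serializes traffic across the whole server.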
Service Separation
As the chat-service grew, I split it into messenger-service and message-service. The messenger-service handles Kafka producer/consumer and WebSocket connections. The message-service handles message storage and retrieval.
Fat Domain
Initially, domain entities only held data. I moved domain logic into entities and introduced the use case pattern in the application layer. Each use case has a single Handle method, responsible for one business operation.
Kafka as Chat Message Broker
The message flow:
```mermaid
sequenceDiagram
    participant C as WebSocket Client
    participant S as SendUseCase
    participant DB as MongoDB
    participant K as Kafka
    participant B as MessageBroker
    participant R as RoomManager
    C->>S: Send message
    S->>DB: Store message
    S->>K: Kafka publish
    K->>B: Consumer receives
    B->>S: OnReceive callback
    S->>R: Broadcast
    R->>C: WebSocket delivery
```
SendUseCase directly implements the MessageSubscriber interface and registers itself with the MessageBroker — the Observer pattern. When the consumer receives a message, it calls OnReceive on all registered subscribers. Each subscriber uses the RoomManager to deliver the message to every WebSocket client in the corresponding room.
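The Observer wiring can be reduced to a few lines. The type names follow the ones above, but the method signatures are my assumptions:

```go
package main

import "fmt"

// MessageSubscriber is the observer interface the broker fans out to.
type MessageSubscriber interface {
	OnReceive(roomID, payload string)
}

// MessageBroker holds the registered subscribers.
type MessageBroker struct {
	subscribers []MessageSubscriber
}

func (b *MessageBroker) Register(s MessageSubscriber) {
	b.subscribers = append(b.subscribers, s)
}

// Dispatch is what the Kafka consumer loop would call per record.
func (b *MessageBroker) Dispatch(roomID, payload string) {
	for _, s := range b.subscribers {
		s.OnReceive(roomID, payload)
	}
}

// SendUseCase implements MessageSubscriber and delivers via the RoomManager.
type SendUseCase struct{ delivered int }

func (u *SendUseCase) OnReceive(roomID, payload string) {
	u.delivered++ // a real implementation would call roomManager.Broadcast here
	fmt.Printf("deliver %q to room %s\n", payload, roomID)
}

func main() {
	broker := &MessageBroker{}
	uc := &SendUseCase{}
	broker.Register(uc) // the use case subscribes itself to consumed messages
	broker.Dispatch("room-1", "hello")
	fmt.Println(uc.delivered) // 1
}
```

Because the broker only knows the MessageSubscriber interface, new consumers of incoming messages (metrics, read receipts) can be added by registering another observer, without touching the consumer loop.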
The advantage is horizontal scaling. When multiple chat service instances run, a message from one instance reaches other instances through Kafka. Users connected to different instances can still exchange messages within the same room.
Retrospective
I started this project because I wanted hands-on experience with Kafka.
I confirmed that Hexagonal Architecture works naturally in Go. Go’s implicit interfaces made defining ports and implementing adapters straightforward. Assembling dependencies directly in the main function without a DI framework turned out to be explicit and easy to trace.
Concurrency control taught me the most. I initially protected the entire room list with a single RWMutex, which created a bottleneck. Switching to a two-level strategy — separate locks for room list access and per-room participant access — showed a clear difference in benchmarks. Understanding concurrency in theory and experiencing it through benchmarks were different things entirely.
There are regrets. Test coverage was insufficient. One key benefit of Hexagonal Architecture is easy testing by swapping ports with mocks, but I did not write enough tests to take full advantage of this.
I also configured gRPC but never applied it to inter-service communication. All services currently communicate over REST. gRPC integration remains for the next iteration.
I started because I wanted to work with Kafka directly, and I gained more than that. Architecture design, concurrency control, service decomposition — encountering them together within a single system was a different experience from studying each one separately.