Event Sourcing in .NET Core – part 3: broadcasting events

2020, Apr 14    

Hi all! Welcome to the third part of the series about Event Sourcing. This time we’ll see how we can tell other parts of our system that something has happened by broadcasting the events, pushing each of them to a distributed queue.

Last time we discussed how we can leverage EventStore to keep track of the events for every Aggregate Root in a separate stream.

But that is only half of the story: once the events are persisted, how can we query our data back?

Well, in order to query our data we first need to store it in a “query-friendly” way. We’ll get more into the details in the next article of this series.

Now, as we discussed before, one of the prerequisites of Event Sourcing is CQRS. And sooner or later, we’ll need to build the Query Models somehow. So we can leverage the Domain Events for that, by pushing each one to a distributed queue.
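
Just to give this some shape, a Domain Event could look roughly like the sketch below. The names here (IDomainEvent, CustomerCreated, OccurredAt) are illustrative, not necessarily the ones used in the demo repository:

```csharp
using System;

// Illustrative only: a minimal contract for our Domain Events. Every event
// exposes the id of the Aggregate Root it belongs to, so that we can route
// it to the right stream/topic.
public interface IDomainEvent
{
    Guid AggregateId { get; }
    DateTime OccurredAt { get; }
}

// Example event raised by the Customer Aggregate Root.
public class CustomerCreated : IDomainEvent
{
    public CustomerCreated(Guid aggregateId, string name)
    {
        AggregateId = aggregateId;
        Name = name;
        OccurredAt = DateTime.UtcNow;
    }

    public Guid AggregateId { get; }
    public string Name { get; }
    public DateTime OccurredAt { get; }
}
```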

Then all we have to do is write a Background Worker that subscribes to those events and reacts to them by refreshing our Queries DB.
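
In .NET Core terms, that Background Worker is just a BackgroundService registered on the host. Here’s a minimal sketch, assuming a hypothetical IEventConsumer abstraction over the Kafka consumer we’ll meet in a moment:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Hypothetical abstraction over the actual Kafka consumer.
public interface IEventConsumer
{
    Task ConsumeAsync(CancellationToken stoppingToken);
}

public class EventsConsumerWorker : BackgroundService
{
    private readonly IEventConsumer _consumer;

    public EventsConsumerWorker(IEventConsumer consumer)
        => _consumer = consumer;

    // Invoked by the host at startup: keep consuming events
    // until the application shuts down.
    protected override Task ExecuteAsync(CancellationToken stoppingToken)
        => _consumer.ConsumeAsync(stoppingToken);
}
```

Registering it is a one-liner in ConfigureServices: services.AddHostedService<EventsConsumerWorker>().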

If you haven’t pulled the code of the small demo, this is the right time for it 🙂

The EventProducer class is responsible for pushing the events to a Kafka topic. We’ll be using a specific topic for each Aggregate Root type. In our example, we’ll have an events_Customer and an events_Account topic.
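
In broad strokes, and assuming the Confluent.Kafka client, the producer could look something like this sketch; the real class has more plumbing (configuration, error handling, event metadata), so take the names as illustrative:

```csharp
using System.Text.Json;
using System.Threading.Tasks;
using Confluent.Kafka;

public class EventProducer<TAggregate> where TAggregate : class
{
    private readonly IProducer<string, string> _producer;
    private readonly string _topicName;

    public EventProducer(string kafkaConnString)
    {
        // One topic per Aggregate Root type, e.g. "events_Customer".
        _topicName = $"events_{typeof(TAggregate).Name}";

        var config = new ProducerConfig { BootstrapServers = kafkaConnString };
        _producer = new ProducerBuilder<string, string>(config).Build();
    }

    public async Task DispatchAsync(IDomainEvent @event)
    {
        // Using the Aggregate Root id as the message key keeps all the events
        // of the same entity on the same partition, preserving their order.
        var message = new Message<string, string>
        {
            Key = @event.AggregateId.ToString(),
            Value = JsonSerializer.Serialize(@event, @event.GetType())
        };

        await _producer.ProduceAsync(_topicName, message);
    }
}
```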

For the sake of completeness, we could have used two other strategies: either use a single topic for all the events or a topic for every single entity ever created by our system.

To be fair, the latter seems a bit impractical: we would basically end up with an unbounded number of topics, which would make things like indexing and aggregating a lot more complicated.

The former option is actually quite fine, as it also gives a nice way to keep track of all the events generated. It’s a matter of choice; I guess it depends on the specific use cases.

A note on Kafka now: as many of you know already, it is a high-performance, low-latency, scalable and durable log that is used by thousands of companies worldwide and is battle-tested at scale. It’s an excellent tool for implementing Event Sourcing.

We could have used RabbitMQ, but for Event Sourcing I think it’s better to keep most of the logic in Producers and Consumers, and not in the communication channels themselves. This is something that Martin Fowler describes as “smart endpoints and dumb pipes”. Check it out.

Don’t get me wrong, RabbitMQ is an excellent tool, and its routing strategies are awesome. We’ve already talked in the past about how to make good use of them. But performance-wise, Kafka is able to provide a higher throughput, which is always nice. This, of course, comes at the expense of increased complexity at the endpoints, but nothing we cannot handle.

Now, the EventConsumer class is responsible for consuming the events. We will have one consumer per Aggregate Root type, and all of them will belong to the same Consumer Group.

Consumer Groups are a nice way to differentiate Consumers and broadcast the same message to different subscribers. By default, Kafka guarantees at-least-once delivery to a single consumer per group.
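
Here is an illustrative consumer sketch, again assuming Confluent.Kafka (the names are mine, not necessarily the demo’s), which also implements the hypothetical IEventConsumer from the worker sketch above. The interesting bits are the GroupId shared by all our consumers and the topic name derived from the Aggregate Root type, mirroring the producer:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Confluent.Kafka;

public class EventConsumer<TAggregate> : IEventConsumer where TAggregate : class
{
    private readonly IConsumer<string, string> _consumer;

    public EventConsumer(string kafkaConnString)
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = kafkaConnString,
            // All our consumers share the same group: within the group,
            // each message is delivered to a single consumer instance.
            GroupId = "events-consumer-group",
            AutoOffsetReset = AutoOffsetReset.Earliest
        };

        _consumer = new ConsumerBuilder<string, string>(config).Build();

        // Same naming convention as the producer: one topic per Aggregate Root type.
        _consumer.Subscribe($"events_{typeof(TAggregate).Name}");
    }

    // Subscribers (e.g. the Background Worker) react to the raw event payload here.
    public event EventHandler<string> EventReceived;

    public Task ConsumeAsync(CancellationToken stoppingToken)
        => Task.Run(() =>
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                // Blocks until an event arrives (or cancellation is requested),
                // then hands the payload to whoever rebuilds the Query Models.
                var result = _consumer.Consume(stoppingToken);
                EventReceived?.Invoke(this, result.Message.Value);
            }
        }, stoppingToken);
}
```

The Background Worker from before would simply subscribe to EventReceived, deserialize the payload and refresh the Query DB accordingly.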

This basically means that, for example, we could have a Group for rebuilding our Query DB, another one for logging, another one for turning on the coffee machine, and so on.

This covers the event broadcasting part. Next time we’ll see how to make proper use of those events to generate the Query Models.
