I have quite a fascination with Event Sourcing. I'm not sure why this pattern caught my attention a while ago, but since then I've been using it at work (whenever possible and meaningful) and for some personal projects too.
I have also been blogging about it for some time, as some of you might know.
Now, being a very curious person, I have always been interested in understanding how things work, in software and in (almost) anything else. I have been reading a bunch of very interesting books about data-intensive applications and after a bit I started playing with the idea of writing my own database engine.
Of course this is definitely not an easy feat to pull off, and for sure not one that can be accomplished by just one guy in his bedroom during his spare time.
But at the same time I’m having a lot of fun so far and I am definitely learning a lot :)
So here it is: EvenireDB!
I got the name from the Latin word “Evenire”, present active infinitive of ēveniō, “to happen”.
The basic idea behind Evenire is quite simple: events can be appended to a stream and later on retrieved by providing the stream ID.
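To make the idea concrete, here's a minimal sketch of that append/read model in Python. The names and shapes here are purely illustrative, not EvenireDB's actual API:

```python
from collections import defaultdict

class EventStore:
    """Toy in-memory event store: streams are lists of events keyed by stream ID."""

    def __init__(self):
        self._streams = defaultdict(list)

    def append(self, stream_id, events):
        # events only ever get appended, never updated or deleted
        self._streams[stream_id].extend(events)

    def read(self, stream_id):
        # retrieval is by stream ID; an unknown stream is simply empty
        return list(self._streams[stream_id])

store = EventStore()
store.append("sensor-42", [{"type": "TemperatureRead", "value": 21.5}])
events = store.read("sensor-42")
```

That's the whole contract from a client's point of view: append to a stream, read the stream back.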
Every stream is kept in memory using a local cache, for fast retrieval. A background process takes care of serializing events to a file, one per stream.
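The cache-plus-background-writer split could be sketched roughly like this (again, an illustrative Python toy, not EvenireDB's code: class and file names are made up):

```python
import json
import queue
import threading
from pathlib import Path

class PersistentStore:
    """Writes land in an in-memory cache; a background worker drains a
    queue and appends each event to one file per stream."""

    def __init__(self, data_dir):
        self._dir = Path(data_dir)
        self._dir.mkdir(parents=True, exist_ok=True)
        self._cache = {}
        self._queue = queue.Ueue() if False else queue.Queue()
        self._worker = threading.Thread(target=self._flush_loop, daemon=True)
        self._worker.start()

    def append(self, stream_id, event):
        # the write is visible immediately via the cache...
        self._cache.setdefault(stream_id, []).append(event)
        # ...and handed off to the background serializer
        self._queue.put((stream_id, event))

    def _flush_loop(self):
        while True:
            stream_id, event = self._queue.get()
            # one append-only file per stream
            with open(self._dir / f"{stream_id}.log", "a") as f:
                f.write(json.dumps(event) + "\n")
            self._queue.task_done()

    def read(self, stream_id):
        # reads are served from the in-memory cache, not the file
        return list(self._cache.get(stream_id, []))

import tempfile
store = PersistentStore(tempfile.mkdtemp())
store.append("sensor-1", {"temp": 21.5})
store._queue.join()  # for the demo, wait until the flush has happened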
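The cache-plus-background-writer split could be sketched roughly like this (again, an illustrative Python toy, not EvenireDB's code: class and file names are made up):

```python
import json
import queue
import tempfile
import threading
from pathlib import Path

class PersistentStore:
    """Writes land in an in-memory cache; a background worker drains a
    queue and appends each event to one file per stream."""

    def __init__(self, data_dir):
        self._dir = Path(data_dir)
        self._dir.mkdir(parents=True, exist_ok=True)
        self._cache = {}
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._flush_loop, daemon=True)
        self._worker.start()

    def append(self, stream_id, event):
        # the write is visible immediately via the cache...
        self._cache.setdefault(stream_id, []).append(event)
        # ...and handed off to the background serializer
        self._queue.put((stream_id, event))

    def _flush_loop(self):
        while True:
            stream_id, event = self._queue.get()
            # one append-only file per stream
            with open(self._dir / f"{stream_id}.log", "a") as f:
                f.write(json.dumps(event) + "\n")
            self._queue.task_done()

    def read(self, stream_id):
        # reads are served from the in-memory cache, not the file
        return list(self._cache.get(stream_id, []))

store = PersistentStore(tempfile.mkdtemp())
store.append("sensor-1", {"temp": 21.5})
store._queue.join()  # for the demo, wait until the flush has happened
```

The appeal of this design is that reads never pay disk latency, at the cost of a window where an event is acknowledged but not yet on disk.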
Reads can be done from the very beginning of a stream moving forward or from a specific point. This is the basic scenario, useful when you want to rehydrate the state of an Aggregate.
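Rehydration is just a fold over the stream's events, starting either from position zero or from a known offset. A small sketch (the bank-account events and function names are made up for illustration):

```python
def read_forward(events, start=0):
    """Read a stream from the beginning, or from a specific offset."""
    return events[start:]

def rehydrate(events, apply, initial):
    """Rebuild an aggregate's state by applying each event in order."""
    state = initial
    for event in read_forward(events):
        state = apply(state, event)
    return state

events = [("Deposited", 100), ("Withdrawn", 30), ("Deposited", 10)]

def apply_event(balance, event):
    kind, amount = event
    return balance + amount if kind == "Deposited" else balance - amount

balance = rehydrate(events, apply_event, 0)  # 100 - 30 + 10 = 80
```

Starting from an offset is what makes snapshots possible: restore the last snapshot, then replay only the events after it.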
Another option is to read the events from the end of the stream instead, moving backwards in time. This is particularly interesting, for instance, when you are capturing data from sensors and wish to retrieve the most recent state.
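For the sensor scenario, a backward read lets you grab just the most recent events without replaying the whole stream. Another illustrative sketch, with made-up names:

```python
def read_backward(events, count=None):
    """Walk a stream from the end; optionally stop after `count` events."""
    it = reversed(events)
    return list(it) if count is None else [next(it) for _ in range(count)]

readings = [
    {"ts": 1, "temp": 20.1},
    {"ts": 2, "temp": 20.4},
    {"ts": 3, "temp": 21.0},
]

# most recent reading first, without scanning the full stream
latest = read_backward(readings, count=1)[0]
```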
For those interested, I invite you to take a look at the repository on GitHub and the small sample application I included.
In the coming weeks, I'm planning to write more about how Evenire works internally and the technical choices I've made so far.