Just a short blog post today to point folks to a very well-written article on database consistency models titled On the Futility of Custom Consistency Models, posted on the Hacking, Distributed blog.
This blog post does a very nice job of discussing the current trend toward more and varied consistency models for database systems. The general idea is that relaxing consistency can eliminate some of the overhead of ensuring it, at least for some applications. And, sure, there is some validity there. But how many different consistency models do we really need?
Also, the notion that relational/SQL DBMS products do not allow for flexible consistency is absurd. I’ve written about isolation levels in this blog before: you can adjust how locking behaves, and therefore the consistency of query results, by adjusting the isolation level. For example, an uncommitted read (or “dirty read”) can be used to eliminate read locks in DB2. Applications using dirty reads avoid lock waits and are therefore more efficient, but they might return uncommitted, and potentially incorrect, data to the application. For some use cases this might be fine, but I sure wouldn’t want my bank to use dirty reads on my financial transactions!
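To make this concrete, here is a small sketch of how a DB2 application can request an uncommitted read, either for a single query or for the whole session (the table and column names here are just placeholders for illustration):

```sql
-- Request uncommitted read (UR) isolation for this one query.
-- No read locks are acquired, so the query will not block behind
-- writers -- but it may see changes that are later rolled back.
SELECT account_id, balance
  FROM account          -- hypothetical table
  WITH UR;

-- Or change the default isolation level for the current session:
SET CURRENT ISOLATION = UR;
```

The `WITH UR` clause is the statement-level escape hatch; the special register sets the default for subsequent dynamic SQL, so you can mix strict and relaxed reads within the same application.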
So the next time you start reading about eventual consistency and how revolutionary it is, step back and think about what you are reading. There is merit to it for some use cases (e.g. my seller account on amazon.com), but not for most (e.g. whenever the data absolutely must be accurate every time you read it).