NoSQL, RDBMS, and Contention for Shared Data: Part 2

December 21, 2017

Previously, in part one of this blog series, I explained the problems with two methods of database interaction. Traditional JDBC-style interactions mean that locking and latching take up ~30% of compute time, and this overhead skyrockets when you try to use a distributed system. A NoSQL key-value interaction allows for better scalability, but for non-trivial use cases there are problems with data access control, complexity, and accuracy. This brings us to our third approach:

Method #3: Move the problem to the data, do what needs to be done, send an answer back.

In the Hadoop ecosystem it’s taken for granted that moving computation is cheaper than moving data, and the same principle can be applied to transactional work. Both methods we’ve looked at incur significant costs because multiple clients want access to the same data at the same instant in time, and the resulting costs of preventing conflicts (with SQL + Sessions) or cleaning up conflicts (with a KV store) escalate exponentially as throughput and complexity increase. By doing all the ‘thinking’ on the server, we reduce the overhead of simultaneous access, because we aren’t moving data across the wire all the time. So the logical thing to do is use some form of stored procedural logic on the server.
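
To make the contrast concrete, here is a minimal client-side sketch using plain JDBC. The connection URL, the debit_account procedure, and its parameters are all hypothetical; the point is the shape of the interaction: one round trip, in which the server reads, applies the business rule, writes, and returns a clear answer.

```java
import java.math.BigDecimal;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class DebitClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical JDBC URL; substitute your vendor's driver and database.
        try (Connection conn = DriverManager.getConnection("jdbc:vendor://localhost/demo")) {
            // One network round trip: the hypothetical debit_account procedure
            // reads the balance, applies the rule, and writes the result,
            // all on the server, then returns a yes/no answer.
            try (CallableStatement call = conn.prepareCall("{? = call debit_account(?, ?)}")) {
                call.registerOutParameter(1, Types.BOOLEAN);
                call.setLong(2, 42L);                           // account id
                call.setBigDecimal(3, new BigDecimal("19.99")); // amount to debit
                call.execute();
                System.out.println("debit approved = " + call.getBoolean(1));
            }
        }
    }
}
```

Compare that with the chatty alternative: SELECT the balance, ship it to the client, decide there, UPDATE, then COMMIT, with locks held across every one of those round trips.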

Stored Procedures? Seriously?

Stored procedures are in fact the most efficient way to handle high-volume OLTP workloads.

Developers have traditionally disliked them because they made development more complicated and required more interaction with DBAs. They also usually come with their own peculiar language that somehow manages to be both complicated and limited, such as PL/SQL, one of the few computer languages in which a “Hello World” program is non-trivial.

But their real problem is that legacy RDBMS implementations allow uncommitted transactions to sit open while the client decides when, or if, a transaction should be committed. As we’ve already explained, this creates massive server-side overhead as well as resource contention issues. But if you make three changes, you get some fairly astonishing results:

  • We can eliminate most ‘locking and latching’ by making it a rule that every call to the database must commit or roll back by the end of the call. This creates issues for some use cases, but it works really well for the vast majority of OLTP/IoT use cases: the trip to the database exists because somebody, or (in the case of IoT) something, needs a clear answer to do their job, rather than wanting to mull things over and decide to commit or roll back on the client side at some arbitrary point in the future. (A sketch of such a run-to-completion procedure appears directly after this list.)
  • If we force requests to execute one at a time, we design contention for resources out of the system. The legacy alternative leaves us with residual contention among requests that are ‘live’ on the server at the same time; even with traditional stored procedure approaches, this will sooner or later become a problem, as figuring out which resource is supposed to be locked in which order is trickier than it looks. But by partitioning the data and then forcing each partition to execute serially, there is no longer any contention for resources. Bear in mind that if nobody is allowed to hold resources between calls, and each call is the only thing touching its partition of data, there is no reason why it shouldn’t run like greased lightning: the database doesn’t have to devote any time to keeping track of what other ‘sessions’ are seeing, and the client doesn’t have to devote resources to dealing with requests overwriting each other or needing to be coordinated.
  • The final step is to partition the data by CPU core, which means you can have a single-threaded process running on each core handling request after request, with zero CPU cycles spent figuring out what to do next. This is what we’ve done at Volt Active Data. On a consumer-grade desktop I can easily get between 5,000 and 9,000 TPS per core, with each transaction updating multiple tables and being 100% ACID. (The toy scheduler sketched below illustrates the routing idea.)
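
Here is a rough sketch of what such a run-to-completion procedure can look like, written against the Java stored procedure style that Volt’s engine exposes (classes from the org.voltdb package). The accounts table, its partitioning column, and the business rule are invented for illustration; the essential property is that the whole transaction runs server-side and has committed or rolled back by the time run() returns.

```java
import org.voltdb.SQLStmt;
import org.voltdb.VoltProcedure;
import org.voltdb.VoltTable;

// Assumes a hypothetical table ACCOUNTS(ACCOUNT_ID, BALANCE) partitioned
// on ACCOUNT_ID, so each invocation runs serially within one partition.
public class DebitAccount extends VoltProcedure {

    public final SQLStmt getBalance = new SQLStmt(
        "SELECT balance FROM accounts WHERE account_id = ?;");
    public final SQLStmt applyDebit = new SQLStmt(
        "UPDATE accounts SET balance = balance - ? WHERE account_id = ?;");

    public long run(long accountId, long amountCents) {
        voltQueueSQL(getBalance, accountId);
        VoltTable[] results = voltExecuteSQL();
        if (results[0].getRowCount() == 0) {
            return -1; // unknown account
        }
        long balance = results[0].fetchRow(0).getLong(0);
        if (balance < amountCents) {
            return 0; // declined: insufficient funds
        }
        voltQueueSQL(applyDebit, amountCents, accountId);
        voltExecuteSQL(true); // final batch; the transaction commits on return
        return 1; // approved
    }
}
```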

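And to illustrate the per-core serial execution idea itself, independent of any particular database, here is a toy Java scheduler: one single-threaded executor per core, with each request routed to the partition that owns its key. This is not Volt’s actual implementation, just a sketch of the principle that, within a partition, requests run strictly one after another, so no locks on the partition’s data are ever needed.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PartitionedScheduler {
    private final ExecutorService[] partitions;

    public PartitionedScheduler(int cores) {
        // One single-threaded queue per core: within a partition,
        // requests execute strictly one after another.
        partitions = new ExecutorService[cores];
        for (int i = 0; i < cores; i++) {
            partitions[i] = Executors.newSingleThreadExecutor();
        }
    }

    /** Route a request to the partition that owns its key and run it there. */
    public void submit(long key, Runnable transaction) {
        int p = (int) Math.floorMod(key, (long) partitions.length);
        partitions[p].execute(transaction);
    }

    public void shutdown() {
        for (ExecutorService p : partitions) p.shutdown();
    }

    public static void main(String[] args) {
        PartitionedScheduler db =
            new PartitionedScheduler(Runtime.getRuntime().availableProcessors());
        for (long id = 0; id < 8; id++) {
            final long accountId = id;
            db.submit(accountId, () -> System.out.println(
                "account " + accountId + " handled serially on its partition"));
        }
        db.shutdown();
    }
}
```
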
Conclusion

The legacy RDBMS architecture was nailed down in the 1980s, and since then dramatic advances in both computer architecture and anticipated transaction volumes have completely changed the world we now work in. As a consequence, we need to recognize that the real constraint for high-performance, write-centric OLTP systems is not RAM, CPU, or network, but contention for shared data. Volt Active Data is built around this assumption. To learn more about the differences in modern databases and how to pick the right one for your application, view our pre-recorded webinar “The Changing Landscape of the Database Industry.” Or, read our blog and checklist: 8 tips for evaluating a database in 2018.
