Total cost of ownership (TCO) has become a major headache for tech companies whose products and services rely on being fast while staying reliable and accurate.
It’s wonderful if your applications run at the speed of light, but what’s that worth if they have downtime or drive your customer service agents insane with the number of help calls and complaints they’re getting?
Add the increasing importance of being able to crunch oodles of edge and IoT data as it’s being generated, and you have a recipe for, if not financial disaster, then something close to it: layer after complexity-adding layer piled onto your tech stack, not to mention churned users.
And that’s where our recent little experiment comes in.
Many Volt customers are OEM software vendors who want a single platform that runs economically for the smallest of their customers to the biggest. So the question we asked ourselves was: We know Volt can run at 800,000 transactions per second on a 7-node cluster of big servers, but how small can we go for the sake of TCO and tech stack simplicity? Can we run the exact same software on a really simple ARM platform, such as a Raspberry Pi 4B with 8GB of RAM, and still get useful results?
The answer is yes!
We built a 5-node cluster of Volt using Raspberry Pi 4B’s, each with 8GB of RAM and a high-end 128GB CCTV grade microSDXC card. We then ran the same benchmarks we’d previously run on AWS.
In the first benchmark, all data was stored on two different Pis (‘k=1’ in Volt terminology) and we took regular snapshots to the microSDXC card. With a cutoff of 10ms at the 99th percentile latency, we were able to get to 2,159 transactions per second on the Volt Charging benchmark, and 4,834 TPS on the Volt Key Value benchmark.
In the second benchmark, we decided to go for increased survivability. We used ‘k=2’ to store all data on three Pis, and turned on command logging to minimize data loss during a catastrophic power outage.
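For context, both durability setups are controlled through Volt’s deployment file. Here’s a minimal sketch of the k=2 configuration, assuming VoltDB’s standard deployment.xml format; the sitesperhost and snapshot values are illustrative placeholders, not the settings we actually ran with:

```xml
<?xml version="1.0"?>
<!-- Hypothetical sketch of the second benchmark's durability settings.
     Element names follow VoltDB's deployment file format; values are
     illustrative, not the ones used in the test. -->
<deployment>
    <!-- 5 hosts, each partition held on 3 nodes (kfactor=2) -->
    <cluster hostcount="5" sitesperhost="4" kfactor="2"/>
    <!-- command logging bounds data loss on a sudden power outage -->
    <commandlog enabled="true" synchronous="false"/>
    <!-- periodic automatic snapshots, as in the first (k=1) run -->
    <snapshot prefix="auto" frequency="30m" retain="3"/>
</deployment>
```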
Using the same latency cutoff point of 10ms, we saw quite different results with Charging. At 10ms, we were still able to get 392 TPS. If we loosened our standards and accepted a 20ms 99th percentile, we could get 882 TPS. Bear in mind that the average response time was still 4ms. The hard maximum TPS was around 5,000, but that was with average latencies fluctuating from 60ms to 100ms.
For Key Value the story was similar, with a drop to 489 TPS if we wanted to remain under a 99th percentile latency of 10ms, increasing to 1,665 if we relaxed the 99th percentile latency to 20ms.
The hard maximum TPS for Key Value was around 9,000, with 65ms average latency.
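The cutoff methodology above — report the highest TPS at which 99th percentile latency stays under a threshold — can be sketched as follows. This is an illustrative nearest-rank percentile in plain Python, not the benchmark harness’s actual code:

```python
# Illustrative sketch of a p99 latency cutoff check (not Volt's tooling).
def p99(latencies_ms):
    """Nearest-rank 99th percentile of a list of latency samples."""
    ordered = sorted(latencies_ms)
    idx = max(0, int(len(ordered) * 0.99) - 1)
    return ordered[idx]

def meets_cutoff(latencies_ms, cutoff_ms=10.0):
    """True if the run's p99 latency stays under the cutoff."""
    return p99(latencies_ms) <= cutoff_ms
```

Loosening the cutoff (say, from 10ms to 20ms) admits runs with fatter latency tails, which is why the reported TPS roughly doubles when the 99th percentile budget does.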
While these numbers might not seem that impressive at first sight, consider:
- The same application code and Volt software stack can scale to at least 800,000 TPS with no code changes and minimal configuration changes.
- Operating costs are low. Really low. Per day we used 0.83 kWh, which, even at Ireland’s high energy cost of 0.44 Euros per kWh, works out to just 0.37 Euros per day. This is comically affordable when compared to cloud hosting.
- The hardware cost is 817.30 Euros. Assuming we write off and replace all of the hardware after a year, that’s 2.24 Euros a day, or, including energy, roughly 953 Euros a year to operate.
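The cost figures above reduce to simple arithmetic; here is the back-of-envelope calculation, using only the Euro amounts quoted in this post:

```python
# Back-of-envelope TCO check for the figures quoted above.
ENERGY_KWH_PER_DAY = 0.83   # measured daily draw of the 5-Pi cluster
PRICE_PER_KWH = 0.44        # Irish energy price cited in the post, Euros
HARDWARE_COST = 817.30      # total hardware outlay, written off over a year

energy_per_day = round(ENERGY_KWH_PER_DAY * PRICE_PER_KWH, 2)  # -> 0.37
hardware_per_day = round(HARDWARE_COST / 365, 2)               # -> 2.24
annual_cost = (energy_per_day + hardware_per_day) * 365        # ~953 Euros
```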
Now, would you ever actually deploy a real-world system on a bunch of Pis? For a lot of traditional Volt use cases, probably not. But we’re already in conversations with ARM appliance manufacturers, who are starting to see real interest in deploying significant compute hardware at the edge, where traditional software stacks are somewhere between ‘top heavy’ and ‘complete overkill’. For such use cases, a Volt-based stack running on what is effectively an appliance makes a lot of sense.
A decade ago, if you were planning to deploy an OLTP system that could support 800 TPS, your baseline assumption would be hardware costs well into six figures and a non-trivial effort to get up and running. And that’s before you get into the potential problems of physically deploying it in a geographic location that could be challenging or expensive to reach.
Whether or not you’re comfortable running big applications on small hardware, it’s hard to ignore that the current paradigm of over-complicated software stacks is long overdue for disruption by low-cost, appliance-grade hardware, and that this represents a radically reduced hardware TCO opportunity.
Want to learn more about lowering your TCO with Volt? Let’s chat.