
Measuring Real-Time


Editor’s note: This blog post first appeared on the InfoWorld blog on August 28th.

The term “real-time” is thrown around a lot these days, but it’s a buzzword that is often surrounded by ambiguity. Every day, it seems a new product is announcing its real-time capability. But how is real-time measured? It certainly isn’t measured in days (or even hours)—so is it measured in:

Nanoseconds?
Microseconds?
Milliseconds?
Seconds?
Minutes?
All of the above?

Everyone, from developers to corporate software marketing departments to consumers, seems to have a slightly different answer. So let’s explore the question: what does “real-time” really mean?


Let’s begin with the dictionary definition:

Real-time—“of or relating to applications in which the computer must respond as rapidly as required by the user or necessitated by the process being controlled.”

While this definition continues the subjective theme, it does confirm that the correct answer to how real-time is measured is “All of the above.” The meaning of the term real-time varies with application need: the acceptable latency, the amount of time the computer (the application) may take to respond, is simply as fast as the problem domain requires.

Rather than look at applications and determine if they are real-time or not, let’s examine various time units and understand the types of real-time applications that require those response rates:

Nanoseconds: A nanosecond (ns) is one billionth of a second. Admiral Grace Hopper famously explained a nanosecond using an 11.8-inch wire, as that is the maximum distance electricity can travel in one nanosecond. This quick video of Hopper is worth watching if you haven’t yet seen it.

With this in mind, it is easy to see why nanoseconds are the unit used to measure the speed of hardware, such as the time it takes to access computer memory. Worrying about nanosecond latency is at the bleeding edge of real-time computing and is primarily driven by innovation with hardware and networking technology.
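
To make that concrete, here is a quick back-of-the-envelope check, a minimal sketch in Python using the standard vacuum speed of light, showing why Hopper’s wire comes out to roughly 11.8 inches:

```python
# How far can a signal travel in one nanosecond?
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # speed of light in a vacuum

distance_m = SPEED_OF_LIGHT_M_PER_S * 1e-9  # metres covered in one nanosecond
distance_in = distance_m / 0.0254           # metres -> inches

print(f"~{distance_in:.1f} inches per nanosecond")  # prints ~11.8 inches
```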

Microseconds: A microsecond (µs) is one millionth of a second. The classic real-time applications that worry about microsecond latency are high-frequency trading (HFT) platforms. Financial trading firms invest large sums of money in the latest networking and computer hardware to eliminate microseconds of latency within their trading platforms. A trading decision has to be made in as few microseconds as possible in order to execute ahead of the competition and thus maximize profit.

Milliseconds: A millisecond (ms) is one one-thousandth of a second. To put this in context, a human eye blink takes 100 to 400 milliseconds, or roughly one-tenth to four-tenths of a second. Network performance is often measured in milliseconds. Real-time applications that worry about latency in milliseconds include telecom applications, digital ad networks, and self-driving cars. The decision about which ad is optimal to display, or whether there is enough balance to let a cellphone call proceed, must be made on the order of 100 milliseconds.
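
To show what a millisecond budget looks like in practice, here is a minimal timing sketch using Python’s nanosecond-resolution clock; the choose_ad function and the 100 ms threshold are illustrative placeholders, not any particular platform’s API:

```python
import time

BUDGET_MS = 100  # illustrative budget, echoing the ad-serving example above

def choose_ad(request):
    # Placeholder for the real decision logic (feature lookup, auction, ranking)
    return {"ad_id": 42}

start_ns = time.perf_counter_ns()
ad = choose_ad({"user": "example"})
elapsed_ms = (time.perf_counter_ns() - start_ns) / 1_000_000

verdict = "within" if elapsed_ms <= BUDGET_MS else "over"
print(f"Decision took {elapsed_ms:.3f} ms ({verdict} the {BUDGET_MS} ms budget)")
```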

Seconds: We’re starting to slow down here. We’re still in the realm of real-time, but we are now venturing into near real-time. Sub-minute processing time is often more than good enough for applications that process log files, compute analytics on event streams, or send alerts. These real-time applications drive actions and decisions made at human-reaction speed rather than machine speed. Shaving another tenth of a second (100 ms) off the response time may be costly to implement and add no value for the application.

Minutes: Waiting minutes may seem like an eternity to a high-frequency trading application. However, consider package shipment and delivery alerts or ecommerce stock availability notifications. Those applications certainly feel real-time to me—the fact that I receive a “delivery notification” text message within 10 minutes of a delivery made to my home is very satisfying.

Finally, though I discounted it up front, let’s briefly consider hours and days. While this time range is generally not regarded as true real-time, if you’ve been getting finance or sales reports on a monthly, weekly, or daily basis, and now you can get up-to-date reports every hour, that may be as real-time as you need. The modernization of these applications is often described as upgrading from “batch” to “real-time.”

The old proverb is correct: Time is money. Throughout history, the ability to make real-time decisions has meant the difference between life and death, between profit and loss. The value of time has never been higher and therefore speed has never been more critical to business applications of all kinds.

Luckily, we live in an age where fast computing is very affordable and making decisions in real-time is economically achievable for most applications. The first step is determining the appropriate definition of real-time that aligns with the needs of your business applications.

Adrian Scholes