Christopher Ho

A Note on Hard Real-Time

Updated: Jun 27, 2019

One of the problems in the sciences and engineering, really in every field, is the overloading of terms and inconsistency in notation. Not only do some fields have reserved letters and keywords (electrical engineers, for example, use j for the imaginary unit instead of i), but terms also mean slightly different things depending on who you ask.


If I were infinitely powerful and infinitely smart, I would reshape the world so that all the notation and keywords are consistent and unique. Granted, with infinite power, there are many other things I could, or should, be doing, but I figure getting everyone’s notation straight is a worthy first step.


But I’m not infinitely powerful, and not even particularly smart, so my capabilities are limited to writing this post.


Real-Time, Formally


Coming from a semi-academic background, and as a current member of the embedded and safety-critical application community, I have found that, distressingly, there are a couple of definitions of real-time.


Actually, there are really just two: the hand-wavy definition that most people subscribe to (rightfully so, even), and the very rigorous and well-defined term that those in embedded systems use.


The generic term, real-time, is usually used to refer to an application that can handle inputs online. That is, the application runs fast enough (on average) to reasonably handle a live input of data. For camera-based processing, this is usually around 20 FPS (50 ms per frame) or faster.


Real-time in the embedded sense means that an application can, or should, complete computation within a fixed deadline. This means that finishing computation in 50 ms on average doesn’t cut it; rather, I have to (or should) finish within 50 ms every time, which is a much more rigorous definition than the one in general use.
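
To make the distinction concrete, here is a minimal C++ sketch (process_one_input is a hypothetical stand-in for the real computation) that checks every single iteration against the 50 ms deadline, rather than the average:

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>

// Hypothetical stand-in for one iteration of the real computation.
void process_one_input()
{
  // ... the actual work would go here ...
}

int main()
{
  using clock = std::chrono::steady_clock;
  constexpr std::chrono::milliseconds deadline{50};

  std::size_t misses = 0U;
  for (int i = 0; i < 10000; ++i) {
    const auto start = clock::now();
    process_one_input();
    const auto elapsed = clock::now() - start;
    if (elapsed > deadline) {
      ++misses;  // a hard real-time system treats even one miss as a failure
    }
  }
  std::printf("deadline misses: %zu\n", misses);
  return 0;
}
```

An average-based notion of real-time only asks that the mean of those elapsed times stays under 50 ms; the embedded notion asks that the miss counter stays at zero.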


And then there are the terms hard, soft, and firm real-time. These broadly specify how tolerant the system is of missing a deadline. The prefixes correspond to being not at all tolerant, occasionally tolerant, and somewhere in between, respectively. Wikipedia probably explains it better than I do.


Building Real-Time Applications


How you make a real-time application is where the money is made. The principles are simple, though: your algorithm’s core path must have an upper-bounded number of operations (e.g. a bounded for-loop), and each of those operations must have a finite and upper-bounded runtime.
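
As a minimal sketch of the first principle (the names and sizes here are illustrative, not from any particular framework), a loop over a fixed-capacity buffer has an iteration count that can never exceed a compile-time bound:

```cpp
#include <array>
#include <cstddef>

// A hypothetical filter over a fixed-capacity buffer. MAX_POINTS bounds the
// loop at compile time, so the number of iterations can never grow with the
// input: the core path has a fixed maximum number of operations.
constexpr std::size_t MAX_POINTS = 1024U;

float bounded_sum(const std::array<float, MAX_POINTS> & points, std::size_t count)
{
  // Clamp rather than trust the caller: the loop can never exceed MAX_POINTS.
  const std::size_t n = (count < MAX_POINTS) ? count : MAX_POINTS;
  float sum = 0.0F;
  for (std::size_t i = 0U; i < n; ++i) {  // bounded for-loop
    sum += points[i];
  }
  return sum;
}
```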


Achieving this in practice is a bit involved.


There’s the algorithms side, which I alluded to before. You first have to make sure you have a fixed maximum number of operations. This is doable if you know about the requirement before choosing your algorithm. Then you need to make sure each of those operations is bounded in time and non-blocking. The former is achievable through good and disciplined engineering, i.e. making your application static. The latter requires getting closer to the metal and a solid framework that makes sure all potentially blocking calls can time out.
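
For the blocking side, one common pattern, sketched here with standard C++ primitives rather than any specific framework’s API, is to put a time budget on every potentially blocking wait so the real-time path can never stall indefinitely:

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>

// Hypothetical sketch: waiting on new input with a timeout. wait_for returns
// false if no data arrived within the budget, letting the caller degrade
// gracefully (e.g. reuse the previous result) instead of blocking forever.
std::mutex mtx;
std::condition_variable cv;
bool data_ready = false;

bool wait_for_input(std::chrono::milliseconds budget)
{
  std::unique_lock<std::mutex> lock(mtx);
  return cv.wait_for(lock, budget, [] { return data_ready; });
}
```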


This leads me to the OS side of real-time. Having a finite number of operations, each of which is bounded in clock cycles, is great, but insufficient if you cannot guarantee your application will get that CPU time at a fixed rate. Generally speaking, this doesn’t happen on stock operating systems such as Ubuntu or Windows. You can’t really say how much CPU time your application will get, since the OS might arbitrarily decide to spend more time rendering your favorite GIF rather than computing your mission-critical result. There are, however, a few kernels and operating systems that do provide these guarantees of CPU time.


Real-time operating systems, such as Linux RT-PREEMPT, QNX, eSOL, and Wind River, can provide these guarantees. They do so with a real-time scheduling policy that obeys priority, such as FIFO or round-robin scheduling, or one that additionally takes timing characteristics into account, such as deadline-based or adaptive scheduling. These policies ensure that, over a given time period, every thread and process running on the kernel receives a certain number of CPU cycles, according to its priority.
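
On Linux with RT-PREEMPT, for example, a thread can opt into the FIFO policy through the standard POSIX interface. A minimal sketch, where the priority value is an arbitrary illustration and appropriate privileges are assumed:

```cpp
#include <pthread.h>
#include <sched.h>
#include <sys/mman.h>
#include <cstdio>

// Minimal sketch for Linux (e.g. with the RT-PREEMPT patch): put the calling
// thread on the SCHED_FIFO real-time policy at a chosen priority, and lock
// all pages into RAM so page faults cannot introduce unbounded latency.
// Requires appropriate privileges (e.g. CAP_SYS_NICE).
bool make_current_thread_realtime(int priority)
{
  sched_param param{};
  param.sched_priority = priority;  // e.g. 80; an example, not a recommendation
  const int rc = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
  if (rc != 0) {
    std::fprintf(stderr, "pthread_setschedparam failed: %d\n", rc);
    return false;
  }
  if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
    std::perror("mlockall");
    return false;
  }
  return true;
}
```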


There is no free lunch, of course. Making a system real-time comes with a cost: the additional complexity of configuring the system for real-time operation, and a performance hit. The complexity comes because you now need to assign an appropriate priority to every process and thread in your system, and these priorities need to be mapped to your system’s requirements. If you don’t get this right, your high-priority threads might get too much CPU time, wasting work, while your lower-priority threads might not get a sufficient number of cycles, creating a bottleneck. What’s more, even if you tune everything perfectly, you’ll typically still take a small global performance hit from some of the architectural compromises (such as not using cache memory) inherent to timing-deterministic systems.


So that’s real-time from an embedded perspective. Like most things, it’s challenging but not impossible, requiring changes at the kernel, OS, framework, and application levels. I can speak competently on this topic because at Apex.AI we’ve been hard at work building a hard real-time framework that addresses these challenges and abstracts the complexity away from the developer, along with hard real-time applications and algorithms on top of it.
