At GDC a company called Tactual Labs was claiming to have the world's fastest touch controller. SemiAccurate takes such claims with a large grain of salt, but Tactual did have some really interesting explanations for why and how they came to that conclusion.
The basic idea is that a normal touch controller chain has around 200+ms of lag; you can see this in gaming and demos where the cursor tries to follow your finger and, in fast movements, falls behind. This is compounded by a scan rate that, for most controllers, sits around the screen refresh rate of 60-100Hz. Together they add up to noticeable lag.
Tactual says typical lag is around 15ms for the controller, 15ms for the display, and 200ms for the software stack and parsing. This isn't to say that a typical controller runs at a 5Hz scan rate, just that total lag is around 1/5th of a second. If you could eliminate the software side, that would drop to 30ms, a far more tolerable number. Tactual's tech, which they call Fast Multi-Touch (FMT), claims to drop controller latency to around 40 microseconds.
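The latency budget above is simple addition, but it is worth laying out explicitly. This is a minimal sketch of the figures Tactual cited; the dictionary keys and variable names are illustrative, not any Tactual interface:

```python
# Hypothetical end-to-end latency budget using the figures cited at GDC.
# The breakdown and names are illustrative only.

MS = 1.0
US = 0.001  # one microsecond, expressed in milliseconds

typical_budget = {
    "touch controller": 15 * MS,
    "display": 15 * MS,
    "software stack / parsing": 200 * MS,
}

total = sum(typical_budget.values())  # 230 ms, roughly 1/5th of a second
without_software = total - typical_budget["software stack / parsing"]  # 30 ms

print(f"typical end-to-end lag: {total:.0f} ms")
print(f"without the software stack: {without_software:.0f} ms")
print(f"claimed FMT controller latency: {40 * US:.3f} ms")  # 0.040 ms
```

The point of the arithmetic: the controller and display together are only ~30ms of the ~230ms total, so the software stack is the dominant term by a wide margin.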
How do they do this? They start with a scan rate of up to 4000Hz and don't step through the touch grid sequentially; every point or line is scanned each cycle. This does take a bit more power, but Tactual claims it isn't much of an adder compared to traditional methods. Of course energy use varies with scan rate, so you can always dial it down on the fly if needed.
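A back-of-the-envelope comparison shows why not stepping through the grid matters. The row count and per-cycle timing below are made-up numbers for illustration, not measurements of any real controller:

```python
# Sequential row-by-row scanning versus scanning every line each cycle.
# All timing numbers here are assumptions for illustration.

def sequential_full_frame_ms(rows, per_row_us):
    """Traditional controllers step through the grid one row at a time,
    so a full frame costs rows * per-row time."""
    return rows * per_row_us / 1000.0

def parallel_full_frame_ms(per_cycle_us):
    """If every line is driven and read each cycle, one cycle covers
    the whole frame."""
    return per_cycle_us / 1000.0

# With a hypothetical 40-row sensor and 250us per cycle:
print(sequential_full_frame_ms(rows=40, per_row_us=250))  # 10.0 ms -> ~100Hz
print(parallel_full_frame_ms(per_cycle_us=250))           # 0.25 ms -> 4kHz
```

Under these assumed numbers, the same per-cycle cost yields a 40x higher frame rate when the grid is not serialized, which is the kind of gap between ~100Hz and 4000Hz the article describes.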
From there their secret sauce is eliminating the software and other sources of the 200ms of overhead the stack contains. Given that their claim is far less than 1ms it looks like they have a solution, but that wasn't detailed at GDC. Part of it involves a controller that can run a few algorithms locally to more tightly close the loop with the sensor, but there are undoubtedly other tricks. The idea is to eventually offload some of the gesture recognition to the touch controller itself to both drop latency and free up the OS to do other things. Tactual isn't there yet but says controllers that can implement this are planned.
Better yet the Tactual controller is only a controller; it works with most touch sensors on the market today. They claim ITO compatibility along with most up-and-coming technologies too. Since they don't have a sensor of their own and did have working demos, it does look like it works with at least a few off-the-shelf sensors.
That brings up the question of why anyone would want a 4KHz scan rate for a device with a maximum refresh rate in the double digits. The answer there is easy: according to their tests you can perceive as little as 2ms of lag, so faster is better. But if the screen only refreshes every 15-20ms, what does scanning far faster than that buy you?
In short it buys you ready data every time your OS or app needs it. With a <100Hz scan rate, software and OSes need mechanisms to wait if the data isn't there when needed. This takes time and wreaks havoc with power management mechanisms, not to mention interrupt overhead. If the data is guaranteed to be there, the software stack can be vastly simplified and just read it instead.
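The guarantee above comes down to how stale the freshest sample can be when software reads at an arbitrary moment. Here is a tiny sketch of that relationship; `worst_case_staleness_ms` is a made-up helper, not anything Tactual ships:

```python
# How old can the freshest sample be when a reader arrives at a random
# time? At most one scan period. Illustrative only; not Tactual's API.

def worst_case_staleness_ms(scan_hz):
    """A reader arriving at an arbitrary moment sees data that is at most
    one scan period old."""
    return 1000.0 / scan_hz

# At 100Hz the stack may need to block or take an interrupt for up to
# 10ms waiting for fresh data; at 4kHz the freshest sample is never more
# than 0.25ms old, so the stack can simply read it and move on.
print(worst_case_staleness_ms(100))   # 10.0 ms
print(worst_case_staleness_ms(4000))  # 0.25 ms
```

At 4KHz the worst-case staleness is well under the ~2ms perception threshold from Tactual's tests, which is why a read-when-needed model becomes viable.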
Better than that, with a low-rate stream of touch input data you can lose motion between frames or samples. If you move your finger across a screen really fast, it isn't hard to make a phone or tablet lose track of what is going on. With the scan rates Tactual is capable of, you can either sample much faster or have the controller interpret things for you. The controller could, for example, report points at a much lower rate than it samples, but add hints or metadata that an event was missed, and what it was.
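The hinting idea can be sketched as a toy downsampler: sample at the full rate, report at a lower rate, and attach the skipped points as metadata instead of dropping them. Everything here is hypothetical, not Tactual's actual interface:

```python
# Toy sketch: report touch points at a fraction of the scan rate, but
# carry the skipped samples as "missed" metadata with each report.
# Function and field names are invented for illustration.

def downsample_with_hints(samples, keep_every):
    """samples: list of (x, y) points captured at the full scan rate.
    Returns low-rate reports, each carrying the points skipped since
    the previous report."""
    reports = []
    skipped = []
    for i, point in enumerate(samples):
        if i % keep_every == 0:
            reports.append({"point": point, "missed": skipped})
            skipped = []
        else:
            skipped.append(point)
    return reports

# A fast swipe sampled 8 times but reported every 4th sample: each
# report keeps the three intermediate points instead of losing them.
swipe = [(i, 2 * i) for i in range(8)]
for report in downsample_with_hints(swipe, keep_every=4):
    print(report)
```

The OS still gets a low-rate stream it can handle, but a fast flick that would otherwise fall between samples survives as metadata the gesture recognizer can use.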
This of course means that the OS and device would need a substantially revised software stack to properly use the higher resolution data. If done right it could simplify the entire input path, and less complexity usually means more power savings. If offloading can be done on the controller, the main CPU only needs to pull pre-parsed gestures, so it can sample far less frequently and get more relevant data at the same time. It would be a hardware gesture recognition engine rather than a mere touch controller.
In essence the Tactual Labs method has a lot of potential that goes far beyond just faster inputs. The knock-on effects of the higher scan rates are power savings and offloading of intelligence to a dedicated device for better quality information. None of this is completely done yet, but the first generation controller is working and the demos at GDC do support their claims. It will be interesting to see what happens from here; the potential upside is pretty compelling.S|A
Latest posts by Charlie Demerjian