Rambus is back, and they are making much more than light bulbs: their new Binary Pixel tech is aimed at image sensors. The idea is simple enough, read an analog sensor in a more digital fashion to increase its dynamic range.
Binary Pixel is not a single technology; it uses multiple methods to increase the dynamic range of a silicon image sensor by a claimed 15x, with more possible. If you know how digital cameras work, they measure the light that hits each pixel while the shutter is open, across three discrete color ranges. If you use a high shutter speed, you limit range on the low end and shadows are lost. Open it too long and the highlights clip and everything washes out. Momma bear, err, middle ground timings have a little of both problems but are usually the best compromise.
This is the problem in a single graph
There are solutions to all of this in one form or another, with HDR being the most common purported fix. This takes three pictures at fast, middle, and slow shutter speeds, then combines them into a single image with a much higher dynamic range than is possible from a single exposure. If the scene changes at all between the three discrete shots, it doesn't work out so well, which relegates the technique to still images and some types of landscapes. Other techniques have similar drawbacks.
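For the curious, here is roughly what that bracketing trick looks like in code. This is a generic exposure-fusion sketch, not anything Rambus does, and the hat-shaped weighting below is just one common choice among many.

```python
import numpy as np

# A generic exposure-fusion sketch, not the Rambus method. The hat-shaped
# weighting is one common choice; real HDR pipelines are much fancier.
def merge_brackets(fast, mid, slow):
    """Merge three exposures (float arrays scaled 0..1) into one image."""
    stack = np.stack([fast, mid, slow])
    # Trust mid-tone pixels, distrust pixels near black or near clipping.
    weights = np.clip(1.0 - 2.0 * np.abs(stack - 0.5), 1e-3, None)
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)
```

If anything moves between the three frames, the mismatched pixels get blended anyway and show up as ghosting, which is exactly why the technique is limited to static scenes.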
That is what Binary Pixel is aiming to change, and Rambus does it with their own image sensor design. The main idea is to effectively supersample the sensor, but in a unique and intelligent way. Each pixel on a sensor can be seen as a bucket that incoming photons fill with electrons. This charge is read as an analog signal and converted to a digital value for processing. If the shutter is opened and shut quickly, the absolute number of photons to hit any pixel is small, and the resultant image has a small dynamic range because you are measuring how close to empty each bucket is. Leave it open too long and the range is again small because the buckets are all much closer to full. One extreme isn't a fix for the other, and the middle is better, but it still falls short of the range a human eye can perceive.
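To make the bucket analogy concrete, here is a toy model of a conventional linear pixel. The full-well and noise numbers are invented purely for illustration; the point is that no single shutter time keeps a dim pixel above the noise floor and a bright one below the brim of the bucket.

```python
# A toy model of the bucket analogy: a linear pixel that clips at its
# full-well capacity. All numbers are invented for illustration.
FULL_WELL = 1000     # electrons the bucket holds before it clips (assumed)
NOISE_FLOOR = 5      # electrons of read noise drowning faint signals (assumed)

def expose(photon_rate, shutter_time):
    """Electrons collected by one pixel: linear until the bucket fills."""
    collected = photon_rate * shutter_time
    return min(collected, FULL_WELL)

# With a 1000x spread between dim and bright, no shutter time works:
# short shutters bury the dim pixel in noise, long ones clip the bright one.
for t in (0.001, 0.01, 0.1):
    dark, bright = expose(500, t), expose(500_000, t)
    print(f"t={t}s  dark={dark:.1f}e-  bright={bright:.0f}e-")
```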
What Rambus is doing with Binary Pixel is simple enough to explain: they take a conventional sensor and sample it at the pixel level after a given time period that is shorter than the shutter length. Each pixel is read in a binary fashion, zero or one, and then the cycle is repeated. One advance is that the second sample period differs from the first and can be algorithmically determined by the controller. Going back to the bucket analogy, at the end of each sample window all the buckets are emptied, and each pixel is assigned a zero or one based on some pre-programmed threshold. Reset the timer to a different number and repeat. And repeat. And repeat.
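In rough Python, the sampling loop looks something like the sketch below. The real thresholds and window schedule are Rambus trade secrets, so every constant here is a guess made purely to show the mechanism.

```python
import random

# Every constant below is a guess; the real thresholds and window schedule
# are Rambus trade secrets. This only illustrates the mechanism.
THRESHOLD = 50                # electrons that count as a "1" (assumed)
WINDOWS = [0.016, 0.008, 0.004, 0.002, 0.001, 0.0005]  # sample periods, seconds (assumed)

def binary_expose(photon_rate):
    """One 0/1 decision per sample window; the bucket is emptied after each read."""
    bits = []
    for w in WINDOWS:
        mean = photon_rate * w                       # expected electrons this window
        collected = random.gauss(mean, mean ** 0.5)  # crude shot-noise model
        bits.append(1 if collected >= THRESHOLD else 0)
    return bits

print(binary_expose(2_000))      # dim pixel: mostly zeros
print(binary_expose(2_000_000))  # bright pixel: all ones
```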
The exact values for shifting the time window, how initial time frames are determined, the cutoff for a zero or one, and so on are trade secrets, but the idea is simple enough. At the end, each pixel's samples are added up to determine the value assigned to that pixel. In theory you can't really botch a picture, but ham-handed photographers like this author will find a way to make things go wrong; we are a clever bunch. Most people will 'only' get a 15x improvement in dynamic range, with more promised in future revisions as the tech is refined.
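Continuing the hypothetical sketch from above, that final addition is just a sum of the per-window bits. Because the windows span wildly different lengths, the count keeps climbing with scene brightness long after a single linear read would have clipped, which is where the extra range comes from.

```python
# Continuing the hypothetical sketch: sum the per-window bits into the
# pixel's final value. The count climbs smoothly across a far wider
# brightness range than one linear read could cover.
def binary_pixel_value(photon_rate):
    return sum(binary_expose(photon_rate))

for rate in (1_000, 10_000, 30_000, 100_000, 300_000):
    print(f"rate={rate:>7}  value={binary_pixel_value(rate)}/{len(WINDOWS)}")
```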
Binary Pixel ends up as a combination of time-based oversampling with elements of spatial oversampling, but in reality isn't quite either one. If this sounds terribly complex and painful to the die area of a sensor, your initial estimate would be wrong. The tech only requires a few more control lines and a little more processing power off the sensor; the sensor itself is not all that different from a conventional one. You can see the bare die and the sensor test chip below; it is only a six-element design for basic technology development and testing. According to Rambus, it works like they hoped it would.
The sensor and a test chip made with it
The aim is to make a sensor that is compatible with current phone and camera SoCs just like a conventional sensor, but with more than an order of magnitude higher dynamic range. If their test results are to be believed, they did it. What you will end up buying when the consumer versions come out is not completely nailed down though; it is a work in progress. The current thinking is to have the time windows change dynamically for the whole sensor, but it could be done on a per-pixel basis as well. Similarly, the algorithms that control how the windows shift are still being refined; this is really a technology in a late development stage. As Rambus refines the algorithms, the dynamic range should grow a bit more, which is where the promised 1-2x of added range comes from.
If everything works out well, future cameras using Binary Pixel tech should do HDR-level shots or better with every picture. The die size and complexity adders for the tech shouldn't be all that high either; the extra control lines are not very complex to add, nor is a bit more image processing and temporary storage capability. As long as the results are what Rambus claims, it looks like a winner.S|A