HDR Helps Lens and Camera Systems Overcome Poor Lighting

HDR is a background technology that helps lens and camera systems overcome poor lighting in machine vision, QA, and factory automation applications. While the human eye can easily adjust to low lighting, cameras cannot. When did your machine vision application ever have perfect lighting like that of a supermodel photo shoot?

HDR technology can overcome dim or indirect lighting in a machine vision application. It essentially expands the exposure range of the lens and camera: within milliseconds, over-exposed shots are captured along with under-exposed ones, then combined into a final output.
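To make that bracket-and-merge idea concrete, here is a minimal sketch using OpenCV's Mertens exposure fusion in Python. The file names are hypothetical, and a real machine vision camera would deliver the bracketed frames directly rather than reading them from disk.

```python
import cv2
import numpy as np

# Hypothetical bracketed captures of the same scene: one under-exposed,
# one nominal, one over-exposed.
paths = ["under_exposed.png", "normal_exposed.png", "over_exposed.png"]
images = [cv2.imread(p) for p in paths]

# Mertens fusion weights each pixel by contrast, saturation, and
# well-exposedness, then blends the stack -- no exposure times required.
merge = cv2.createMergeMertens()
fused = merge.process(images)  # float32 result, roughly in [0, 1]

# Convert back to 8-bit for display or downstream inspection tools.
result = np.clip(fused * 255, 0, 255).astype(np.uint8)
cv2.imwrite("hdr_fused.png", result)
```

Mertens fusion is just one way to do the merge; radiometric methods such as Debevec's need the exposure times of each shot but recover a true HDR radiance map.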

What is HDR?

HDR is short for high dynamic range. It expands the range of tones and contrasts that a camera/lens combination is able to capture. With greater contrast between neighboring pixels, the output is easier to use for target identification in factory automation, machine vision, and quality inspection applications. In short, here is a buzz phrase to impress the boss with: “HDR provides a wider range of luminosity.”
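That “wider range of luminosity” can be put into numbers. Dynamic range is commonly quoted in decibels as 20·log10 of the ratio between the brightest and darkest signals a sensor can distinguish. Here is a quick back-of-the-envelope sketch; the four-stop bracket spacing is just an illustrative assumption.

```python
import math

def dynamic_range_db(max_signal: float, min_signal: float) -> float:
    """Dynamic range in decibels: 20 * log10(brightest / darkest signal)."""
    return 20 * math.log10(max_signal / min_signal)

# A single 8-bit exposure resolves at most 255 levels above the noise floor.
print(dynamic_range_db(255, 1))         # ~48 dB for one standard shot

# Adding a second shot 4 stops (2**4 = 16x) darker extends the top end,
# so the combined bracket covers a much wider range.
print(dynamic_range_db(255 * 2**4, 1))  # ~72 dB for a two-shot HDR bracket
```

This is why HDR camera modes can advertise figures of 100 dB and up, while a single standard exposure tops out near 48 dB.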

Think of the sunset or sunrise photos you have tried to take. Ever notice they come out darker on your smartphone than they look to your naked eye? Now open the camera application on your smartphone and look for the ‘HDR’ option; it is usually clustered near the options for the flash. Put your driver’s license in a shadowed corner, then take a couple of photos at various distances with HDR activated. Now deactivate HDR and take a couple more. Then go back and zoom in on the pictures to see the difference in clarity of the fine details.

A Great Example of HDR Processing

A great example of HDR in use within our industry is on a support page from Pixelink. It provides a strong visual showing how HDR technology pulls a low-exposure shot together with a high-exposure shot of a circuit board. Instead of just showing the combined image from those two sources, it demonstrates just how different the low exposure is from the high.

On our smartphones, we see only the resulting combined image. But for machine vision applications, you can readily appreciate how helpful it is to have this much control over the image submitted for target processing.
