
Jun 26, 2023

A Quick Overview Of Industrial Camera Interfaces


When selecting a camera for a machine vision system, designers need to consider the image sensor, resolution, the lens mount, and other features, such as how the camera will connect to the processor. Achieving optimal machine vision and imaging system performance often depends on choosing the best camera interface to meet data bandwidth, data reliability, determinism, power, cable length, and other system requirements.

This article is intended to help machine vision and imaging system designers navigate the various camera interface standards and options and select the interface that best meets the requirements of the machine vision or imaging task at hand.

As machine vision and imaging applications accelerated in the late 1990s, the need for standardization became apparent. Standardizing the more common elements of vision systems, such as cameras and lens mounts, makes machine vision and imaging systems less expensive and easier to integrate: it drives down costs, simplifies system design and installation, and ensures component interoperability.

FIGURE 1: Two layers of software help with camera integration. First, the transport layer enumerates the camera, provides access to the camera’s low-level registers, retrieves stream data from the device, and delivers events. The second layer is the image acquisition library, which is part of a software development kit. It uses the transport layer to access camera functionality and allows for grabbing images. Source: Association for Advancing Automation

Interface standards ensure that components such as cameras, frame grabbers, and software — often from various suppliers — interoperate seamlessly. By codifying how a camera connects to a PC, interface standards provide a defined model that allows simpler, more effective use of machine vision and imaging technology.

It all started in the year 2000 with the introduction of Camera Link, the first high-speed digital camera interface standard. Since then, many other camera interface standards have been created. The most popular include GigE Vision, USB3 Vision, Camera Link HS, and CoaXPress (see Table 1).

While the Camera Link standard has been around for 22 years and is basically in maintenance mode, it’s still widely used in machine vision applications. GigE Vision is a low-cost standard that uses existing Ethernet infrastructure and is scalable to any speed grade of Ethernet. It was essentially developed concurrently with GenICam and was consequently the first camera interface to require it.

With GigE Vision, developers can connect a camera directly to the Ethernet port on a PC, or add a standard switch if the application requires it, and it will work; no specialized hardware is required. Widely available Ethernet networking hardware has been around for a long time and keeps getting faster, so GigE Vision is much cheaper than many other solutions. USB3 Vision, introduced in 2012, is another low-cost and convenient standard that uses widely available native computer interface ports; however, maximum cable lengths are limited to 3-5 meters, or up to 60 meters with active optical cabling.

CoaXPress, introduced in 2010, is a very high-speed coaxial cable interface that was pioneered in Japan, which then had a large infrastructure of analog cameras connected via coaxial cable. Camera Link HS was introduced in 2012 as the next-generation replacement for Camera Link in extremely high-speed applications. CoaXPress, Camera Link, and Camera Link HS require frame grabber-based architectures, which enable greater levels of data processing, and use dedicated connectors.

There are two types of camera interface standards: hardware and software. Developers of machine vision and imaging systems must generally complete four basic tasks: finding and connecting to the camera, configuring the camera, grabbing images from the camera, and dealing with asynchronous events signaled by or to the camera.

Accomplishing the four basic tasks requires two layers of software. First is the transport layer, which enumerates the camera, provides access to the camera’s low-level registers, retrieves stream data from the device, and delivers events. The transport layer is governed by the hardware interface standard. Depending on the type of interface, the transport layer may require a dedicated frame grabber or a bus adapter (see Table 2).

The second layer is the image acquisition library, which is part of a software development kit (SDK). Whether provided as a stand-alone item, bundled with a frame grabber, or included in an image processing library, the SDK uses the transport layer to access camera functionality and allows for the grabbing of images (see Figure 1). In addition to the transport layer interface standards already mentioned (GigE Vision, USB3 Vision, Camera Link, Camera Link HS, and CoaXPress), there are two principal software interface standards: GenICam and IIDC2.
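To make this division of labor concrete, here is a minimal Python sketch of the two layers. The class and method names are illustrative placeholders rather than any particular SDK's API; a real system would rely on a vendor-supplied or third-party transport layer (GenTL producer) and acquisition library.

# Illustrative sketch of the two software layers described above.
# All names are hypothetical placeholders, not a real SDK's API.

class TransportLayer:
    """Layer 1: enumerates cameras, exposes low-level registers,
    retrieves stream data, and delivers asynchronous events."""

    def enumerate_cameras(self):
        return ["cam0"]                      # device discovery

    def read_register(self, camera, address):
        return 0                             # low-level register access

    def write_register(self, camera, address, value):
        pass                                 # low-level register access

    def next_stream_buffer(self, camera):
        return bytes(640 * 480)              # raw image payload from the device

    def poll_event(self, camera):
        return None                          # asynchronous events (e.g., end of exposure)


class AcquisitionLibrary:
    """Layer 2: the SDK's image acquisition library, built on the transport layer."""

    def __init__(self, transport):
        self.transport = transport

    def open_first_camera(self):
        return self.transport.enumerate_cameras()[0]

    def configure(self, camera, address, value):
        self.transport.write_register(camera, address, value)

    def grab_image(self, camera):
        return self.transport.next_stream_buffer(camera)


# The four basic tasks: find and connect, configure, grab, handle events.
sdk = AcquisitionLibrary(TransportLayer())
cam = sdk.open_first_camera()
sdk.configure(cam, address=0x1000, value=1)
frame = sdk.grab_image(cam)
print(f"Grabbed {len(frame)} bytes from {cam}")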

The GenICam (Generic Interface for Cameras) software standard defines how data is managed over digital imaging camera interfaces. The generic programming interface is identical regardless of which interface technology is used or what features are implemented.

The GenICam standard includes two parts: the GenApi application programming interface (API) and the GenTL transport layer (TL). GenApi specifies an XML file describing the camera's characteristics, such as manufacturer, model number, and other predefined tags, using the Standard Feature Naming Convention (SFNC). GenTL provides a software interface, independent of the camera's design, that gives access to transport layer control, streaming, and event channels regardless of the implementation (see Table 3).

GenICam is very powerful because it contains so much information. For example, the camera's gain tag not only specifies the current gain value but also indicates whether it's a fixed-point integer or a floating-point value. It indicates whether the value can be read or written (which direction the data flows), specifies the maximum and minimum gain, and includes any other restrictions that apply when setting gain within the camera. All this means that the software package can recognize and interface with the camera, which gives users a consistent interface regardless of whose camera is connected to the computer.
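As a rough illustration of why that metadata matters, the snippet below mimics how an application might use a GenICam-style node map to set gain without any vendor-specific knowledge. The node map here is a hand-rolled stand-in; real SDKs build it automatically from the camera's GenApi XML description.

# Hypothetical stand-in for a GenICam-style feature node; real SDKs
# construct these automatically from the camera's GenApi XML file.
class FeatureNode:
    def __init__(self, name, value, minimum, maximum, writable=True):
        self.name, self.value = name, value
        self.min, self.max = minimum, maximum
        self.writable = writable

    def set(self, value):
        if not self.writable:
            raise PermissionError(f"{self.name} is read-only")
        if not self.min <= value <= self.max:
            raise ValueError(f"{self.name} must be between {self.min} and {self.max}")
        self.value = value


# SFNC-style feature names let the same application code work with any camera.
node_map = {"Gain": FeatureNode("Gain", value=1.0, minimum=0.0, maximum=24.0)}

node_map["Gain"].set(6.0)       # accepted: within the advertised range
try:
    node_map["Gain"].set(99.0)  # rejected: exceeds the camera's maximum
except ValueError as err:
    print(err)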

With the transmitter and receiver using the standardized protocol over the same transport layer, cameras can be connected to any driver or frame grabber. Similarly, the APIs of software interface standards ensure that developers can use drivers/SDKs from different vision libraries. As a result, developers using standards-based SDKs can exchange standardized cameras, drivers, or even whole interface technologies without having to make significant changes to the software.

There are many differentiating characteristics among the various camera interface standards. In addition to considering GenICam support, systems integrators and machine vision and imaging system designers must also consider factors such as bandwidth, data reliability, data determinism, and cable length. Standards also differ in regard to the number of cables possible, frame grabber requirements, fiber optic compatibility, camera power options, and the ease with which cables can be terminated in the field (see Table 2).

To narrow down camera interface standard choices, a good place to start is with data bandwidth requirements. Bandwidth is essentially the size of the pipe through which data flows. As camera manufacturers continue to push the limits of resolution with smaller pixel sizes, increasingly higher pixel counts, and higher frame rates, designers must look beyond just frames per second. Instead, bandwidth can be calculated as resolution × frame rate × bit depth.

Data bandwidth boils down to multiplying pixels per second (resolution × frame rate) by pixel bit depth to calculate total megabits per second (Mb/s). The larger the frame size and the higher the speed, the bigger the data pipe needs to be. If bandwidth is constrained, consider reducing the frame rate and/or the image size.
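For example, the quick calculation below (plain Python, with arbitrary example camera parameters) shows how resolution, frame rate, and bit depth combine into a required data rate.

# Required data bandwidth = resolution x frame rate x bit depth.
# The camera parameters below are arbitrary example values.
width, height = 2048, 1536      # pixels
frame_rate = 60                 # frames per second
bit_depth = 10                  # bits per pixel

pixels_per_second = width * height * frame_rate
bits_per_second = pixels_per_second * bit_depth

print(f"{bits_per_second / 1e6:.0f} Mb/s "
      f"({bits_per_second / 1e9:.2f} Gb/s required)")
# A 2048 x 1536 sensor at 60 fps and 10 bits/pixel needs roughly 1.9 Gb/s,
# already more than a single 1 Gb/s Ethernet link can carry.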

Bandwidth (and cable length, see below) is largely determined by the physics of the physical interface cabling and associated silicon. The major global vision associations have cooperated to put together a very useful brochure titled "Guide To Understanding Machine Vision Standards," which is available from the standards section of their websites. This brochure is especially useful because it was put together by the standards chairs, making it read more like a technical data sheet than a marketing brochure.

For some applications, such as those with very long cable runs, the ease with which cables can be terminated in the field might be an important consideration when choosing a camera interface. With very long cable runs, it can be difficult to predict with high accuracy the exact length of cable that will be needed for each camera.

In such applications, designers may err on the side of caution and specify extra-long cables. With this approach, it may be necessary to coil up extra cable inside the control cabinet. Extra loops of cable can be avoided by choosing an interface that can be easily terminated in the field; examples include RJ45 connectors for GigE Vision, BNC connectors for CoaXPress, and fiber terminations for Camera Link HS (see Table 2).

Likewise, in some applications it may be advantageous to reduce wiring by powering the camera over the same cable that transmits the data. While Camera Link HS does not provide for this in its current specification, all the others do (see Table 2). In longer cable runs, however, it's important to consider the impact that greater resistance may have on power delivery; designers may have to install a power supply near the camera.
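As a rough back-of-the-envelope check (the conductor resistance and current draw below are assumed example figures, not values taken from any standard), the calculation shows why long runs can leave too little voltage at the camera.

# Rough voltage-drop estimate for powering a camera over its data cable.
# All numbers are illustrative assumptions, not figures from any standard.
supply_voltage = 24.0        # volts at the source
camera_current = 0.5         # amps drawn by the camera
conductor_resistance = 0.08  # ohms per meter of conductor (assumed gauge)
cable_length = 60            # meters

# Current flows out and back, so the round-trip resistance is doubled.
voltage_drop = camera_current * conductor_resistance * cable_length * 2
voltage_at_camera = supply_voltage - voltage_drop

print(f"Voltage drop: {voltage_drop:.1f} V, "
      f"leaving {voltage_at_camera:.1f} V at the camera")
# 0.5 A x 0.08 ohm/m x 60 m x 2 = 4.8 V lost, which may justify a supply near the camera.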

Another important factor to consider is cable length. Depending on the application, cable lengths may range from a fraction of a meter up to 100 meters or even more. For example, in most factory automation applications, the distance between the camera and the processor, usually an industrial computer, can be measured in meters. However, cable lengths in applications related to stadium sports analytics or transportation may be hundreds of meters.

Camera Link allows cable lengths up to 15 meters. For copper, GigE Vision leads with 100-meter cables. The USB3 Vision standard works well for applications with cable lengths ranging from 3 to 5 meters; however, with active optical cables (AOC), the distance can be increased up to 60 meters.

CoaXPress specifies cable lengths from 25 to 85 meters, and Camera Link HS specifies cable lengths up to 15 meters. However, with single-mode fiber, cable lengths of up to 5,000 meters can be achieved using Camera Link HS (see Table 4).
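To pull these figures together, a small helper like the one below, using only the nominal maximums quoted above (actual limits depend on cabling quality and data rate), can quickly rule interfaces in or out based on the required cable run.

# Nominal maximum cable lengths quoted above, in meters.
# Real-world limits vary with cabling quality and data rate.
MAX_CABLE_LENGTH_M = {
    "Camera Link": 15,
    "Camera Link HS (copper)": 15,
    "Camera Link HS (single-mode fiber)": 5000,
    "CoaXPress": 85,
    "GigE Vision (copper)": 100,
    "USB3 Vision (copper)": 5,
    "USB3 Vision (active optical cable)": 60,
}

def interfaces_for_run(required_length_m):
    """Return the interfaces whose nominal maximum covers the required run."""
    return [name for name, limit in MAX_CABLE_LENGTH_M.items()
            if limit >= required_length_m]

print(interfaces_for_run(90))   # only GigE Vision copper and Camera Link HS fiber remain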

These are just a few key factors to consider when trying to quickly narrow down which camera interface is most suitable for a particular application. However, the number of cameras, system complexity, data determinism, data reliability, and cost will also factor into the selection process. As camera interface technology continues to evolve, it is important to keep up with new developments. For the latest information on machine vision standards development, please visit www.automate.org/a3-content/vision-standards.


Bob McCurrach, Director of Standards Development – Vision & Imaging, Association For Advancing Automation (A3).
