Giving sight to vision-enabled automotive technologies

February 2nd, 2015, Published in Articles: EE Publishers, Articles: EngineerIT

 

Cars continue to become smarter, integrating new and cutting-edge technologies to make the driving experience safer and more enjoyable. In a white paper entitled “TI Gives insight into vision-enabled automotive technologies”, Texas Instruments authors Zoran Nikolic, Gaurav Agarwal, Brooke Williams and Stephanie Pearson provide insights into this fascinating subject. See the full paper at http://www.ti.com/lit/wp/spry250/spry250.pdf

These new advanced driver assistance system (ADAS) applications, developed with the goal of reducing roadway fatalities, take extensive processing and specialised peripherals, while power consumption must still be managed in a very challenging environment.

ADAS applications demand high compute performance, a small system footprint and operation at extreme temperatures. These opposing requirements create a very challenging environment: delivering maximum compute performance at extreme temperatures, while encapsulated in miniature enclosures, means the parts must be extremely energy-efficient. All electronic components have to withstand temperature variations and extremes while still meeting stringent performance requirements.

Does a car really need vision?

ADAS systems use specialised processors and software to provide real-time information and feedback to drivers based on data captured from sensors positioned inside and outside the vehicle (Fig.1).

ADAS systems are designed to increase the safety of the driver, passengers and pedestrians while also increasing the enjoyment and ease of the driving experience.

With the advances in automotive vision taking place now, autonomous cars truly are in the near future. Google announced in August 2012 that it had developed and tested a self-driving car, covering over 480 000 km in testing under complete computer control. Daimler has also released plans to develop a driverless car by the year 2020. At CES 2013, Audi demonstrated its version of a self-piloting car that was able to drive itself in a traffic jam as well as negotiate the tight spaces of a parking garage. Other car manufacturers, like BMW and Toyota, are also in the testing phase of self-driving cars. But none of this is possible without the most advanced automotive vision solutions, able to reliably deliver real-time processing of visual data at the right cost and power savings.

Fig.1: An overall view of an ADAS system.


Front camera (FC) systems use either one or two cameras mounted between the rearview mirror and windshield, facing forward. With applications like collision avoidance and lane departure warning, FC systems help reduce accidents and fatalities on the road. FC systems using two cameras are able to provide scene depth information, increasing the available data and therefore the accuracy. Next-generation FC systems integrate as many as five algorithms in the same system at a low power footprint.

Front camera applications may include lane departure warning/lane keep assist, pedestrian detection, road sign recognition, forward collision warning/avoidance and intelligent head beam assistance. Lane departure warning systems, for example, monitor the car’s location relative to the lane markings and warn the driver when the car is drifting. In lane keep assist systems, the vehicle actually corrects the steering for the driver, moving the vehicle back into the lane. Pedestrian detection systems use vision analytics to recognise pedestrians and warn the driver of the potential hazard. Similarly, with traffic sign recognition the vehicle’s camera system is able to recognise road signs and pass that information on to the driver automatically. Forward collision warning and avoidance technology enables the vehicle to automatically track hazards around the vehicle and detect an imminent crash. The vehicle can then send a warning to the driver or, in avoidance systems, take action to reduce the severity of an accident.
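The lane departure check described above can be sketched in a few lines. The following is a hypothetical illustration only: the vehicle geometry and warning margin are assumed values, not taken from any production ADAS system, and a real system would derive the marking positions from the camera’s lane-detection pipeline.

```python
# Hypothetical lane departure check. Inputs are the lateral positions of
# the detected lane markings relative to the vehicle's centreline
# (left negative, right positive, in metres). The half-width and margin
# values below are illustrative assumptions.

def lane_departure_warning(left_marking_m, right_marking_m,
                           half_vehicle_width_m=0.9, margin_m=0.2):
    """Return True when a vehicle edge is within margin_m of a marking."""
    left_clearance = abs(left_marking_m) - half_vehicle_width_m
    right_clearance = right_marking_m - half_vehicle_width_m
    return min(left_clearance, right_clearance) < margin_m

# Centred in a 3.5 m lane: 0.85 m clearance on each side, no warning.
print(lane_departure_warning(-1.75, 1.75))   # False
# Drifted 0.8 m to the right: only 0.05 m of clearance, warn the driver.
print(lane_departure_warning(-2.55, 0.95))   # True
```

A lane keep assist system would extend this by feeding the clearance back into a steering correction rather than merely raising a warning.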

Advanced front-lighting systems optimise the headlight beam based on road curvature, visibility and weather conditions as well as vehicle speed and steering angle. Other intelligent headlight systems are able to automatically adjust head beams so that the headlights do not blind the driver of an oncoming car. FC systems typically use high-dynamic range imaging sensors with more than eight bits per pixel and resolutions of WVGA and higher.
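To see why more than eight bits per pixel matters, a back-of-the-envelope calculation helps: for an idealised linear sensor, each extra bit doubles the representable intensity range, adding about 6 dB of dynamic range. This is only a lower-bound sketch, since real automotive HDR sensors use multi-exposure or companding schemes to reach far higher figures.

```python
import math

# Idealised dynamic range of a linear n-bit sensor in decibels:
# 20 * log10(2^n), i.e. roughly 6.02 dB per bit. Real HDR sensors
# exceed this via multi-exposure capture, so treat it as a floor.

def dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(8), 1))   # 48.2 -- a standard 8-bit pipeline
print(round(dynamic_range_db(12), 1))  # 72.2 -- an assumed 12-bit HDR sensor
```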

Surround view (SV) systems present a vast amount of useful information to the driver in real time, with minimum glass-to-glass latency. The 360° view as well as park assist and bird’s-eye view allow the driver to be fully aware of the vehicle’s surroundings. For example, the system could allow a driver to see a bicycle that has been left behind a parked car, or it can show how close the vehicle is getting to other vehicles when trying to park in a tight spot. In fully automated systems, the vehicle is able to use this information to actually park itself. SV systems increase the safety of operating vehicles by allowing drivers to see objects they might otherwise miss, as well as making these vehicles easier and more fun to drive.

A surround view system typically consists of at least four cameras (a front camera, one side camera on each side of the car and a rear camera) as well as ultrasound sensors (see Fig.1). SV systems typically require video streams with at least 1280 × 800 pixel resolution from each camera at a frame refresh rate of 30 fps. The cameras stream video to the central processor either as analogue NTSC, digitally uncompressed via low-voltage differential signalling (LVDS/FPD-Link), or compressed via Gigabit Ethernet (GbE). The camera streams can then be stitched together to form a cohesive and seamless view around the outside of the car. Outputs from the surround vision system are sent to one or more heads-up displays at VGA or higher resolution. This display makes it easy for the driver to recognise and react to any hazards surrounding the car. Additionally, in the future, 3D rendering could be used for highly realistic surround views.

The white paper discusses the surround view data flow in greater detail. Most surround view systems use between four and six external high-resolution, high-dynamic range video inputs. For LVDS-based surround view, this translates into a processor requirement to support four or more camera inputs wider than 8 bits/pixel.
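The figures above imply a substantial raw bandwidth requirement, which a quick calculation makes concrete. The 12 bits/pixel value below is an assumption consistent with “wider than 8 bits/pixel”; actual sensor formats vary.

```python
# Rough uncompressed bandwidth estimate for a four-camera surround view
# system, using the resolution and frame rate quoted in the text.
# BITS_PER_PIXEL = 12 is an assumed HDR raw depth, not a specified value.

CAMERAS = 4
WIDTH, HEIGHT = 1280, 800
FPS = 30
BITS_PER_PIXEL = 12  # assumption: "wider than 8 bits/pixel"

per_camera_bps = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
total_bps = per_camera_bps * CAMERAS

print(f"{per_camera_bps / 1e6:.1f} Mbit/s per camera")  # 368.6 Mbit/s per camera
print(f"{total_bps / 1e9:.2f} Gbit/s aggregate")        # 1.47 Gbit/s aggregate
```

Under these assumptions the aggregate raw stream exceeds what a single 1 Gbit/s Ethernet link can carry, which suggests why compressed GbE or parallel point-to-point LVDS/FPD-Link connections are used.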

ADAS systems demand high processing power and peripherals to support real-time vision analytics. TDA2x SoCs deliver this with a scalable, highly integrated design consisting of dual general-purpose ARM Cortex-A15 cores, dual C66x DSPs, integrated peripherals and a Vision AccelerationPac to offload vision analytics processing while reducing the power footprint. TDA2x SoCs also include video and graphics capabilities for advanced front camera, park assist and surround view applications.

With the TDA2x SoC, TI has efficiently mapped out the ARM general-purpose processing cores to manage core control processing. Mid- to high-level processing is performed by one or more DSP cores optimised for real-time functions such as object detection, and low- to mid-level processing is handled by the Vision AccelerationPac.

The Vision AccelerationPac was specifically designed to offload the processing of vision algorithms from the TDA2x DSP and ARM cores, yielding the best performance for low- to mid-level vision processing at the lowest power footprint.

The TDA2x SoC is the best choice for highly integrated SoCs targeted at ADAS applications, in large part due to the cutting-edge, dedicated Vision AccelerationPac designed specifically for optimising demanding vision capabilities in a system while reducing the overall power footprint. Engineers can feel confident designing with the TDA2x SoC, which is built to the quality needed to meet ISO 26262 safety requirements. With TI, next-generation advanced driver assistance systems are now possible.

To download the full white paper, see http://www.ti.com/lit/wp/spry250/spry250.pdf

Contact Erich Nast, Avnet Kopp, Tel 011 319-8600, erich.nast@avnet.eu

 
