Black Sesame Technologies: Ensuring Safety in Innovation

Charles Qi, VP, SoC Architecture
Sitting in the backseat of an autonomous vehicle, looking out of the window, and enjoying the journey—it is indeed a fascinating thought. However, the appeal comes to a halt when one considers the gray areas in safety. From an engineering standpoint, the first step toward the safe functioning of an autonomous vehicle is an efficient perception system design. Much like the driver’s eyes and ears, perception capabilities give the autonomous vehicle the ability to detect and track all moving and static objects, and to extract relevant data from its immediate surroundings. With a number of service providers and seemingly endless options of sensor systems available today, choosing the optimal system for autonomous vehicle applications becomes a tedious task. Black Sesame Technologies (BST) provides a complete visual perception solution, based on an embedded visual perception platform, which integrates image processing, artificial intelligence, and computational imaging for advanced driver-assistance systems (ADAS) and autonomous driving. “Our core mission is to become the leading autonomous driving platform provider and offer a ubiquitous perception platform for self-driving vehicles,” asserts Charles Qi, VP, SoC Architecture at Black Sesame Technologies.

Part of the reason that perception accuracy falls short may be attributed to the complex operating conditions of camera sensors—such as lighting, weather, and vehicle motion—which drastically affect the captured raw image data. Without properly enhancing the image quality, even the most advanced AI perception would fail to recover information fully. As a digital image processing, computer vision, and AI technology firm that creates solutions for real-world challenges, BST takes a holistic approach to designing the perception pipeline. It provides automotive-grade, high-resolution HDR sensor support with light control in sensor module design. BST can also integrate high-performance automotive ISP pipelines with HDR and low-light support. The company delivers advanced AI neural network architecture training for autonomous driving with challenging scenes and use cases, and it can optimize and integrate advanced algorithms into a high-performance ASIC.

The company focuses on level 3 (L3) and above autonomous driving System on a Chip (SoC) design and has built its SoC as a sensing platform enabled by core image sensing, computer vision, and AI algorithms developed in house and by its partners and customers. By taking a holistic approach to silicon chip design, BST specializes in co-designing industrial AI solutions from the chip level up to the application, following the ISO 26262 standard.


Our core mission is to become the leading autonomous driving platform provider and offer a ubiquitous perception platform for self-driving vehicles

“We take the entire sensing platform pipeline, and after considering the sensory input, optimize the image quality and perception processing and put it in a single silicon chip. We aim to provide both performance in sensing accuracy and performance in computation capacity on a single device, as needed for an automotive platform,” explains Qi.

Being experts in chip design and AI application engineering, the company undertakes core R&D functions, including core algorithm development, application-specific integrated circuit (ASIC) design, software systems, and ADAS engineering application. BST offers its customers high-quality, reliable technology and applications through its team’s expertise in sensor selection, sensor image processing, pipeline optimization, and perception algorithms.

BST has developed in-vehicle monitoring systems with facial recognition capability, which monitor driver fatigue and poor driving behavior. The driver monitoring solution has gained traction in industries like insurance and fleet management, and the company aims to improve this feature in L3 autonomous driving systems. “We are taking the mature L2 driver monitoring system product we developed and extending it into an L3 driver monitoring system. By further morphing it into a broad range of in-vehicle monitoring applications deployable in fields such as ride-sharing and public transportation, we are leveraging our algorithm know-how to expand our market share. Those are the directions we are trying to expand into because of the synergy we can bring with our existing driver monitoring capabilities,” states Qi.

Looking ahead, the company plans to enhance its overall technology offering by adding adjacent capabilities such as sensor fusion, localization, and mapping to further develop its autonomous vehicle capabilities.

Company
Black Sesame Technologies

Headquarters
Santa Clara, CA

Management
Charles Qi, VP, SoC Architecture

Description
The company provides a complete visual perception solution, based on an embedded visual perception platform integrating image processing, artificial intelligence, and computational imaging, for advanced driver-assistance systems (ADAS) and autonomous driving. BST develops algorithms for computational imaging and machine vision, such as camera array solutions for mobile devices, vision perception systems for ADAS and autonomous driving, and algorithms for sensor fusion. As an AI digital imaging technology firm that creates solutions for real-world AI challenges, the company offers solutions for image processing, computational imaging, and embedded sensing platforms.
