Advanced autopilot systems have learned to respond appropriately to dangerous situations, but they are still let down by their "eyes". The bottleneck is the rate at which data from optical cameras can be processed: at a recording rate of 120 frames per second, any stray light source entering the field of view generates extra gigabytes of data, and computing power has to be spent just to filter it out.
Professor Chen Shoushun of Nanyang Technological University in Singapore proposes to change everything: the cameras, the image-processing algorithms, and the control methods. Instead of capturing a complete picture of the scene and then running pattern recognition, his system reacts to changes in the light received by individual pixels of the sensor. The signal is polled roughly every nanosecond, which allows near-instant analysis of the scene.
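The core idea of such event-based sensors can be sketched in a few lines: a pixel reports nothing while the scene is static, and emits a tiny event record only when its brightness changes past a threshold. The names, the `threshold` value, and the event layout below are illustrative assumptions, not the actual Celex format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    x: int
    y: int
    t_ns: int      # timestamp in nanoseconds
    polarity: int  # +1 = got brighter, -1 = got darker

def to_events(prev: List[List[int]], curr: List[List[int]],
              t_ns: int, threshold: int = 10) -> List[Event]:
    """Emit an event only for pixels whose intensity changed
    by more than `threshold` since the last reading."""
    events = []
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            if c - p > threshold:
                events.append(Event(x, y, t_ns, +1))
            elif p - c > threshold:
                events.append(Event(x, y, t_ns, -1))
    return events

# A static scene produces no events; only the one changed pixel fires.
prev = [[100, 100], [100, 100]]
curr = [[100, 100], [100, 150]]
evts = to_events(prev, curr, t_ns=1)
```

With a conventional camera the same two readings would cost two full frames; here the output is a single event, which is why the data stream stays small even at nanosecond-scale polling.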
"Instant" here means that the system manages to react and send a command to the control module faster than objects can move in real space. For example, a car in the oncoming lane has only just begun a dangerous maneuver, and the drone has already concluded that this speck of light is a threat that must be dodged. Identifying the object as a truck, assembling the footage and writing the full video to memory can all happen later.
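One simple way to react before any recognition runs is to watch the event rate in a region of interest and trigger avoidance the moment it spikes. This is a minimal sketch of that idea; the class name, thresholds, and window size are hypothetical, not taken from the Celex system.

```python
from collections import deque

class HazardTrigger:
    """Fire an avoidance signal as soon as the event rate in a
    watched region spikes, before any object recognition runs."""

    def __init__(self, rate_threshold: int = 50, window_ns: int = 1_000_000):
        self.rate_threshold = rate_threshold  # events needed to trigger
        self.window_ns = window_ns            # sliding-window length
        self.recent = deque()                 # timestamps of recent events

    def on_event(self, t_ns: int) -> bool:
        self.recent.append(t_ns)
        # Drop events that fell out of the sliding window.
        while self.recent and t_ns - self.recent[0] > self.window_ns:
            self.recent.popleft()
        # True means "dodge now"; classification can run afterwards.
        return len(self.recent) >= self.rate_threshold

trigger = HazardTrigger(rate_threshold=3, window_ns=1000)
fired = [trigger.on_event(t) for t in (0, 10, 20)]  # → [False, False, True]
```

The cheap counting step runs on every event; the expensive steps, recognizing the truck and archiving video, are deferred, which is exactly the ordering the article describes.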
A special data format with compact files has been developed for the camera, named Celex. A working prototype, comprising the camera and a situation-analysis module, was shown at the EI 2017 symposium. The project has drawn the interest of the firm Hillhouse Tech, and if things go smoothly, a commercial version of the ultra-fast drone camera may appear by the end of this year.