Novel Computer Vision Algorithms Development
As part of an R&D project, a fast-growing EU technology startup required support in developing novel computer vision algorithms for an event-based camera vision platform. While computer vision algorithms for conventional frame-based cameras are widely available in open source and well described in the literature, the newer event-based (neuromorphic) vision platform requires a different approach and leaves substantial room for innovation.
Objectives and Scope
The primary objective of the project was to study the theoretical-physics patents underlying the client’s technology and to develop a functional Proof of Concept (PoC) for low-latency, real-time processing of data from event sensors, aimed at laser beam tracking and precise trajectory detection. Key goals included:
- Creating a highly reliable embedded solution with efficient, low-latency multithreaded processing of data from multiple synchronized event sensors.
- Developing a computer vision pipeline and defining the necessary algorithms to achieve the target data quality.
- Modeling and estimating algorithm performance using offline data and a Python environment.
- Implementing real-time algorithms on an embedded platform.
- Analyzing the outcomes and filing patents to enhance the customer’s IP portfolio.
Approach
Our team thoroughly explored the project landscape, starting with a detailed review of the client’s intellectual property to identify key claims and areas for further development. Our research and analysis focused on several critical areas:
- Event Camera Data Processing: Event cameras do not capture images using a shutter as conventional (frame) cameras do. Instead, each pixel inside an event camera operates independently and asynchronously, reporting changes in brightness as they occur, and staying silent otherwise. This required a novel approach for low-latency, real-time processing algorithms. We captured sensor data and modeled it in a Python environment using an extensive research toolkit, which allowed us to quickly test hypotheses, estimate algorithm performance, and assess output quality.
- Laser Beam Tracking: Working with event sensors for laser beam tracking involved a deep understanding of sensor physics and hardware implementation. Close collaboration with the sensor supplier was essential to fine-tune the sensor and achieve the best data quality. Some hardware limitations required additional external hardware and software solutions for sensor clock synchronization.
- Real-Time Algorithm Implementation: The algorithms were ported to C/C++ for real-time execution on an embedded platform. We designed a highly efficient architecture featuring lock-free event buffering, parallel processing across multiple CPU cores, and minimal memory copying and CPU cache synchronization.
- Flexible and Scalable Pipeline: The computer vision pipeline was initially developed in an x64 Linux environment but can be cross-compiled for any ARM embedded platform. It runs both in a containerized setup and without containers, simplifying deployment, testing, and continuous integration across multiple platforms.
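To illustrate the offline modeling step, the sketch below shows how an asynchronous event stream might be represented and sliced in NumPy. The `(t, x, y, p)` layout, sensor resolution, and synthetic data are assumptions for illustration; real recordings would come from the sensor, not from a random generator.

```python
import numpy as np

# Synthetic event stream: each event is (timestamp_us, x, y, polarity).
# The field layout is an assumption for this sketch, loosely modeled on
# the fields event-camera SDKs typically expose.
rng = np.random.default_rng(0)
n = 10_000
events = np.zeros(n, dtype=[("t", "u8"), ("x", "u2"), ("y", "u2"), ("p", "i1")])
events["t"] = np.sort(rng.integers(0, 1_000_000, n))  # 1 s of data at µs resolution
events["x"] = rng.integers(0, 640, n)
events["y"] = rng.integers(0, 480, n)
events["p"] = rng.choice([-1, 1], n)

def slice_window(ev, t0, t1):
    """Return all events in the half-open time window [t0, t1) µs.

    Events are sorted by timestamp, so a binary search suffices.
    """
    lo, hi = np.searchsorted(ev["t"], (t0, t1))
    return ev[lo:hi]

win = slice_window(events, 100_000, 101_000)  # one 1 ms window
```

Keeping events in a flat structured array like this makes windowing, rate estimation, and hypothesis testing fast enough to iterate on algorithm ideas before committing to an embedded implementation.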
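The buffering idea behind the real-time implementation can be sketched as a single-producer/single-consumer ring buffer over a preallocated array: no per-event allocation and no copying on the hot path. This Python version is a teaching sketch only (the actual implementation is C/C++, and Python's GIL means this is not genuinely lock-free); the class name and capacity policy are illustrative assumptions.

```python
import numpy as np

class EventRing:
    """Illustrative SPSC ring buffer over a preallocated structured array.

    One slot is kept empty to distinguish a full buffer from an empty one,
    so a ring of capacity N holds at most N - 1 events.
    """

    def __init__(self, capacity):
        self.buf = np.zeros(capacity, dtype=[("t", "u8"), ("x", "u2"), ("y", "u2")])
        self.capacity = capacity
        self.head = 0  # next write position (touched only by the producer)
        self.tail = 0  # next read position (touched only by the consumer)

    def push(self, t, x, y):
        nxt = (self.head + 1) % self.capacity
        if nxt == self.tail:      # buffer full: drop the event (policy choice)
            return False
        self.buf[self.head] = (t, x, y)
        self.head = nxt
        return True

    def pop(self):
        if self.tail == self.head:  # buffer empty
            return None
        ev = self.buf[self.tail].item()  # copy out as a plain tuple
        self.tail = (self.tail + 1) % self.capacity
        return ev

ring = EventRing(capacity=8)
ring.push(1, 10, 20)
ring.push(2, 11, 21)
first = ring.pop()  # (1, 10, 20)
```

Because the producer writes only `head` and the consumer writes only `tail`, the two sides never contend on the same index, which is the property the C/C++ version exploits to avoid locks.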
Technology
For accurate 3D perception using laser scanning technology, precise laser beam tracking is essential. This is achieved using event camera sensors with 1-microsecond time resolution. The pipeline detects candidate laser-spot positions in the event stream with low latency, predicts spot movement and trajectory, filters out noise events, and reconstructs the laser beam position with subpixel resolution.
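One common way to get subpixel position from discrete pixel events is a weighted centroid over a short time window. The sketch below shows that idea in NumPy; it is a simplified stand-in, not the pipeline's actual reconstruction method, and the sample coordinates are made up.

```python
import numpy as np

def spot_centroid(xs, ys, weights=None):
    """Estimate the spot centre as the (optionally weighted) mean event position.

    Because the mean averages many integer pixel coordinates, the result
    lands between pixels, giving subpixel resolution.
    """
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    w = np.ones_like(xs) if weights is None else np.asarray(weights, dtype=float)
    return float(np.sum(w * xs) / np.sum(w)), float(np.sum(w * ys) / np.sum(w))

# Hypothetical events clustered around a spot near pixel (120, 87.5):
xs = [120, 120, 121, 120, 121, 120]
ys = [87, 88, 87, 88, 88, 87]
cx, cy = spot_centroid(xs, ys)  # cy == 87.5
```

In practice the weights could encode event recency or polarity, and noise events would be rejected before averaging so that outliers do not drag the centroid away from the true spot.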
Python, NumPy, Linux, Docker, C/C++, OpenCV, Metavision SDK
Outcome
Our team delivered a fully functional pipeline running on an embedded platform as part of a larger R&D project. The PoC was successfully demonstrated at a public industry event. The research outcomes were analyzed for novelty, leading to a patent application. A PCT patent was granted, enhancing our customer’s IP portfolio.