The rich data provided by rapidly developing sensor technologies can be processed with improved data-fusion techniques. The result is highly accurate multi-sensor integration that effectively supports high-level computer vision and robotics tasks.
Forming Point Clouds and Heat Maps
Since standard image processing becomes ineffective in critical applications such as autonomous driving, there is a need to generate 3D point clouds that map objects and efficiently track the movement of anything moving on the highway system.
Once a point cloud is formed, an accurate heat map can be generated and uploaded to the universal library. As more and more autonomous vehicles take to the roads, the data in the universal library will become more specific and precise with every input from the various sensors on the cars in the fleet.
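As a minimal sketch of the point-cloud-to-heat-map step described above, one simple approach is to accumulate LiDAR returns into a 2D ground-plane grid, counting hits per cell. The function name, grid size, and cell resolution here are illustrative assumptions, not the production pipeline:

```python
import numpy as np

def point_cloud_to_heat_map(points, cell_size=0.5, extent=50.0):
    """Accumulate LiDAR points (N x 3, metres, vehicle-centred) into a
    2D heat map by counting returns per ground-plane cell.
    (Illustrative sketch; parameters are assumed values.)"""
    n_cells = int(2 * extent / cell_size)
    heat = np.zeros((n_cells, n_cells))
    # Map x/y coordinates to grid indices, discarding out-of-range points.
    ix = ((points[:, 0] + extent) / cell_size).astype(int)
    iy = ((points[:, 1] + extent) / cell_size).astype(int)
    mask = (ix >= 0) & (ix < n_cells) & (iy >= 0) & (iy < n_cells)
    np.add.at(heat, (ix[mask], iy[mask]), 1.0)  # unbuffered accumulation
    return heat

# Example: a small cluster of returns around (10 m, 5 m) ahead of the car.
cloud = np.array([[10.0, 5.0, 0.2], [10.1, 5.1, 0.3], [10.2, 4.9, 0.1]])
heat = point_cloud_to_heat_map(cloud)
```

Successive scans (from one car or many) can be added into the same grid, which is how repeated fleet inputs would sharpen the shared map over time.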
Machine learning and predictive algorithms can be used together with GPS, IMU, and various other sensors so that the car learns driving patterns and road sensing. With every drive, the neural network adds input to its universal cloud, making the car a better driver.
We focus on the right things
Many companies focus on tracking an object in the point cloud or image while ignoring the motion of the platform itself, which must be estimated in order to locate moving objects. There are algorithms that discriminate between moving and static objects in images, but these are not reliable enough for critical applications such as traffic monitoring.
Therefore, the motion of the platform should be measured using external sensors, such as an integrated Global Positioning System (GPS) and Inertial Measurement Unit (IMU), forming a GPS/IMU navigation solution. This solution enables the transfer of local laser-scanner data into the global coordinate system and allows the use of prior information, such as Geographic Information System (GIS) maps.
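The local-to-global transfer mentioned above amounts to rotating each scanner point by the platform's heading and translating it by the platform's position. A planar (2D) sketch, with the GPS position and IMU yaw taken as given and all names hypothetical:

```python
import numpy as np

def local_to_global(points, position, yaw):
    """Transform laser-scanner points (N x 2, platform frame) into the
    global frame using a GPS position and IMU yaw angle (radians).
    (2D sketch; a full solution also uses roll, pitch, and altitude.)"""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])  # planar rotation matrix
    return points @ R.T + position

# A point 5 m straight ahead of a platform at (100, 200), heading 90 degrees.
pts = np.array([[5.0, 0.0]])
global_pts = local_to_global(pts, np.array([100.0, 200.0]), np.pi / 2)
```

Once the points are in global coordinates, they can be matched directly against prior information such as GIS road maps.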
The success rate of object tracking depends mainly on the accuracy of the pose estimation and the efficiency of the estimator. The estimator may become unreliable and diverge from the correct solution, because pose-estimation algorithms are sensitive to the rigidity assumption on objects and degrade in the presence of noise and occlusion. For example, in densely populated countries where road congestion is unpredictable, RADAR or LiDAR may fail to capture accurate data, which degrades the 3D point cloud and heat maps and, in turn, the efficiency of the autonomous driving platform itself.
Therefore, we propose a simple Kalman filter, which is more resilient to instability, together with a non-holonomic constraint used to estimate the orientation of objects. This, in tandem with highly advanced optical cameras and orientation sensors, will provide levels of efficiency not witnessed before. It is important to note that the entire road system is designed to be perceived by humans with their eyes (optics) and sense of orientation (movement); it should come as no surprise that the machine learning platform should do the same.
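To make the filtering idea concrete, here is a minimal sketch of one predict/update cycle of a linear Kalman filter with a constant-velocity state and position-only measurements. The non-holonomic assumption (vehicles move along their heading, not sideways) lets orientation be read from the velocity estimate. The state layout, noise values, and function name are illustrative assumptions, not the authors' exact design:

```python
import numpy as np

def kalman_step(x, P, z, dt=0.1, q=0.5, r=1.0):
    """One predict/update cycle of a linear Kalman filter.
    State x = [px, py, vx, vy]; measurement z = observed position.
    (Sketch with assumed noise levels q and r.)"""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                       # position integrates velocity
    Q = q * np.eye(4)                            # process noise (tuning guess)
    R = r * np.eye(2)                            # measurement noise
    H = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0]])  # observe position only
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the position measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    # Non-holonomic assumption: the object moves along its heading,
    # so orientation can be read directly from the velocity estimate.
    heading = np.arctan2(x[3], x[2])
    return x, P, heading

# Track an object whose measured position moves steadily along +x.
x, P = np.zeros(4), np.eye(4)
for z in ([1.0, 0.0], [2.0, 0.0], [3.0, 0.0]):
    x, P, heading = kalman_step(x, P, np.array(z))
```

Because the measurements advance along +x, the velocity estimate turns positive in x and the inferred heading settles near zero, illustrating how orientation falls out of the motion estimate rather than being measured directly.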
Fully autonomous driving will be a remarkable breakthrough in the automation and automotive world. Numerous constraints must still be addressed to achieve it, but experts believe it is not an impossible task.