
GPS-GIS, IMU & HEAT MAPPING

Heat Mapping Solutions built on GPS-GIS combine the rich data provided by rapidly developing sensor technologies with improved data-fusion techniques. The result is highly accurate multi-sensor integration that effectively supports high-level computer vision and robotics tasks.

 

Prominent Features

Integrates data from multiple sensors, such as LiDAR and an Inertial Measurement Unit (IMU), with prior information from Geospatial Information System (GIS) maps, using a bank of Kalman filters with a non-holonomic constraint to provide better orientation estimates of moving objects (a minimal sketch of such a filter bank follows this list).

Machine Learning and Predictive Algorithms to provide an optimized and safe automated experience.

An innovative combination of 3D Point Clouds and Heat Mapping methodologies.

Provides an edge in location and navigation services by focusing on the motion of the platform rather than tracking an object in the point cloud or image.
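For illustration, here is a minimal sketch of the "bank of Kalman filters" idea referenced above: one constant-velocity filter per tracked object, fed with (x, y) object centroids assumed to be already georeferenced by the GPS/IMU solution. The noise values, frame period, and greedy gating scheme are illustrative assumptions, not the production design.

```python
# Minimal sketch of a bank of Kalman filters: one constant-velocity
# filter per tracked object (all parameters are illustrative assumptions).
import numpy as np

DT = 0.1  # assumed LiDAR frame period in seconds

F = np.array([[1, 0, DT, 0],   # state: [x, y, vx, vy]
              [0, 1, 0, DT],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
H = np.array([[1, 0, 0, 0],    # only position is measured
              [0, 1, 0, 0]])
Q = np.eye(4) * 0.05           # assumed process noise
R = np.eye(2) * 0.5            # assumed LiDAR measurement noise

class Track:
    def __init__(self, xy):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.eye(4)

    def predict(self):
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def update(self, z):
        y = z - H @ self.x                      # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

def step(tracks, detections, gate=3.0):
    """Predict every filter in the bank, then greedily associate each
    detection to the nearest predicted track within `gate` metres."""
    for t in tracks.values():
        t.predict()
    for z in detections:
        z = np.asarray(z, dtype=float)
        if tracks:
            tid, best = min(
                ((k, np.linalg.norm(z - H @ t.x)) for k, t in tracks.items()),
                key=lambda kv: kv[1])
            if best < gate:
                tracks[tid].update(z)
                continue
        tracks[len(tracks)] = Track(z)          # start a new track
    return tracks
```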

Forming Point Clouds and Heat Mapping Solutions using GPS-GIS

 

Since standard image processing becomes ineffective in critical applications such as autonomous driving, 3D point clouds need to be generated to map objects and efficiently track anything moving on the highway system.

 

Once a point cloud is formed, an accurate heat map can be generated and uploaded to the universal library. As more and more autonomous vehicles are driven, the data in the universal library becomes more specific and precise with every input from the various sensors on the cars in the fleet.
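As a concrete illustration of how a georeferenced point cloud can be turned into a heat map, the following sketch rasterises the (x, y) footprint of the points onto a fixed grid and counts hits per cell. The cell size, grid bounds, and function name are assumptions made for the example only.

```python
# Minimal sketch: rasterise a georeferenced 3D point cloud into a 2D
# occupancy "heat map" on a fixed grid (cell size and bounds are assumed).
import numpy as np

def heat_map(points_xyz, origin=(0.0, 0.0), cell=0.5, shape=(400, 400)):
    """points_xyz: (N, 3) points in global (x, y, z) coordinates.
    Returns a (rows, cols) array counting the points that fall in each cell."""
    xy = np.asarray(points_xyz)[:, :2] - np.asarray(origin)
    idx = np.floor(xy / cell).astype(int)
    rows, cols = shape
    keep = (idx[:, 0] >= 0) & (idx[:, 0] < cols) & \
           (idx[:, 1] >= 0) & (idx[:, 1] < rows)
    idx = idx[keep]
    grid = np.zeros(shape, dtype=np.int32)
    np.add.at(grid, (idx[:, 1], idx[:, 0]), 1)   # accumulate hits per cell
    return grid
```

Grids accumulated over successive drives can then be summed or averaged before being uploaded to the shared library described above.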

 

AI Integration

 

Machine Learning and Predictive Algorithms can be used along with GPS, IMU, and various other sensors so that the car learns driving patterns and road sense. With every drive, the neural network adds new input to its universal cloud, making the car a better driver.

 

We focus on the right things in Heat Mapping Solutions using GPS-GIS

 

Many companies focus on tracking an object in the point cloud or image while ignoring the motion of the platform, which must be estimated in order to locate moving objects. Some algorithms discriminate between moving and static objects in images, but these are not reliable enough for critical applications such as traffic monitoring.

 

Therefore, the motion of the platform should be measured using external sensors, such as an integrated Global Positioning System (GPS) and Inertial Measurement Unit (IMU), forming a GPS/IMU navigation solution. This solution enables the transfer of local laser scanner data into the global coordinate system and allows the use of prior information, such as Geospatial Information System (GIS) maps.
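A minimal sketch of that transfer into the global coordinate system, assuming the GPS position has already been converted to a local ENU frame and the IMU supplies roll, pitch, and yaw (the angle conventions and function names here are illustrative):

```python
# Minimal sketch: move local laser-scanner points into the global frame
# using the GPS/IMU navigation solution (position + roll/pitch/yaw).
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Z-Y-X (yaw-pitch-roll) rotation from the body frame to the global frame."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def to_global(points_local, position_enu, roll, pitch, yaw):
    """points_local: (N, 3) scanner points in the vehicle body frame.
    position_enu: (3,) GPS position already expressed in a local ENU frame."""
    R = rotation_matrix(roll, pitch, yaw)
    return np.asarray(points_local) @ R.T + np.asarray(position_enu)
```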

 

The efficiency of Heat Mapping Solutions using GPS-GIS

 

The success rate of object tracking depends mainly on the accuracy of the pose estimation and the efficiency of the estimator. Pose estimation algorithms are sensitive to the rigidity assumption of objects and degrade in the presence of noise and occlusion, so the estimator may become unreliable and diverge from the correct solution. For example, in countries with large populations where road congestion is unpredictable, RADAR or LiDAR might not capture accurate data, which can degrade the 3D point cloud and heat maps and, in turn, the efficiency of the autonomous driving platform itself.

 

Therefore, we propose a simple Kalman filter, which is more resilient to instability, together with a non-holonomic constraint used to estimate the orientation of objects. In tandem with highly advanced optical cameras and orientation sensors, this approach can deliver levels of efficiency not seen before. It is worth noting that the entire road system is designed to be perceived by humans through their eyes (optics) and their sense of orientation (movement); it should come as no surprise that a machine learning platform should do the same.
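To make the non-holonomic idea concrete, the sketch below estimates the heading of a tracked road vehicle from the direction of its estimated velocity (a road vehicle cannot slip sideways) and smooths that pseudo-measurement with a scalar Kalman filter. The noise values, speed gate, and class name are illustrative assumptions rather than the exact filter we deploy.

```python
# Minimal sketch: orientation estimation under a non-holonomic constraint.
# Heading is taken from the velocity direction and smoothed by a scalar
# Kalman filter (noise values are illustrative assumptions).
import numpy as np

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

class HeadingFilter:
    def __init__(self, q=0.01, r=0.2):
        self.theta = 0.0   # heading estimate (rad)
        self.P = 1.0       # estimate variance
        self.q, self.r = q, r

    def update(self, vx, vy, min_speed=0.5):
        self.P += self.q                          # random-walk prediction
        if np.hypot(vx, vy) < min_speed:
            return self.theta                     # constraint uninformative at rest
        z = np.arctan2(vy, vx)                    # non-holonomic pseudo-measurement
        k = self.P / (self.P + self.r)            # scalar Kalman gain
        self.theta = wrap(self.theta + k * wrap(z - self.theta))
        self.P *= (1 - k)
        return self.theta
```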

 

Fully autonomous driving will be a remarkable breakthrough in the automation and automotive world. Numerous constraints still need to be addressed to achieve it, but experts believe it is not an impossible task.

 

USE CASE – 01: AUTONOMOUS DRIVING

 

The Need

 

The key component of any autonomous driving platform is monitoring vehicular road traffic. The platform must be able to trace paths around objects and predict their locations and trajectories in order to discern and track moving objects. Road traffic monitoring can be further divided into object segmentation, object tracking, and object recognition. Every national road system is designed around optical sensing: it is conceived to be perceived by human eyes, so that drivers can look, comprehend, and act accordingly. For autonomous driving, a system that can perceive roads, objects, and road signs in the same way needs to be developed.

 

The Problem

 

As image and point cloud processing algorithms have progressed, object tracking approaches have become stronger. Laser sensors provide an excellent view of the area around a vehicle, but the point cloud of an object may be noisy and occluded, making it prone to various errors. Object tracking therefore remains a problem, especially for low-quality point clouds.

 

In addition, a point cloud may be sparse when inexpensive laser scanners are used, and the motion of the platform may introduce a systematic error into the point cloud. We focus on point cloud processing since image-processing algorithms are generally less reliable for autonomous vehicles.

 

The Solution

 

To segment and track moving objects in a scene, we provide a pipeline that integrates data from multiple sensors with prior information such as a Geospatial Information System (GIS) map. Even a low-quality GIS map, combined with GPS, can improve tracking accuracy and decrease processing time.
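One way such a coarse GIS prior can be used, sketched under the assumption that the road area is available as a polygon in the same global frame as the georeferenced points, is to discard LiDAR returns that fall outside the road before segmentation:

```python
# Minimal sketch: keep only LiDAR points inside a GIS road polygon,
# reducing the data that the segmentation and tracking stages must process.
import numpy as np
from matplotlib.path import Path

def keep_on_road(points_xyz, road_polygon_xy):
    """points_xyz: (N, 3) georeferenced points.
    road_polygon_xy: (M, 2) vertices of the road area taken from the GIS map."""
    pts = np.asarray(points_xyz)
    road = Path(np.asarray(road_polygon_xy))
    mask = road.contains_points(pts[:, :2])   # point-in-polygon test on (x, y)
    return pts[mask]
```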

 

In addition, a non-holonomic constraint is applied to provide a better orientation estimation of moving objects. Moving objects can be accurately detected and tracked over time based on modest-quality Light Detection And Ranging (LiDAR) data, a coarse GIS map, and a fairly accurate Global Positioning System (GPS) and Inertial Measurement Unit (IMU) navigation solution.

 

USE CASE – 02: GPS/IMU FOR THE VISUALLY IMPAIRED

 

The Need

 

It is imperative that visually impaired people are able to navigate and move around their cities without external supervision. The aim of this work is to improve navigation solutions for the visually impaired, specifically addressing problems with the heading given by current GPS antennas, which is not reliable when a pedestrian or car is moving at less than 10 km/h.

 

The Solution

 

The proposed solution pairs an inertial measurement unit (IMU) with a GPS, giving navigation information in the form of heading and distance to the final destination. We believe that GPS/IMU together with heat maps generated from various sensors, embedded in a small device, can provide step-by-step navigation for visually impaired users. Laser sensors can generate a point cloud and heat map of objects and city infrastructure to aid navigation.

 

We have insight into laboratory-developed systems that use an IMU (compass, gyroscope, and accelerometer). We can build a user interface for Android smartphones coupled to the IMU over Bluetooth, and we have tested such systems in poor GPS reception conditions across various locations. Two other GPS applications (Navigation and Ariadne GPS) can be used to test the best way of presenting the information: either car-style guidance (“turn left or right at 100 meters… ”) or “heading and distance to the final destination: 2 o’clock, 150 meters”.
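For illustration, the sketch below turns a GPS fix, an IMU/compass heading, and a destination into the “heading and distance” cue described above (e.g. “2 o’clock, 150 meters”). The haversine Earth radius and clock-face mapping are the only assumptions, and the function names are hypothetical.

```python
# Minimal sketch: compute the "heading and distance to destination" cue
# from a GPS fix and an IMU/compass heading (names are illustrative).
import math

def bearing_and_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance (m) and initial bearing (deg) from point 1 to point 2."""
    R = 6371000.0  # mean Earth radius in metres (assumption)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    a = math.sin((p2 - p1) / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    brg = math.degrees(math.atan2(math.sin(dl) * math.cos(p2),
                                  math.cos(p1) * math.sin(p2)
                                  - math.sin(p1) * math.cos(p2) * math.cos(dl)))
    return dist, brg % 360

def clock_cue(lat, lon, heading_deg, dest_lat, dest_lon):
    """Express the destination as an hour on a clock face relative to the
    user's IMU/compass heading, plus the remaining distance."""
    dist, brg = bearing_and_distance(lat, lon, dest_lat, dest_lon)
    rel = (brg - heading_deg) % 360
    hour = int(round(rel / 30)) % 12 or 12   # 12 o'clock means straight ahead
    return f"{hour} o'clock, {dist:.0f} meters"
```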

 

The main finding is that navigation for the visually impaired in cities improves when an IMU is coupled with a compass and a GPS antenna, and that presenting “heading and distance to the final destination” is helpful.

 

AI Integration

 

It is important that predictive algorithms are used so that navigation improves as more data is gathered. Convolutional Neural Networks can then process this large volume of data and the relevant inputs, helping to generate a more precise heat map and a general map for step-by-step navigation.
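As one possible illustration (an assumed architecture, not a production model), a small convolutional network could take the coarse accumulated heat-map grid as input and predict a refined occupancy map for step navigation:

```python
# Minimal sketch (assumed architecture): a small CNN that refines a coarse
# heat-map grid into a per-cell occupancy probability map.
import torch
import torch.nn as nn

class HeatMapRefiner(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1), nn.Sigmoid(),  # occupancy probability per cell
        )

    def forward(self, coarse_grid):
        # coarse_grid: (batch, 1, rows, cols) normalised hit counts
        return self.net(coarse_grid)

# Example: refine a single 400x400 grid of normalised counts
model = HeatMapRefiner()
refined = model(torch.rand(1, 1, 400, 400))
```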