


To encourage efficient and safe driving, drivers can be screened and evaluated on their driving habits. Advanced sensor- and AI-based technologies such as DriverSense AI, which detect and monitor driver behaviour and fatigue levels, are emerging and make cars more intelligent at avoiding road accidents. Systems are being developed that monitor vehicles in real time, controlling the vehicle's speed and tracking the driver's fatigue level to prevent accidents. The primary components of such a system are microcontrollers together with sensors such as eye-blink, gas, impact, alcohol-detection, and fuel sensors. GPS and the Google Maps APIs track the vehicle's location, which can be sent to a number predefined in the system.
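As a rough illustration of the alert flow described above, the sketch below checks hypothetical sensor readings against thresholds and, when one is exceeded, formats the vehicle's GPS position into a message for the predefined number. All function names, field names, and threshold values are illustrative assumptions, not the actual firmware.

```python
# Hypothetical sketch of the sensor-to-SMS alert flow. Thresholds and
# sensor field names are assumptions for illustration only.

ALCOHOL_LIMIT = 0.08    # assumed alcohol-sensor threshold
EYE_CLOSED_SECS = 2.0   # assumed drowsiness threshold (seconds eyes closed)

def check_readings(readings):
    """Return a list of alert strings for any reading over its threshold."""
    alerts = []
    if readings.get("alcohol", 0.0) > ALCOHOL_LIMIT:
        alerts.append("alcohol detected")
    if readings.get("eye_closed_secs", 0.0) > EYE_CLOSED_SECS:
        alerts.append("driver drowsy")
    if readings.get("impact", False):
        alerts.append("impact detected")
    return alerts

def build_alert_message(alerts, lat, lon):
    """Compose the message sent to the predefined number, with a map link."""
    return (f"ALERT: {', '.join(alerts)}. "
            f"Location: https://maps.google.com/?q={lat},{lon}")

# Example: a drowsy-driver reading produces an alert with the location link.
msg = build_alert_message(check_readings({"eye_closed_secs": 3.1}),
                          12.9716, 77.5946)
```

In a real deployment the message would go out over a GSM module rather than being returned as a string; the structure of the check-then-notify loop is the same.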


In a single day, some 250,000 drivers fall asleep at the wheel, and at least one crash caused by driver fatigue occurs every 25 seconds.

In the United States, approximately 9 people are killed and more than 1,000 are injured each day in crashes involving a distracted driver.

To tackle this, there must be a system in place that can sense the driver's cognitive presence and drowsiness, for the sake of the driver's safety and that of his/her passengers.



DriverSense AI – An Answer to Driver Fatigue


Using a driver-facing camera, the system detects the driver's nuanced cognition levels using machine learning and neural networks.


We use a combination of parameters such as:


  • Lane Departure
  • Driver’s Eyes/Mouth Tracking (eye-blink rate, yawning, etc.)
  • Steering Monitoring


Together, these determine the ‘wakefulness’ of the driver.
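One simple way to fuse the three signals above into a single wakefulness score is a weighted penalty model: start from a fully awake score of 1.0 and subtract for lane departures, abnormal blink rates, yawns, and frequent steering corrections. The weights and thresholds below are assumptions for illustration, not the product's actual calibration.

```python
# Illustrative fusion of lane-departure, eye/mouth, and steering signals
# into one "wakefulness" score in [0, 1]. All weights are assumed values.

def wakefulness_score(lane_departures_per_min, blink_rate_hz,
                      yawns_per_min, steering_corrections_per_min):
    score = 1.0
    score -= 0.2 * min(lane_departures_per_min, 2)
    # Very low blink rates (fixed staring) and very high ones both
    # correlate with fatigue, so penalise either extreme.
    if blink_rate_hz < 0.1 or blink_rate_hz > 0.7:
        score -= 0.2
    score -= 0.1 * min(yawns_per_min, 3)
    score -= 0.05 * min(steering_corrections_per_min, 4)
    return max(score, 0.0)

# A drifting, rarely blinking, yawning driver scores well below an
# assumed alert threshold of 0.5.
fatigued = wakefulness_score(1, 0.05, 2, 3) < 0.5
```

In practice such weights would be learned by the neural network rather than hand-tuned, but the idea of combining independent cues into one score is the same.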


Sensing Fatigue


The system uses automotive-grade image sensors that capture infrared images of the driver’s eyes, then applies patented pupil-identification technology and a high-speed digital signal processor to analyse and identify whether the driver is drowsy or distracted.

The contactless setup and a sophisticated algorithm give the device the ability to assess the driver’s state even in tricky conditions, such as strong midday lighting or drivers wearing sunglasses.
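The patented pupil-identification pipeline is not public, but a widely used open technique for the same signal is the eye aspect ratio (EAR) over six eye landmarks: when the EAR stays below a threshold for several consecutive frames, the eyes are treated as closed. The sketch below shows that approach as a stand-in, with assumed threshold values.

```python
# Eye-aspect-ratio (EAR) drowsiness check, a common open alternative to
# proprietary pupil tracking. Landmark order follows the standard
# six-point eye convention; thresholds are assumed values.
import math

def ear(landmarks):
    """landmarks: six (x, y) eye points; p1/p4 are the horizontal corners,
    p2/p6 and p3/p5 the upper/lower lid pairs."""
    p1, p2, p3, p4, p5, p6 = landmarks
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def drowsy(ear_series, threshold=0.2, min_frames=15):
    """True if EAR stays below threshold for min_frames consecutive frames."""
    run = 0
    for e in ear_series:
        run = run + 1 if e < threshold else 0
        if run >= min_frames:
            return True
    return False
```

An open eye yields an EAR around 0.25–0.35; a closed eye drops toward zero, so a sustained run below the threshold signals drowsiness rather than a normal blink.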


Safe Lane Keeping & ADAS


The Advanced Driver Assistance System (ADAS), built on AI, machine learning, and our proprietary neural engine and computer-vision technology, reduces collision accidents by issuing early warnings when it detects potential collision dangers while driving. It also helps improve driving behaviour and overall driving safety.


Our ADAS Technology provides the following benefits:


  • Forward Collision Warning (FCW)
  • Lane Departure Warning (LDW)
  • Headway Monitoring & Warning (HMW)
  • Pedestrian Collision Warning (PCW)
  • Front Vehicle Departure Warning
  • Over Speed Warning (OSW)


Forward Collision Warning (FCW): When the system detects an imminent collision with the vehicle ahead, it triggers visual and audible warnings up to 2.7 seconds in advance.
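The core quantity behind a forward-collision warning is time to collision (TTC): the gap to the lead vehicle divided by the closing speed. The sketch below triggers the warning when TTC drops under the 2.7-second horizon mentioned above; the sensor inputs are assumed example values.

```python
# Minimal time-to-collision (TTC) check behind an FCW-style alert.
# Speeds in m/s, gap in metres; inputs are illustrative.

def time_to_collision(gap_m, ego_speed_ms, lead_speed_ms):
    closing = ego_speed_ms - lead_speed_ms
    if closing <= 0:
        return float("inf")  # not closing in: no collision course
    return gap_m / closing

def fcw_triggered(gap_m, ego_speed_ms, lead_speed_ms, horizon_s=2.7):
    return time_to_collision(gap_m, ego_speed_ms, lead_speed_ms) <= horizon_s
```

For example, a 25 m gap with a 10 m/s closing speed gives a TTC of 2.5 s, inside the warning horizon, so the alert fires.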


Headway Monitoring & Warning (HMW): When the vehicle is too close to the vehicle ahead, the system will issue visual and audible warnings.
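Headway monitoring differs from FCW in that it watches the *time gap* to the lead vehicle at the current speed, even when there is no closing speed. A sketch, using a rule-of-thumb 1.0-second minimum headway as an assumed threshold:

```python
# Headway (time-gap) check behind an HMW-style alert.
# Speeds in m/s, gap in metres; the 1.0 s minimum is an assumption.

def headway_seconds(gap_m, ego_speed_ms):
    if ego_speed_ms <= 0:
        return float("inf")  # stationary vehicle: no headway concern
    return gap_m / ego_speed_ms

def hmw_triggered(gap_m, ego_speed_ms, min_headway_s=1.0):
    return headway_seconds(gap_m, ego_speed_ms) < min_headway_s
```

At 25 m/s (90 km/h), a 20 m gap is only 0.8 s of headway, so the warning fires even though both cars may be travelling at the same speed.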


Front Vehicle Departure Warning: When the vehicle in front starts to move forward in a traffic jam or at a traffic light but your car remains still, the system issues visual and audible alerts.


Virtual Bumper: Designed to warn drivers of unintentional forward movement during a traffic jam, reducing the incidence of non-fatal fender-bender accidents. The system issues visual and audible warnings.


Over Speed Warning (OSW): Injuries caused by speeding are often fatal. When the system detects a vehicle speed above the predetermined limit, it triggers warning alerts to prompt the driver to slow down.


Lane Departure Warning (LDW): When the vehicle departs from its current lane without a turn signal, the system issues visual and audible alerts. The images below show an example of how lane mapping is done using the IR and optical cameras to create a point-cloud-like representation.
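On top of lane mapping, the warning logic itself reduces to a simple rule: given the detected lateral offsets of the left and right lane lines from the vehicle's centre line, warn when the vehicle drifts within a margin of either line while the turn signal is off. The offsets, sign convention, and margin below are assumptions for illustration.

```python
# Simplified LDW decision rule. Offsets are metres from the vehicle
# centre line (left negative, right positive); margin is an assumption.

def ldw_triggered(left_line_m, right_line_m, signal_on, margin_m=0.3):
    if signal_on:
        return False  # signalled: treated as an intentional lane change
    return (abs(left_line_m) < margin_m) or (abs(right_line_m) < margin_m)
```

A vehicle 0.2 m from the left line without signalling triggers the alert; the same drift with the indicator on is suppressed as an intentional lane change.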


Facial Recognition


  • Our platform provides facial recognition using IR cameras, geometrical face mapping, and a portrait database to ensure a personalised experience for drivers.
  • IR cameras and geometrical 3D mapping create a precise image of the person. Neural networks and machine learning improve recognition efficiency and accuracy through image classifiers and a GPU-based neural engine.
  • Facial recognition enables custom settings for different drivers of the car: each driver can have their own playlist, predetermined speed limits, automated climate control, and so on.
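The per-driver settings described above can be wired to recognition with a simple lookup: compare a face embedding from the camera pipeline against enrolled drivers by cosine similarity and apply the best match's profile. The embeddings, driver names, and similarity threshold below are made-up stand-ins for illustration.

```python
# Illustrative driver-profile lookup by face-embedding similarity.
# Embeddings, profiles, and the 0.8 threshold are assumed values.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

PROFILES = {
    "alice": {"embedding": [0.9, 0.1, 0.0], "playlist": "jazz", "speed_limit": 100},
    "bob":   {"embedding": [0.1, 0.9, 0.1], "playlist": "rock", "speed_limit": 120},
}

def identify_driver(embedding, threshold=0.8):
    """Return the enrolled driver whose embedding is most similar,
    or None if no similarity exceeds the threshold."""
    best, best_sim = None, threshold
    for name, profile in PROFILES.items():
        sim = cosine(embedding, profile["embedding"])
        if sim > best_sim:
            best, best_sim = name, sim
    return best
```

Real systems use high-dimensional embeddings from a trained network rather than three-element toy vectors, but the match-then-personalise flow is the same: once the driver is identified, their playlist, speed limit, and climate settings can be loaded.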

DriverSense AI | driver monitoring system