
One of the biggest challenges in the popularization of autonomous vehicles is safety and reliability. To ensure safe driving for the user, it is crucial that the autonomous vehicle precisely and efficiently monitors and recognizes its environment, as well as any safety risks to its occupants.
While Tesla does its best not to publish the disengagement data that other companies developing autonomous driving systems provide, a group of Tesla FSD beta testers has been reporting this data independently for some time.
Based on this limited data set, the Tesla FSD Beta can only drive a few kilometers between disengagements, while other autonomous driving programs such as Waymo and Cruise report averages of tens of thousands of kilometers between disengagements.
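The comparison above rests on a simple metric: distance driven per disengagement. As a minimal sketch (the session format and all numbers here are made up for illustration, not real Tesla or Waymo figures), it can be computed like this:

```python
# Hypothetical sketch: computing kilometers per disengagement from logged
# test sessions. Each session is assumed to be a tuple of
# (kilometers_driven, disengagement_count).

def km_per_disengagement(sessions):
    """Aggregate sessions into one km-per-disengagement figure."""
    total_km = sum(km for km, _ in sessions)
    total_disengagements = sum(d for _, d in sessions)
    if total_disengagements == 0:
        return float("inf")  # no disengagements observed in the log
    return total_km / total_disengagements

# Illustrative numbers only: three short test sessions
sessions = [(120.0, 24), (80.0, 16), (50.0, 10)]
print(km_per_disengagement(sessions))  # 5.0 km between disengagements
```

Aggregating before dividing (rather than averaging per-session ratios) weights every kilometer equally, which matters when sessions differ greatly in length.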
One of the methods Waymo uses to assess driving safety is scenario-based testing – a combination of virtual driving, closed-course testing, and real-world driving.
To identify appropriate test scenarios, they use existing driving data from Waymo's years of experience, crash data such as police accident databases and collisions captured on dashcams, and expertise in the operational design domain, including geographic areas, driving conditions, and road types. Over time, Waymo continues to add new and representative scenarios it encounters on public roads and in simulations, or as it expands into new territories.
The Waymo scenario database, developed since 2016, is based on millions of kilometers driven on public roads as well as thousands of real accidents, and offers broad coverage of dangerous situations. Since the most common types of accidents are similar no matter where you drive, the database can be used as a reference for any city, allowing faster scaling. It covers a wide range of common situations that can occur almost anywhere, such as a pedestrian crossing against a signal or a car backing out of an alley.
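The key property described above is that scenarios are indexed by situation type rather than by location, so the same database transfers to a new city. A minimal sketch of that idea (the record fields, tags, and query interface are assumptions for illustration, not Waymo's actual schema):

```python
# Hypothetical scenario database keyed by situation tags rather than
# geography, so queries transfer to any new territory.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    source: str               # e.g. "public_road_log", "police_crash_db"
    tags: set = field(default_factory=set)

class ScenarioDatabase:
    def __init__(self):
        self._scenarios = []

    def add(self, scenario):
        self._scenarios.append(scenario)

    def matching(self, *tags):
        """Return scenarios carrying all of the given tags."""
        wanted = set(tags)
        return [s for s in self._scenarios if wanted <= s.tags]

db = ScenarioDatabase()
db.add(Scenario("pedestrian_crosses_against_signal", "public_road_log",
                {"pedestrian", "intersection"}))
db.add(Scenario("car_backs_out_of_alley", "police_crash_db",
                {"vehicle", "occlusion"}))
print([s.name for s in db.matching("pedestrian")])
```

Because the lookup never mentions a place, a test plan for a new city is just a query over the same tags.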
In a recent study published in IEEE Transactions on Intelligent Transportation Systems, a group of international researchers led by Professor Gwanggil Jeon of Incheon National University, Korea, developed an intelligent end-to-end system for real-time 3D object detection, based on deep learning and specialized for self-driving situations.
“We designed a detection model based on YOLOv3, a well-known object identification algorithm. The model was first used for 2D object detection and then modified for 3D objects,” explains Prof. Jeon.
The team fed the collected RGB images and point cloud data into YOLOv3, which in turn output classification labels and bounding boxes with confidence scores. They then tested its performance on the Lyft data set. The initial results showed that YOLOv3 achieved extremely high detection accuracy (>96%) for both 2D and 3D objects, surpassing other state-of-the-art detection models.
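The outputs described above – class labels plus bounding boxes with confidence scores – are typically post-processed with a confidence threshold before use. A minimal sketch of that step (the `Detection` structure, box layout, and threshold value are assumptions for illustration, not the authors' actual pipeline):

```python
# Hypothetical post-processing of detector outputs: keep only detections
# whose confidence score clears a threshold, most-confident first.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float         # score in [0, 1]
    box_3d: tuple             # assumed (x, y, z, width, height, depth), ego frame

def filter_detections(detections, threshold=0.5):
    """Discard low-confidence detections and sort the rest descending."""
    kept = [d for d in detections if d.confidence >= threshold]
    return sorted(kept, key=lambda d: d.confidence, reverse=True)

raw = [
    Detection("car", 0.97, (4.2, 0.1, 0.0, 1.8, 1.5, 4.5)),
    Detection("pedestrian", 0.88, (2.0, -1.3, 0.0, 0.6, 1.7, 0.5)),
    Detection("car", 0.31, (15.0, 3.0, 0.0, 1.8, 1.5, 4.4)),  # dropped: below threshold
]
print([d.label for d in filter_detections(raw)])  # ['car', 'pedestrian']
```

In a real-time system this filtering runs per frame, and the threshold trades off missed objects against false alarms.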
This method can be applied to autonomous cars, autonomous parking, autonomous delivery, and future autonomous robots, as well as to applications requiring the detection, tracking, and visual localization of objects and obstacles.
