Driverless cars need to see the world like we do

In August, speaking to Bloomberg, artificial intelligence celebrity Andrew Ng posited that the quickest way to create reliable autonomous vehicles is to fix the pedestrians, not the cars. “What we tell people is, ‘Please be lawful and please be considerate,’” Ng said.

Ng’s remarks, which come at an especially sensitive time in the short history of driverless cars, caused a commotion in the AI community, drawing criticism and approval from different experts.
In recent months, self-driving cars have been involved in several incidents, one of which resulted in the death of a pedestrian.
Most researchers and AI experts agree that driverless cars still haven’t made enough progress to roam the streets without a redundant human driver supervising them, ready to grab the steering wheel if anything goes wrong.
But that’s about where the agreements end. There’s a large divide on when driverless cars will be road-ready, what the transition phase will be like and how to meet the challenges of autonomous driving.

How self-driving cars understand the world around them

For vehicles to be able to drive by themselves, they need to understand the world around them like (or better than) human drivers do, so they can navigate streets, stop at stop signs and traffic lights, and avoid hitting obstacles such as other cars and pedestrians.
The closest technology that can enable cars to make sense of their surroundings is computer vision, a branch of artificial intelligence that enables software to understand the content of images and video.
Modern computer vision has come a long way thanks to advances in deep learning, which enables it to recognize different objects in images by examining and comparing millions of examples and gleaning the visual patterns that define each object. While especially effective for classification tasks, deep learning suffers from serious limitations, and it can fail in unpredictable ways.
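To make the idea concrete, here’s a minimal sketch of deep-learning-based image classification using a standard pretrained network. It assumes the PyTorch and torchvision libraries; it is illustrative only, not the perception software running in any actual car:

    # Minimal image-classification sketch with a standard pretrained network.
    # Illustrative only; not the perception stack of any actual vehicle.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.eval()  # inference mode

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    img = preprocess(Image.open("street_scene.jpg")).unsqueeze(0)  # batch of 1
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)
    print(probs.argmax(dim=1))  # index of the most likely object class

The network outputs a probability for every class it was trained on; it has no notion of why a stop sign looks like a stop sign, only of the pixel patterns that correlated with one during training.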
Those unpredictable failures mean that your driverless car might crash into a truck in broad daylight, or worse, accidentally hit a pedestrian. The current computer vision tech used in autonomous vehicles is also vulnerable to adversarial attacks, in which hackers manipulate the AI’s input channels to force it to make mistakes.
For instance, researchers have shown that they can trick a self-driving car into failing to recognize stop signs by sticking black-and-white labels on them.
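The stickers are a physical attack, but the same weakness can be demonstrated digitally. Below is a sketch of the classic fast gradient sign method (FGSM), which nudges every pixel slightly in the direction that increases the model’s error; model and img are assumed to be the objects from the previous sketch:

    # Fast gradient sign method (FGSM), a classic digital adversarial attack.
    # `model` and `img` are assumed from the previous sketch.
    import torch
    import torch.nn.functional as F

    img.requires_grad_(True)
    output = model(img)
    label = output.argmax(dim=1)          # the class the model currently predicts
    loss = F.cross_entropy(output, label)
    loss.backward()                       # gradient of the loss w.r.t. the pixels

    epsilon = 0.01                        # perturbation size, chosen for illustration
    adv_img = (img + epsilon * img.grad.sign()).detach()  # tiny pixel-level nudge

    print(model(adv_img).argmax(dim=1))   # often no longer matches `label`

The perturbation is small enough that a human would see the same image, yet it can flip the model’s prediction, which is exactly why such failures are so hard to anticipate.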
One day, AI and computer vision might become good enough to avoid the erratic mistakes that driverless cars currently make. But we don’t know when that day will come, and the industry is divided on what to do until then.

Improving the computer vision technology of driverless cars

Tesla, the company led by the eccentric Elon Musk, believes it can overcome the limits of the artificial intelligence that powers autonomous vehicles by throwing more and more data at it. That is based on the general rule that the more quality data you provide to deep learning algorithms, the better they become at performing their specific tasks.
Tesla has equipped its vehicles with an array of sensors and is collecting as much data from them as it can. This enables the company to constantly train its AI on what it gathers from the hundreds of thousands of Tesla cars driving the streets in different parts of the world.
The reasoning is that, as its AI improves, Tesla can roll out new updates to all its vehicles and make them better at performing their autonomous driving functions. The benefit of this model is that it can all be packed into a consumer-level vehicle. It doesn’t need any additional, costly hardware attached to the car.
To be fair, this is a model that only a company like Tesla can pull off. Like many other things, automobiles are going through a transition as computation and connectivity become ubiquitous. In this regard, Tesla is further along than other companies, because rather than being an automobile manufacturer trying to adapt itself to new tech trends, it’s a tech company that manufactures cars.
Tesla’s cars are in fact computers on wheels, which the company can constantly upgrade with over-the-air software updates, a feat that is more difficult for other companies to pull off.
This means Tesla will be able to gradually improve its vehicles’ self-driving capabilities as it gathers more data and continues to train and refine its AI models.
Tesla also has the opportunity to train its AI through “shadow driving,” where the AI passively monitors a driver’s decisions and weighs them against how it would have acted in the same situation in self-driving mode.
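In code terms, shadow mode is essentially a compare-and-log loop. Here’s a sketch of the idea with hypothetical names (planner.propose, log_disagreement); Tesla hasn’t published its actual implementation:

    # Illustrative shadow-driving loop; all names here are hypothetical,
    # since the real system is proprietary.
    STEERING_TOLERANCE = 0.2  # assumed threshold, in normalized steering units

    def shadow_step(sensor_frame, human_action, planner, log_disagreement):
        """Compare what the AI would have done against what the human did."""
        ai_action = planner.propose(sensor_frame)  # the AI decides but never actuates
        if abs(ai_action.steering - human_action.steering) > STEERING_TOLERANCE:
            # Disagreements are the valuable training examples: moments where
            # the model would have behaved differently from the human.
            log_disagreement(sensor_frame, human_action, ai_action)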
All of this works as long as the computer vision problem is one that can be fixed with more data and better training. Some scientists believe we need to think of AI technologies beyond deep learning and neural networks. In that case, Tesla would need to rework the specialized AI hardware that supports the self-driving functionality of its vehicles.

Equipping self-driving cars with complementary technologies

Google and Uber, two other companies that have invested heavily in self-driving technology, have banked on several technologies to compensate for the shortcomings of driverless cars’ computer vision AI. Chief among them is “light detection and ranging” (lidar).
Lidar is an evolving domain, and various companies use different technologies to perform its functions. Lidar patents and intellectual property were at the center of a long legal battle between Google and Uber that was settled for $245 million earlier this year.
In a nutshell, lidar works by sending millions of laser pulses in slightly different directions and building a 3D representation of the area surrounding the car based on the time it takes for each pulse to hit an object and return. This is the revolving cylinder you see on top of some self-driving cars (not all lidars look like that, but it has sort of become an icon of the industry).
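The underlying math is simple time-of-flight geometry: a pulse that returns after time t has traveled to the object and back, so the range is c·t/2. Here’s a minimal sketch, with made-up illustrative inputs for the angles and timing:

    # Time-of-flight ranging: turn a pulse's round-trip time and firing
    # direction into a 3D point. All input values are made up for illustration.
    import math

    C = 299_792_458.0  # speed of light, in meters per second

    def pulse_to_point(round_trip_s, azimuth_rad, elevation_rad):
        r = C * round_trip_s / 2.0  # one-way distance in meters
        x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
        y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
        z = r * math.sin(elevation_rad)
        return (x, y, z)

    # A pulse that returns after ~200 nanoseconds hit something ~30 meters away.
    print(pulse_to_point(200e-9, math.radians(45), math.radians(2)))

Repeat this millions of times per second across a sweep of directions and you get the dense 3D point cloud the car navigates by.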
In addition to lidar, these companies also use radar to detect different objects around the car and evaluate traffic and road conditions.
Adding all these technologies surely makes these vehicles much better equipped than Tesla’s computer-vision-only approach. However, it doesn’t make their technology flawless. In fact, an accident that made headlines earlier this year involved an Uber vehicle that was in self-driving mode.
Moreover, the approach of Google and Uber makes it a lot costlier and harder to deploy driverless cars on roads. Google and Uber have driven millions of miles with their self-driving technology and have gathered a lot of data from roads, but that doesn’t begin to rival the amount of data that the hundreds of thousands of Tesla vehicles already sold are collecting. Also, adding all that gear to a car costs a lot.
Lidars alone add somewhere between $7,000 and $85,000 to the cost of a car, and their form factor is not very appealing. Add to that the costs of all the other sensors and gear that must be tacked onto the vehicle post-production, and you might be doubling or tripling the cost of your car.
If scientists manage to crack the code of computer vision and create AI that can understand the surrounding world as well as human drivers can, then Tesla will be the winner of the race: it already has tons of data, and all it’ll need to do is roll out a new update for all its cars to become capable of near-perfect autonomous driving.
On the other hand, if the current trends of narrow AI never manage to perform on par with human drivers, then Google and Uber will be the winners, provided they manage to bring down the costs of lidar and other driverless car gear. Automobile manufacturers could then move toward equipping their vehicles with self-driving technology without dramatically raising costs.

Advancing autonomous driving by fixing the pedestrians

Andrew Ng is one of a handful of AI thought leaders who think that the shortcut to autonomous driving is to prevent pedestrians from causing driverless cars to behave in unexpected ways.
It basically means that if you’re jaywalking and an autonomous vehicle hits you, it’s your own fault. At the extreme, this would practically turn cars into trains, where pedestrians are responsible for whatever happens to them if they stand on the railroad.
Setting a strict rule of conduct for pedestrians and limiting their movements on roads will surely make the environment much more predictable and navigable for self-driving cars.
But not everyone is convinced by this proposition, and many call it into question, including New York University professor Gary Marcus, who says the approach of changing human behavior will only “move the goal posts.”
Rodney Brooks, another AI and robotics legend, also dismisses Ng’s proposition. “The great promise of self-driving cars has been that they will eliminate traffic deaths,” he says, adding that Ng is positing “that they will eliminate traffic deaths as long as all humans are trained to change their behavior?” If we could change human behavior so easily, the thought goes, we wouldn’t need autonomous cars to eliminate traffic deaths.
But Ng doesn’t think that moving the goal posts is an absurd idea, arguing that humans have historically adapted to new technology, just as they did with railroads. The same can very well happen with driverless cars.
Whatever the case, a compromise will probably help smooth the transition while the technology develops and self-driving cars become the norm: something between fully intelligent cars that can respond to every possible scenario (such as a pedestrian suddenly jumping into the middle of the street on a pogo stick) and a railroad-style setting where pedestrians are completely prohibited from moving in areas where autonomous vehicles drive.

Adapting city infrastructures for self-driving cars

Another solution to meet the challenges of driverless cars is to fix the roads and environments that they will be operating in. This too has a precedent.
For instance, with the advent of automobiles, roads were built and upgraded to suit vehicles running on tires at high speeds. With the advent of airplanes, airports were built. In cities where bicycles are popular, separate lanes were created for them.
So what is the infrastructure for driverless vehicles? Academics from Edinburgh Business School, writing in Harvard Business Review, propose creating smart environments for self-driving cars.
Currently, driverless cars have no way to interact with their environment; everything they learn comes from their own sensors, such as lidar, radar and video feeds. By incorporating internet of things (IoT) elements into roads, bridges and other components of city infrastructure, we can make them more understandable to self-driving cars.
For instance, installing sensors at specific intervals along the sides or middle of roads can help driverless cars locate the road’s boundaries regardless of whether it is clear, covered with snow or mud, or buried under two inches of flood water.
Sensors can also provide self-driving cars with information about road and weather conditions, such as whether they’re slippery and require more prudent driving.
Driverless cars also need to be able to perform machine-to-machine (M2M) communications with other manual or autonomous vehicles in their vicinity. This will help them coordinate their movements and avoid collisions.
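Real deployments would rely on standards such as DSRC or cellular V2X, but as a toy illustration of the kind of state each vehicle might broadcast to its neighbors, consider this sketch (the field names are hypothetical):

    # Toy sketch of a vehicle-to-vehicle state broadcast. Real systems use
    # standards such as DSRC or C-V2X; these field names are hypothetical.
    import json
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class VehicleState:
        vehicle_id: str
        lat: float
        lon: float
        speed_mps: float     # speed in meters per second
        heading_deg: float   # compass heading
        timestamp: float

    state = VehicleState("car-42", 37.7749, -122.4194, 12.5, 270.0, time.time())
    payload = json.dumps(asdict(state))  # what would be broadcast to nearby cars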
One of the challenges of this model is that vehicles live for decades. This means that cars that are manufactured today will still be on roads in the 2030s. So you can’t expect every single vehicle to be equipped with sensors and M2M capabilities. Also, we can’t expect all the roads in the world to suddenly grow smart sensors.
But driverless cars, which are currently very limited in number, can be equipped with technology to probe for smart sensors in their vicinity and, where they exist, interact with them to provide a safer experience. And when they can’t find any standard smart sensors in their environment, they can fall back on their own onboard gear to navigate.
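That probe-and-fallback behavior might look something like the following sketch; no standard roadside-sensor API exists yet, so every name here is hypothetical:

    # Hypothetical probe-and-fallback perception logic; there is no standard
    # roadside-sensor API yet, so these names are purely illustrative.
    def perceive(road_network, onboard_sensors, position):
        beacons = road_network.discover_beacons(position, radius_m=200)
        if beacons:
            # Fuse infrastructure data with onboard perception when available.
            readings = [beacon.read() for beacon in beacons]
            return onboard_sensors.scan().merge(readings)
        # Otherwise default to the car's own lidar/radar/camera stack.
        return onboard_sensors.scan()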

When will driverless cars become the norm?

There are different estimates of how long it will take for driverless cars to be driving in the streets alongside manual and semi-autonomous vehicles. But it has become evident that overcoming the challenges is much more difficult than we first thought.
Our cars might one day become smart enough to be able to address every possible scenario. But it won’t happen overnight, and it will likely take several steps and phases at different levels. In the interim, we need technologies and practices that will help smooth the transition until we can have autonomous vehicles that can make our roads safer, our cities cleaner and our commute less costly.
This story is republished from TechTalks, the blog that explores how technology is solving problems… and creating new ones.
Source: The Next Web