Your Future Car Will Have AI That Predicts Its Breakdowns

By listening for sounds of trouble, neural networks could make all vehicles safer, especially driverless ones

Driverless Cars
Vocativ
Dec 27, 2016 at 5:02 PM ET

A new “predictive maintenance” startup has developed an artificial intelligence system that can decipher your car’s weird sounds better and earlier than you can.

The Israel-based company, 3DSignals, uses neural networks to learn how a machine should sound when it’s functioning properly; the goal is for the system to detect any deviation that might signal a problem is about to arise. According to IEEE Spectrum, the company is in discussions with European automakers about using the technology in self-driving ride-sharing vehicles. “If you’re a passenger in a driverless taxi, you only care about getting to your destination and you’re not reporting maintenance problems,” 3DSignals co-founder Yair Lavi told IEEE Spectrum. “So actually having the 3DSignals solution in autonomous taxis is very interesting to the owners of taxi fleets.”

Neural networks mimic human thinking by taking in massive amounts of images, words or other data and using that information to “learn” the patterns behind a desired outcome. For instance, if someone feeds a neural network millions of pictures of dogs, the algorithm can learn to pick out a poodle. In the last couple of years, many tech companies have invested heavily in neural network research and development. Facebook’s and Google’s facial recognition software relies on neural networks, and Google Translate was recently bolstered with the help of this form of artificial intelligence. A recent New York Times Magazine feature predicts Google’s advancements in this field could “reinvent computing itself.”
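To make that training process concrete, here is a minimal sketch in PyTorch of a network learning from labeled examples. The data, labels and architecture are placeholders chosen for illustration; they are not drawn from any of the companies mentioned above.

```python
# Illustrative sketch of the pattern-learning idea: a small neural network
# trained on labeled examples. All data here is random placeholder data.
import torch
import torch.nn as nn

# Placeholder data: 64 "images" of 3x32x32 pixels, each labeled dog (1) or not (0).
images = torch.randn(64, 3 * 32 * 32)
labels = torch.randint(0, 2, (64,))

# A tiny fully connected classifier.
model = nn.Sequential(
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Repeatedly adjust the network's weights so its predictions match the labels.
for epoch in range(10):
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()

# Trained on enough real labeled photos instead of random noise, the same
# loop is what lets a network "pick out a poodle" from pixels it has never seen.
```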

But relatively little research has been done on neural networks that study sound patterns. Lavi told IEEE Spectrum that he wants 3DSignals to be the “world leader in general acoustics deep learning.”

The company is currently working with a major steel manufacturer and a renewable energy company. Ultrasonic microphones in those factories and plants are recording audio so the neural network can learn how the machines are supposed to sound. Initially, these companies are only benefiting from a basic feature that predicts when certain machine parts may wear out, based on physics models. But as the system records more data, it will be able to detect unusual sounds. Clients can then train the system by labeling the sound patterns that correspond to particular problems.
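3DSignals has not published the details of its models, but a common way to build this kind of acoustic monitoring is to train an autoencoder on spectrogram frames of a healthy machine and flag any sound the model cannot reconstruct well. The sketch below, written in PyTorch with placeholder audio, illustrates that general approach rather than the company's actual system.

```python
# A sketch of one common acoustic anomaly-detection recipe (not 3DSignals'
# published method): learn to reconstruct "healthy" spectrogram frames,
# then score new audio by reconstruction error.
import numpy as np
import torch
import torch.nn as nn

def spectrogram_frames(signal, frame_len=256):
    """Split a 1-D audio signal into frames and take magnitude FFTs."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.abs(np.fft.rfft(frames, axis=1)).astype(np.float32)

# Placeholder "healthy machine" audio; in practice this would come from
# the ultrasonic microphones on the factory floor.
healthy_audio = np.random.randn(48000)
normal = torch.from_numpy(spectrogram_frames(healthy_audio))

n_bins = normal.shape[1]
autoencoder = nn.Sequential(
    nn.Linear(n_bins, 32), nn.ReLU(), nn.Linear(32, n_bins)
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Learn what "normal" sounds like by reconstructing healthy spectra.
for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(autoencoder(normal), normal)
    loss.backward()
    optimizer.step()

# Score new audio: frames the model reconstructs poorly are flagged as unusual.
def anomaly_scores(audio):
    frames = torch.from_numpy(spectrogram_frames(audio))
    with torch.no_grad():
        return ((autoencoder(frames) - frames) ** 2).mean(dim=1)

new_audio = np.random.randn(48000)
print(anomaly_scores(new_audio).max())
```

In a workflow like the one the article describes, the frames with the highest scores would be surfaced to the client, who could then label recurring patterns (say, a hypothetical "worn bearing" sound) so the system learns to recognize specific faults.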

At the moment, the systems are only being used on industrial machinery, but Lavi believes the technology will soon be adopted by auto manufacturers.