Until recently, Google's self-driving cars have been limited to real-world activity only in certain states (i.e., not on the federal level), and only under certain conditions, Quartz notes. But this week the National Highway Traffic Safety Administration posted a Feb. 4 letter saying Google's self-driving system, powered by artificial intelligence, could be considered the legal driver of the car—a move that could "substantially streamline" getting autonomous vehicles on the road, an auto research analyst tells Reuters in an exclusive report. "NHTSA will interpret 'driver' in the context of Google's described motor vehicle design as referring to the (self-driving system), and not to any of the vehicle occupants," the agency says in its letter. The letter responds to Google's proposed design for a self-driving car that doesn't require a driver, which the company submitted in November along with a request that the NHTSA reconsider how it views such vehicles.
It's a big step for Google and other companies developing autonomous vehicles, expediting a process they say has been hampered by safety mandates at both state and federal levels. Google's cars, for instance, are currently required to have auto safety features such as brake pedals and steering wheels so the human driver (another requirement) would be able to take over the car's operation if need be—which Google insists can lead to trouble. The company "[expressed] concern that providing human occupants of the vehicle with mechanisms to control things … could be detrimental to safety because the human occupants could attempt to override the (self-driving system's) decisions," the NHTSA letter notes. Re/code, however, notes the NHTSA letter isn't law, but simply a "clarification" of how the law can be interpreted in the future. The agency still has to work up new guidelines for the self-driving cars, which it hopes to accomplish in six months or so, per Reuters. (A Google car got pulled over for going too slow.)