Recently, a group of researchers from Georgia Tech and Ben-Gurion University of the Negev conducted an experiment to see whether Tesla's Autopilot system is as good as advertised, and the result is somewhat, though not completely, disappointing. They used a cheap projector to cast the image of a pedestrian resembling the "Slender Man" onto the road. While a Tesla Model X was driving toward it, the projection forced the car to change its behavior and slow down to the speed limit indicated by a projected road sign. Eventually, however, the system figured out that it was only a projection, not an actual object.
The experiment shows that these so-called "phantom objects" can be used in harmful ways: an attacker would not even have to hack Tesla's system to make the car change its behavior (e.g., stop or turn). The attacker could simply mount a projector on a drone and cast a phantom onto the road to trick the system into thinking there is an obstacle ahead. This not only creates a security problem but also makes it difficult to track criminals who attempt to use such a method to attack people. Because the attack is carried out with a drone, no IP address is exposed and no face can be recognized, so it is much harder for authorities to track these attackers.
For a car's autopilot system to be safer and more widely accepted, the company must secure every cybersecurity aspect by any means available; otherwise, driving on autopilot would be not less, but more, dangerous than manual control.