However, you do need to verify that the controller behaves the same in a real situation as it did in the simulation. For obvious reasons, I'm not naming the project, the vehicle provider, the controller provider, or even which airfield runway it was on.
The scenario was that a child appears from behind a wall and runs in front of the vehicle. The child's speed and timing were to be varied, as was the distance from the wall to the vehicle's track. The "child" was a set of clothes stuffed with bubble wrap and sticks, mounted on a plate with castors and pulled with a string. It was a windy day, and in one experiment the dummy child blew over in front of the vehicle just after it had correctly stopped.
Now, when I was a baby, my mother was a ninja at playing peek-a-boo, so I learned before I was a year old that when someone vanishes, they haven't really gone away. The AV has no mother, so when the dummy child vanished from sight in front of the vehicle, it waited a few seconds, deduced the "obstacle" had gone, and drove on.
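To make the failure mode concrete, here is a minimal sketch of the kind of naive track-timeout policy that could produce this behaviour. This is purely illustrative and assumed; I have no knowledge of the actual controller's internals, and every name and threshold here (`Track`, `should_resume`, `TRACK_TIMEOUT_S`) is hypothetical:

```python
from dataclasses import dataclass

# Hypothetical threshold standing in for "waited a few seconds".
TRACK_TIMEOUT_S = 3.0


@dataclass
class Track:
    track_id: int
    last_seen_s: float  # timestamp of the last detection of this object
    in_path: bool       # does this object block the planned path?


def should_resume(tracks: list[Track], now_s: float) -> bool:
    """Return True once no blocking track has been seen recently.

    The flaw: an object that vanished at close range directly in front
    of the vehicle (it fell over, below the sensors' field of view) is
    indistinguishable from one that genuinely left the scene. A safer
    policy would keep occluded close-range tracks alive -- object
    permanence -- until the space they occupied is positively confirmed
    clear, rather than simply ageing them out.
    """
    for t in tracks:
        if t.in_path and (now_s - t.last_seen_s) < TRACK_TIMEOUT_S:
            return False  # blocking object still considered present
    return True  # all blocking tracks have timed out -> drive on


# The dummy was last detected at t=10.0 s; by t=13.5 s the naive policy
# declares the path clear and the vehicle drives over it.
tracks = [Track(track_id=1, last_seen_s=10.0, in_path=True)]
assert not should_resume(tracks, now_s=11.0)  # still waiting
assert should_resume(tracks, now_s=13.5)      # "obstacle has gone"
```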
I'm told the sound of bubble wrap being driven over is exactly what you do not want to hear when it's dressed up as a child.
The issue is that we as people bring a huge amount of contextual knowledge of the world, such as knowing that when a child falls over, (a) it's not unexpected and (b) they haven't really gone away. A machine learning model for an AV controller has a very limited scope of knowledge, and hence things like this will happen. The extent of domain knowledge in ML training is a big, big issue, and not just in the AV world. I think it'll bring "artificial intelligence" hype down to the real world quite soon.
... and no, I wouldn't get in an AV either, unless it was at slow speed in a closed environment with AVs only and no other independently moving objects. And I would especially not trust anything Elon Musk says about the capabilities of his cars.