On autopilot. Just goes to show the elegance of the human design, even in its most basic form. https://www.usatoday.com/story/tech...volved-autopilot-distracted-driver/609596002/
Hey folks! Step right up! For a mere $75,000 (more if you want a radio), you too can be a Tesla Beta autopilot tester! Fame and fortune await! When those highly flammable batteries mounted directly under your seat burst into flames, you are guaranteed nationwide news coverage. If the autopilot drives over a woman pushing a bicycle, your face and name will be known around the world (yes, I know that wasn't a Tesla). Just think about it: the world's largest Boolean tree will be sending your car smoothly down the road, or deftly into a K-rail or fire truck. Over 6,000 people a month are smart enough to see that this is a once-in-a-lifetime opportunity, and you should too!
The classic Boolean tree was a piece of software called 'What is the difference between an alligator'. It used simple Boolean switching between branches based on answers to questions it asked, and would, over time, guess an object that you had picked in your mind. If you picked an item that the software had never encountered, then when it ran to the end of a branch it would give up, ask what the correct answer was, and then ask for a difference between its best guess and what you were thinking of. This built an ever larger database as it 'learned' new answers. This was early-'70s computer science.

There is a huge flaw, however. If a person lies to the software and someone else guesses the same thing shortly thereafter, the lie becomes a dead end; but if people keep guessing different things, the lie sits there while the data tree grows, until the lie is caught and a large amount of data gets excised. Now consider if the new response was also a lie: then all that data is lost, but nothing is gained, except even more problems on a new branch.

Modern software essentially relies on the theory behind Wikipedia: that, on average, over time, the truth will overwhelm the lies. So extremely large databases are inspected together and compared to come up with a solution. Now, imagine that a group of teenagers takes a cardboard cutout of a cat and shoves it out in front of a Tesla a few hundred times, with the car receiving no collision inputs after hitting it. That would train the car, and all Teslas and other self-driving cars, that cats are an artifact to be ignored, because most of the time when a cat is seen, it doesn't run in front of the car.
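The learning scheme described above can be sketched in a few lines: a binary tree with yes/no questions at the internal nodes and guesses at the leaves, where a wrong guess at a leaf gets replaced by a new distinguishing question. This is a minimal sketch, not the original '70s program; the names (`Node`, `play`, `learn`) and the scripted `answer` function standing in for interactive prompts are my own invention for illustration.

```python
class Node:
    """A question (internal node) or a guess (leaf) in the game tree."""
    def __init__(self, text, yes=None, no=None):
        self.text = text
        self.yes = yes
        self.no = no

    def is_leaf(self):
        return self.yes is None and self.no is None


def play(root, answer):
    """Walk the tree using answer(prompt) -> bool.
    Returns (parent, leaf, branch) so a wrong guess can be replaced."""
    parent, node, branch = None, root, None
    while not node.is_leaf():
        parent, branch = node, answer(node.text)
        node = node.yes if branch else node.no
    return parent, node, branch


def learn(parent, leaf, branch, new_item, new_question, item_answer):
    """Replace a failed leaf with a question distinguishing the new item."""
    q = Node(new_question)
    if item_answer:                 # the new item answers "yes" to the question
        q.yes, q.no = Node(new_item), leaf
    else:
        q.yes, q.no = leaf, Node(new_item)
    if branch:                      # hang the new question off the parent
        parent.yes = q
    else:
        parent.no = q
```

For example, a tree that only knows "Does it live in water?" (yes: alligator, no: cat) will wrongly guess "a cat" when you are thinking of a dog; after `learn(..., "a dog", "Does it bark?", True)`, it gets it right. The flaw the post describes is visible here too: `learn` trusts whatever it is told, so a lie becomes a permanent branch until someone prunes it.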
The first program I wrote and executed on a computer was a calculator for pi. The card punch we had was a manual one: the punch positions had to be set for each card, one at a time, and a lever pulled. There was a dumb terminal at my high school and a card reader. A lot of work, but feeding that stack of cards in and having pi start showing up, calculated out to 20 places, a few minutes later was amazing. Needless to say, there have been some minor technological advances since 1972.
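The post doesn't say which method the punch-card program used, but here is one plausible way to get pi to 20 places with nothing fancier than integer arithmetic: Machin's formula, pi = 16·arctan(1/5) − 4·arctan(1/239), evaluated with a scaled Taylor series. A sketch, with extra guard digits to absorb truncation error:

```python
def arctan_inv(x, digits):
    """arctan(1/x) scaled by 10**(digits+10), via the alternating
    Taylor series arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ..."""
    scale = 10 ** (digits + 10)      # 10 guard digits against rounding error
    total = 0
    power = scale // x               # scaled (1/x)^n, starting at n = 1
    n, sign = 1, 1
    while power:
        total += sign * (power // n)
        power //= x * x              # advance (1/x)^n by two powers
        n += 2
        sign = -sign
    return total


def pi_digits(digits=20):
    """Return pi truncated to `digits` decimals, as an integer
    (i.e. pi with the decimal point removed)."""
    scaled = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    return scaled // 10 ** 10        # drop the guard digits
```

`pi_digits(20)` returns the integer form of 3.14159265358979323846. Python's arbitrary-precision integers make this trivial today; in 1972 the hard part would have been doing the multi-word arithmetic by hand.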
Are you kidding? They were a pain (especially if you dropped your freshly collated, ready-to-run job stack), but I had fun punching 80-column cards. My only problem in the actual punching was that I can't type fast. That's when my classmate Gerry said that I was a "two-fingered hunt 'n' pecker."
Another success for auto-crash. http://ktla.com/2018/05/29/tesla-on-autopilot-crashes-into-laguna-beach-police-patrol-vehicle/ The batteries under the driver's seat failed to ignite, though.
Self-driving cars give me the heebie-jeebies. I ain't ever gonna buy one. But if we look at the real world in which we drive, even with the accidents these new units are having, I have to imagine the score is about even with some of the boneheads we share the road with. Some cars with human drivers don't make it very far before wreaking havoc somehow. Maybe Tesla and other companies should just let these things out the door of the factory and let them run about. Don't even put doors on 'em: no humans allowed.
That's two parked fire trucks and a parked police car (an SUV). Seems the Tesla ADAS sensors can't see large reflectors. That, and their drivers are morons who are way too trusting.
These cars, even when not working correctly, will save more lives than we'd save without them. Sad truth. Human error is the biggest cause of wrecks.
Humans do cause collisions, because they are the ones driving. The other claim is a very bold assertion. It is worth noting the shift by supporters of these things: at first they were claimed to be safer; now that has changed to "will be," because as soon as these things were let out of controlled situations, the death and collision rates rose well above average almost immediately. It is true that they are safer when operated by an alert, engaged driver, but human nature dictates that if something requires less attention, it receives less attention.
...and yet again: https://www.cnbc.com/2018/05/29/tesla-in-autopilot-mode-hit-a-parked-california-police-car.html