More anecdotes. Every car company has lawsuits. A moment's reflection on what automated driving entails will tell you that it is bound to attract such cases, regardless of where the fault lies.
The problem is that Tesla is using its purchasers to field-test its AI. I wonder how many owners who like their Teslas so much know that. There is a very wealthy fellow named Dan O'Dowd running a quixotic campaign for governor of California because he wants to publicize one issue. Mr. O'Dowd runs a software company that creates the behind-the-scenes operating systems in a number of consumer products, including cars. He is beyond furious that Tesla is testing its auto-drive software on public roads with unsuspecting Tesla buyers at the wheel, thinking all is well and already safety-tested.
As far as I know, it is with the permission of the Tesla owners. It is true that they almost certainly do not completely understand the risks, but how could they? Even Tesla doesn't know the risks with any kind of certainty. That still doesn't mean that Tesla is reckless. I think some have unreasonable expectations of the process. It's not like Tesla could test the software 100% using its own test drivers and promise no problems after delivery.
We also know that self-driving has the potential to reduce accidents compared to human drivers. This is hard to measure right now, since self-driving is mostly used on highways, where it is probably already safer than human drivers. It also requires that the driver be ready to take over if needed, but that requirement is often violated. Perhaps it is unreasonable to expect this of a human driver; staying alert while not driving is just not something people are good at. As with all technology, buyers have to be smart.
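To make the measurement problem concrete, here's a toy calculation. Every number below is invented for illustration, not real crash data. Because driver-assist miles are concentrated on highways, which are safer per mile for everyone, a system can look better in the overall average even when it is worse on each road type:

```python
# Toy illustration (all numbers invented, not real crash data): why raw
# per-mile comparisons between driver assistance and human driving mislead.
# Highway miles are safer per mile than city miles for everyone, and
# driver-assist systems log disproportionately many highway miles.

human = {"highway": (1.0, 0.80), "city": (1.0, 2.00)}   # (million miles, crashes)
assist = {"highway": (0.9, 0.81), "city": (0.1, 0.25)}  # mostly highway miles

def overall_rate(data):
    miles = sum(m for m, _ in data.values())
    crashes = sum(c for _, c in data.values())
    return crashes / miles  # crashes per million miles

print(f"human overall:  {overall_rate(human):.2f} per million miles")   # 1.40
print(f"assist overall: {overall_rate(assist):.2f} per million miles")  # 1.06

# Broken out by road type, the assist system is worse in BOTH settings:
for road in ("highway", "city"):
    hm, hc = human[road]
    am, ac = assist[road]
    print(f"{road}: human {hc / hm:.2f} vs assist {ac / am:.2f}")
```

Until the miles are compared road type by road type, "safer than a human driver" is not a settled claim either way.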
I don't own a Tesla, though I've thought of getting one, and I'm not sure I would get the FSD option. Still, I don't object to it being available as an option. It will get better and better, and it won't take long before it is better than a human driver. It's probably there now, assuming the driver follows the rules.
As far as you know? At least you admit they do not fully appreciate the risks. https://www.cnn.com/2021/04/21/tech/tesla-full-self-driving-launch/index.html Furthermore, Dan O'Dowd, whose company creates, tests, debugs and delivers software and AI all day long, says the in-house debugging is far from sufficient.
You might remember that Microsoft often delivers buggy software and depends on users to report problems. It is one reason you get so many updates. But at least, "as far as I know," Microsoft bugs do not endanger lives. And buyers, like most people, cannot be expected to be smart. Assuming the driver is following the rules will not protect Tesla from liability. Right now with Tesla, better safe than sorry.
That there are opinions on both sides seems like it must come with the territory. How would you describe a "good" rollout of a self-driving AI? Would it have no bugs, no updates, no disputes? Not on this planet or universe. Ask a guy who "creates, tests, debugs and delivers software and AI all day long" and he's bound to say everyone else's software is released with too many bugs. I just don't find this stuff compelling.
I'm retired now, but I used to run a software company. The issue of how much testing must be done before release was a very real one for me. Customers are always outraged that a company would release software with bugs. As I used to tell my programmers, "They don't pay enough for our product for us to guarantee it is bug-free. It's not like it is going up in the Space Shuttle." Of course, lives DO depend on the FSD software, but that just means the break-even point on the tradeoff is placed differently. It does not mean that customers should expect zero problems.
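To put some (entirely made-up) numbers on that break-even point, here is a toy model of the ship-or-keep-testing decision. The function and every figure are hypothetical, just a sketch of the tradeoff:

```python
# Back-of-the-envelope model of the ship-or-keep-testing tradeoff
# (the function and all numbers are invented for illustration). Rule:
# keep testing while a round of testing is expected to avert more
# field-failure cost than the round itself costs.

def rounds_before_release(bugs: float, catch_rate: float,
                          cost_per_round: float, cost_per_field_bug: float) -> int:
    """Count test rounds until another round is no longer worth its cost."""
    rounds = 0
    while bugs * catch_rate * cost_per_field_bug > cost_per_round:
        bugs *= 1 - catch_rate  # each round catches a fraction of what's left
        rounds += 1
    return rounds

# Office software: a field bug costs a support ticket, so testing stops early.
print(rounds_before_release(100, 0.3, 50_000, 5_000))        # 4 rounds
# Driving software: a field bug can cost lives, so the break-even moves out.
print(rounds_before_release(100, 0.3, 50_000, 10_000_000))   # 25 rounds
```

Same decision rule in both cases; only the cost of a field failure changes, and that alone pushes the break-even from a handful of test rounds to dozens. Notice that zero remaining bugs never falls out of the model; that would take infinite testing.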
According to Dan O'Dowd, Tesla puts software that isn't even up to Beta standards on our public roads. Mr. O'Dowd has seen the code as part of negotiations with Musk. Musk agreed to stop doing that but reneged, just as he does with all his agreements, so the negotiations failed; Mr. O'Dowd felt he could not risk the reputational damage.
There's no definitive meaning to terms like "beta standards". Every company gets to make it up as it goes. With many software companies, "alpha" means a trial release with only internal distribution, whereas "beta" refers to a trial release given to those customers who have indicated that they are willing to work with trial software, understanding that it almost certainly has bugs but also has functionality that they want to try out. The alpha/beta terminology usually refers to the extent of distribution, and the quality only indirectly. Even when the customers acknowledge the conditions under which the beta is provided, they still complain when things don't go well. Such is life.
Mr. O'Dowd did not take on the expense of a gubernatorial race whose only purpose is public awareness merely because of differing views of what "beta" means. When I said "not even up to Beta standards," I was talking about Alpha. I was trying to be comprehensible to laypersons who might think Alpha is better than Beta.