TSLA showing cracks?

03-22-2018, 03:15 PM
You guys might be seriously overestimating a person's reaction time in a scenario like this, and the ability to react in time at all.

We have no idea whether a person paying 100% attention would've spotted the pedestrian in time to brake. That depends on visibility, which I don't think we can know with certainty from the camera alone. Even paying full attention, everything could be happening too fast. It's also completely unexpected and shocking; you're not watching the road expecting a human, or even an animal, to cross your path. It's arguable that hitting this pedestrian was unavoidable.

What's the braking power? Could she have swerved at 45 mph? Would a human swerve at all? Human reaction alone is impossible to predict, kinda like when people say what they'd do if a gun got pulled on them. They have no fkn clue what they'd do.
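For rough context on the physics, here's a back-of-the-envelope sketch; the 1.5 s perception-reaction time and 0.7 friction coefficient are my assumed textbook values, not anything specific to this crash.

```python
# Back-of-the-envelope stopping distance at 45 mph.
# Assumptions: 1.5 s perception-reaction time, friction coefficient 0.7 (dry asphalt).
v = 45 * 0.44704               # 45 mph in m/s (~20.1 m/s)
t_react = 1.5                  # s, assumed perception-reaction time
mu, g = 0.7, 9.81              # assumed friction coefficient; gravity
d_react = v * t_react          # distance covered before the brakes even engage
d_brake = v**2 / (2 * mu * g)  # kinematic braking distance v^2 / (2*mu*g)
print(f"reaction: {d_react:.0f} m, braking: {d_brake:.0f} m, total: {d_react + d_brake:.0f} m")
# reaction: 30 m, braking: 29 m, total: 60 m
```

In other words, even an alert driver needs on the order of 60 m to stop from 45 mph, which is why reaction time dominates scenarios like this one.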
03-22-2018, 07:59 PM
Quote:
Originally Posted by ToothSayer
Your position is ridiculous, sir. Something marketed as "Autopilot" with "Fully Self Driving" 3-6 months away, and having emergency braking and object detection should detect a big ****ing truck right across the road, even at the incredibly low bar of claiming it's level 2 and not attempting a 2/3 hybrid as Tesla are clearly trying to market.
Right, Tooth Sayer from the internet knows more than the NTSB, which concluded:

Quote:
The Tesla’s automated vehicle control system was not designed to, and could not, identify the truck crossing the Tesla’s path or recognize the impending crash. Therefore, the system did not slow the car, the forward collision warning system did not provide an alert, and the automatic emergency braking did not activate.

The Tesla driver’s pattern of use of the Autopilot system indicated an over-reliance on the automation and a lack of understanding of the system limitations.
Tooth Sayer similarly seems to lack an understanding of what Autopilot was designed to do.

Now of course the NTSB did fault Tesla, but not for what Tooth Sayer is faulting them for:

Quote:
If automated vehicle control systems do not automatically restrict their own operation to conditions for which they were designed and are appropriate, the risk of driver misuse remains.

The way in which the Tesla “Autopilot” system monitored and responded to the driver’s interaction with the steering wheel was not an effective method of ensuring driver engagement.
03-22-2018, 09:29 PM
Keed, you struggle with reading comprehension.

No one is questioning the legal definition of Tesla Autopilot.

What is being said is that the marketing is grossly misleading given how badly it sucks. It's in no way close to autonomous driving, or even decent driver assistance; it's just pure garbage.
03-22-2018, 09:34 PM
I don't know anything about the marketing of Autopilot. But the adaptive cruise control system in the Tesla didn't fail, and could not fail (as the NTSB notes), because it was not designed to be engaged in that situation.
03-23-2018, 02:29 AM
American drivers kill a pedestrian every 536 million miles.

Uber self-driving cars kill a pedestrian every 3 million miles.
03-23-2018, 01:56 PM
Quote:
Originally Posted by OlafTheSnowman
American drivers kill a pedestrian every 536 million miles.

Uber self-driving cars kill a pedestrian every 3 million miles.
Nice brain. The sample size is literally one.

NFL players tear their ACL once every 500 games.

Carson Wentz tears his ACL every 29 games.
03-23-2018, 01:59 PM
It isn't really a sample size issue but a selection bias issue. I mean, it's both, but more a selection bias problem imo.
03-23-2018, 02:33 PM
Quote:
Originally Posted by Mori****a System
The TSLA Model 3 factory:
I imagine that this is what is happening right now to close the quarter at 2k Model 3s in one week. Time to go long in the short term, maybe.
03-23-2018, 02:34 PM
Quote:
Originally Posted by SenorKeeed
I don't know anything about the marketing of Autopilot. But the adaptive cruise control system in the Tesla didn't fail, and could not fail (as the NTSB notes) because it was not designed to be engaged in that situation.
Isn't it a failure to equip a car with a system that can so easily be engaged in a common situation it wasn't designed for? You could create a car that only drives straight and then say the car didn't fail because some idiot took it on a road with curves. Failure of design is failure.
03-23-2018, 02:56 PM
Yes, of course that is a failure (as the NTSB notes). But that's not the failure Tooth Sayer was referring to; he's saying that Autopilot not detecting the truck is a failure. But it wasn't designed to detect a truck pulling out across traffic like that.
03-23-2018, 03:11 PM
Quote:
Originally Posted by SenorKeeed
Yes, of course that is a failure (as the NTSB notes). But that's not the failure Tooth Sayer was referring to; he's saying that Autopilot not detecting the truck is a failure. But it wasn't designed to detect a truck pulling out across traffic like that.
What if you were on a limited-access highway and there was a semi jackknifed across the road from an accident? Does it detect the semi then?
03-23-2018, 03:16 PM
The interior camera makes me wonder what sort of hiring and supervision standards Uber has for these test vehicle operators. Is anyone at Uber actually reviewing these videos to make sure that their drivers are paying attention 100% of the time?

The near-term future of "autonomous" cars is that the driver is stuck behind the wheel, paying 100% attention, ready to correct inevitable and unpredictable system glitches at a second's notice. It's worse than active driving, IMO. At least when I am driving I am doing something to keep myself occupied and engaged. I also know that I won't randomly swerve or make evasive maneuvers out of the blue.

Quote:
Originally Posted by Didace
A human eye has an effective dynamic range of about 12-14 stops. A dash cam might have 5-7. Using that video to estimate what a person paying attention would see is foolish.
That's not really how our vision works. When we are staring at a bright, headlight-illuminated road in front of us, we don't see what's in the dark. Hence the most common night-driving tip: don't overdrive your headlights.

Most people analyze accidents based on one-minute clips, with the benefit of hindsight and of knowing that something is about to happen. But when you are on a four-hour drive, bored and tired, you probably don't have your cat-like reflexes ready to engage at a split second the whole time. It's just not the way normal people drive. This is armchair analysis at its finest. In this case, Uber should've made sure that their car operators could be 100% attentive the whole time behind the wheel, but they certainly won't get that if they have some gig worker driving 8-hour shifts.
03-23-2018, 05:14 PM
Quote:
Originally Posted by somigosaden
What if you were on a limited-access highway and there was a semi jackknifed across the road from an accident? Does it detect the semi then?
I don't know.
03-23-2018, 05:46 PM
Working from memory here, and yes, I'm too lazy to read the NTSB report, but I think the argument was something along the lines of: the forward radar detected the semi but interpreted it as an overhead sign.

That sounds plausible to me, and AP should have been limited to restricted-access highways, etc. And, major caveat, bearing in mind I don't have specific knowledge of how the radar resolves objects: imo it is a catastrophic failure of the autonomous system if at some point it can't detect a 15-meter by 3-meter object perpendicular to the intended path and at least panic-brake to mitigate damage.

In other words, the overhead-sign argument falls apart the closer the vehicle got to the semi. Now maybe the forward-facing radar alone can't resolve elevation, idk, but something needs to be able to distinguish between, say, a bridge, an overhead sign, and a giant object in the roadway. That is a failure imo.
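To illustrate why the overhead-sign interpretation should degrade on approach, here's a toy geometry sketch; the heights and ranges are my own illustrative numbers, not from the NTSB report.

```python
import math

# Elevation angle from the car to an object's bottom edge as range closes.
# Assumed heights: overhead sign bottom edge ~5 m up, trailer side ~1 m up.
for d in (150, 100, 50, 25):                    # range to object, meters
    sign = math.degrees(math.atan2(5.0, d))     # hypothetical overhead sign
    trailer = math.degrees(math.atan2(1.0, d))  # hypothetical semi trailer
    print(f"{d:>4} m: sign {sign:4.1f} deg, trailer {trailer:4.1f} deg")
# The sign's elevation angle climbs steeply as the car closes in, while the
# trailer's stays near road level, so the two hypotheses diverge on approach.
```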
03-23-2018, 05:50 PM
I am also baffled that LIDAR didn't detect (well, I'm assuming it was a detection issue) a woman 10 m in front of the recent Uber car.
03-23-2018, 07:50 PM
Quote:
Originally Posted by dc_publius
That's not really how our vision works. When we are staring at a bright, headlight-illuminated road in front of us, we don't see what's in the dark. Hence the most common night-driving tip: don't overdrive your headlights.
That is exactly how our eyes work. I said "effective dynamic range" for a reason: in any lighting condition, 10-14 stops is what our eyes can handle. The total range is much greater if we allow a longer time for our eyes to adjust to brightness levels.

I agree that night-time driving presents challenges going from very dark to relatively light. But the point is that a dash cam has a much narrower range than a human, and the video should not be assumed to show what an attentive driver could see. Could a driver have seen the woman and avoided her? I have no opinion on that.
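To put the stops in concrete numbers: each stop is a doubling of light, so an n-stop range corresponds to a contrast ratio of roughly 2^n to 1. A minimal sketch using midpoints of the ranges quoted above (illustrative, not measured values):

```python
# Contrast ratio implied by a given dynamic range in photographic stops.
# Each stop doubles the light, so n stops ~ 2**n : 1 contrast.
for label, stops in [("human eye (effective)", 13), ("dash cam", 6)]:
    print(f"{label}: ~{stops} stops = roughly {2**stops:,}:1 contrast")
# human eye (effective): ~13 stops = roughly 8,192:1 contrast
# dash cam: ~6 stops = roughly 64:1 contrast
```

A roughly two-orders-of-magnitude gap in contrast handling is why the video alone can't settle what an attentive driver would have seen.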
03-25-2018, 02:19 PM
Quote:
Originally Posted by OlafTheSnowman
American drivers kill a pedestrian every 536 million miles.

Uber self-driving cars kill a pedestrian every 3 million miles.
Quote:
Originally Posted by somigosaden
Nice brain. The sample size is literally one.
We can still draw inferences from a sample size of one, statistician ass-covering notwithstanding.

If an adverse event first happens in a new, untested system at roughly 1/180 of the old system's average trials per adverse event, it becomes substantially more likely than not that the new system is more faulty than the old system.
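To make that concrete under the simplest possible model, an exponential (constant-rate) miles-to-first-fatality, and setting aside the selection-bias objection raised above, here's a sketch using the thread's own figures; the 10x comparison rate is my illustrative choice.

```python
import math

# Exponential model: P(first fatality within m miles | one per M miles) = 1 - exp(-m/M).
human_mtbf = 536e6   # miles per pedestrian fatality, human drivers (from the post)
observed = 3e6       # Uber's miles at its first fatality (from the post)

p_human_rate = 1 - math.exp(-observed / human_mtbf)
p_10x_worse = 1 - math.exp(-observed / (human_mtbf / 10))  # hypothetical 10x-worse rate
print(f"P(fatality this early | human-level rate) = {p_human_rate:.3%}")               # ~0.56%
print(f"likelihood ratio, 10x-worse vs human-level = {p_10x_worse / p_human_rate:.1f}")  # ~9.8
```

Observing the event this early is about ten times more likely under the 10x-worse hypothesis, which is the sense in which a single data point still moves the needle.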
03-25-2018, 04:02 PM
Musk tweeting that he has ordered a delivery slowdown in Norway because they don't have the logistics to handle it is so lol

What a clown show
03-25-2018, 04:24 PM
sounds like they're just ironing out some kinks
03-25-2018, 05:30 PM
Quote:
Originally Posted by ToothSayer
We can still draw inferences from a sample size of one, statistician ass-covering notwithstanding.

If an adverse event first happens in a new, untested system at roughly 1/180 of the old system's average trials per adverse event, it becomes substantially more likely than not that the new system is more faulty than the old system.
I sometimes believe you have multiple personalities.

For someone who claims to have a math degree and argues that lighting played a major role, trotting out the above is absurd.
03-25-2018, 05:50 PM
Quote:
Originally Posted by Spurious
I sometimes believe you have multiple personalities.
Dogs believe their masters have multiple personalities too. A lower intelligence is unable to synthesize the actions of a higher intelligence into a coherent model, because the higher intelligence acts on rules and principles they can't understand.
Quote:
For someone who claims to have a math degree and argues that lighting played a major role, trotting out the above is absurd.
Ok, since you and somigosaden think one data point is absurd to draw inferences from, let's do a little thought experiment.

An old version of a black box machine that produces an outcome has a failure rate of 1 in 500 million.

We make a new version and test it. It produces an outcome that's a failure on the first try. Which of the following is true of the new version?

a) It's more likely than not, in its current state, to have a lower failure rate than the old machine
b) It's more likely than not, in its current state, to have a higher failure rate than the old machine
c) We can't say anything at all because it's a sample size of one
d) We can't say anything at all because we don't know the priors.

Looking forward to your answer.
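For what it's worth, here's a toy version of the update the question is gesturing at. The three-point prior is invented purely for illustration, which is also why option d isn't a crazy answer:

```python
# Toy Bayesian update for the black-box thought experiment.
# Invented prior: the new machine's per-trial failure rate is equally likely
# to be 1/500e6 (same as the old machine), 1/1e6, or 1/1e3.
rates = [1 / 500e6, 1 / 1e6, 1 / 1e3]
prior = [1 / 3, 1 / 3, 1 / 3]

# Likelihood of failing on the very first trial is just the rate itself.
unnorm = [p * r for p, r in zip(prior, rates)]
total = sum(unnorm)
for r, u in zip(rates, unnorm):
    print(f"rate {r:.1e}: posterior {u / total:.4f}")
# The 1/1e3 hypothesis ends up with ~99.9% of the posterior mass: one
# first-try failure is strong evidence for a high failure rate.
```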
03-26-2018, 01:38 AM
What you didn't seem to grasp, superhuman, is that the conditions are not taken into account. The system killed a human when conditions were significantly different from normal. That's why it is absurd to draw the conclusion you are drawing. It obviously started with the ridiculous statement above.
03-26-2018, 08:42 AM
It is quite funny to see TS using Bayes' theorem in this instance when he's spent most of his time arguing that a human hits the pedestrian 100% of the time.

If you have two systems (a human and a computer) and the human does something 100% of the time, then I'm not sure how, when the computer does the same thing in the given conditions, you can reach the conclusion that the computer is worse, or that this one data point is useful.

Now, since his stance is that humans hit the pedestrian ~100% of the time, if the computer had been able to avoid the collision, that information would have been very useful.
03-26-2018, 08:43 AM
Quote:
Originally Posted by NxtWrldChamp
It is quite funny to see TS using Bayes' theorem in this instance when he's spent most of his time arguing that a human hits the pedestrian 100% of the time.

If you have two systems (a human and a computer) and the human does something 100% of the time, then I'm not sure how, when the computer does the same thing, you can reach the conclusion that the computer is worse.
This is exactly what I was trying to say - I just can't express myself well enough.
03-26-2018, 08:57 AM
Watching poker players try to use Bayes' theorem is one of the perennial joys of this site.