Science!
- Mongrel
- Posts: 21290
- Joined: Mon Jan 20, 2014 6:28 pm
- Location: There's winners and there's losers // And I'm south of that line
Re: Science!
Well, if you consider "protection" to be a kind of payment.
...
Don't be evil!
Re: Science!
Mongrel wrote:Well, if you consider "protection" to be a kind of payment.
...
Don't be evil!
I'm not saying "work for us for free or we'll run you over with a self-driving car". That's something Mongrel totally made up.
Re: Science!
You're also funding the captcha service itself, which is pretty valuable for all those websites that don't have an income stream but do have a justifiable need for offsite captcha.
- Mongrel
Re: Science!
Globe: It's turning out that we are all mutants
Based on initial results, almost everyone's genetic code is riddled with errors, from small "typos" to whole "paragraphs" gone wrong, many of which should kill us or cause severe diseases - yet, almost inexplicably, they don't.
That said, this Canadian project is only the first to yield results. The volunteers were self-selecting due to the high risks and personal cost to volunteer, and are almost all white adults working in health care (some of whom are actually project members). As the article mentions, much larger and more ambitious projects are under way in other countries so we shall see what sort of results we get across larger, more diverse populations.
- Mongrel
Re: Science!
Pretty amazing short thread about a recent paper which examined many of the ways in which Machine-Learning proto-AIs from a wide variety of research studies found rather, uh, novel ways to solve the particular problem they were presented with.
Like winning at Tic-Tac-Toe by finding a way to hack the system to crash the opposing AI. Take that, War Games.
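The failure mode behind hacks like that - an optimizer exploiting a flaw in its objective rather than solving the intended task - can be sketched with a toy example (all the names and numbers here are made up for illustration, not taken from the paper):

```python
# Toy illustration of "specification gaming": an agent is rewarded for
# keeping a simulated room at 21 degrees, but the reward is computed from
# a sensor reading the agent can also act on. Maximizing the reward is
# then not the same thing as solving the intended task.

def reward(sensed_temp):
    # Bug: reward depends on the *sensor reading*, not the true temperature.
    return -abs(sensed_temp - 21.0)

def score(action):
    if action == "heat_room":
        true_temp, offset, effort = 21.0, 0.0, 5.0   # real work costs energy
    else:  # "tamper_sensor"
        true_temp, offset, effort = 5.0, 16.0, 0.1   # spoofing is nearly free
    return reward(true_temp + offset) - effort

actions = ["heat_room", "tamper_sensor"]
print(max(actions, key=score))  # the optimizer prefers tampering
```

Both actions make the sensor read 21, so the cheaper hack wins - which is exactly the shape of "crash the opposing AI instead of playing tic-tac-toe".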
Re: Science!
So the future of AI isn't Skynet, it's Kirk.
I'm okay with this. I've always wanted to fuck a robot.
- Mongrel
Re: Science!
Well, a number of the hacks involved ignoring the secondary requirements that humans survive, so
But then maybe being fucked to death by robot Kirk is how you want to go. Who am I to judge?
Re: Science!
Okay, so, Mirror-Kirk.
Or regular Janeway.
Re: Science!
A self-driving Uber killed a woman walking her bike across a street last night. South of the intersection at Mill and Curry, in case that means anything to anybody but me.
Uber has suspended all its self-driving cars pending an investigation.
I'm no fan of Uber, but halting the program and cooperating with the investigation is the right call. Course, we may find out that this was the result of negligence, or that they're not being as cooperative as they're claiming, but for now they seem to be making the right moves.
No real details known about why the accident happened. It sounds like the woman wasn't using a crosswalk, but if she was walking her bike she probably didn't come out of nowhere, either. It seems to me that either the car or, failing that, the tech behind the wheel should have seen her and stopped.
I drive past the Uber location on my way to work every morning (not this specific intersection, but I know it), so I encounter several of these cars every day. I think they're too slow at intersections and I don't like being behind them; I've never seen one on a freeway, and my wife said she saw one run a red light once. But all in all, I haven't observed any reason to think they're less safe than human drivers. And that's the bottom line -- while we should always strive to make cars as safe as possible, the baseline isn't perfect safety, it's human-equivalent safety. It's crass to reduce the death of a human being to a statistic, but man, human drivers sure kill a lot of people, and I've seen little reason yet to believe that autonomous cars are worse.
(Their potential security vulnerabilities and economic implications are separate issues, and I am concerned about them, but they are less immediately important than the question of how likely they are to collide with me or somebody I know.)
- Mongrel
Re: Science!
While you're correct, irrational fears of ROBOT DEATH MACHINES will probably make people reluctant to adopt them anyway.
People just seem to have higher standards for AI machines, demanding that they be "perfect". Also, if the robots aren't seen as infallible, you'll still get terrible drivers rationalizing to themselves that they're "better than some dumb robot," leading to disproportionate refusal of adoption by the worst demographic.
Let's say the broad introduction of autonomous cars reduces road deaths by 20% - that's a huge improvement, right? But every time an autonomous car kills someone (especially if it's in a manner a human might have easily avoided, such as the Tesla that drove its owner into a truck), the narrative will always come back to MURDERCARS. Unfortunately, I think the death rate will have to get below 50% of what it currently is for the psychological barrier against autonomous vehicles to start to break. I think a recent poll put "serious fears about the safety of autonomous cars" at about 78% of the North American population.
As an additional note, I think with this Facebook stuff and other recent issues, we might be entering a period where skepticism of tech company claims will rise quite significantly. So that may end up as another barrier to adoption.
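The gap between those two hypothetical figures - a 20% reduction versus a 50%-of-baseline threshold - is easy to see with a quick back-of-the-envelope calculation (the baseline number below is an illustrative assumption, not a real statistic):

```python
# Hypothetical figures from the post, not real data: a baseline of
# human-driven road deaths, a 20% reduction from autonomous adoption,
# and the 50%-of-baseline threshold suggested for breaking the
# psychological barrier.
baseline = 40_000                       # illustrative annual road deaths
after_adoption = baseline * (1 - 0.20)  # deaths after a 20% reduction
threshold = baseline * 0.50             # the suggested acceptance bar

print(after_adoption)               # 32000.0 -- a big absolute improvement...
print(after_adoption <= threshold)  # False -- ...but still well above the bar
```

So even a reduction that saves thousands of lives a year would sit far short of the point where, on this argument, public perception starts to shift.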
- Mongrel
Re: Science!
Oh and Uber has actually pulled all of their test cars, globally now. The ones they had in Toronto are all off the road as of this evening.
- Mongrel
Re: Science!
A further thought on a world in which autonomous cars are not near-perfect: Liability could really throw a wrench into adoption as well.
If autonomous cars still cause deaths and accidents fairly frequently - even if this is well below current road deaths - chances are that the manufacturer or operator will now be liable for those accidents rather than the driver (there's lots of legal precedent for this in the massive class action lawsuits regarding vehicle recalls over the past 20 years).
I mean - who even buys the insurance in that case? Do both the manufacturer AND the driver have to buy insurance? Will the driver's insurance refer claims to the manufacturer, demanding payment from them? This may be too much of a burden to bear - directly financially, in organizational costs, or even in PR ("Your company directly caused X deaths last year. Your product kills people!"). Has anyone even thought out how liability will work with regard to autonomous cars?
Re: Science!
Mongrel wrote:But every time an autonomous car kills someone (especially if it's in a manner a human might have easily avoided, such as the Tesla that drove its owner into a truck)
That wasn't an autonomous car, it was a car whose driver ignored its repeated warnings that he needed to put his hands back on the wheel.
Re: Science!
Mongrel wrote:While you're correct, irrational fears of ROBOT DEATH MACHINES will probably make people reluctant to adopt them anyway.
People just seem to have higher standards for AI machines, demanding that they be "perfect". Also, if the robots aren't seen as infallible, you'll still get terrible drivers rationalizing to themselves that they're "better than some dumb robot," leading to disproportionate refusal of adoption by the worst demographic.
Let's say the broad introduction of autonomous cars reduces road deaths by 20% - that's a huge improvement, right? But every time an autonomous car kills someone (especially if it's in a manner a human might have easily avoided, such as the Tesla that drove its owner into a truck), the narrative will always come back to MURDERCARS. Unfortunately, I think the death rate will have to get below 50% of what it currently is for the psychological barrier against autonomous vehicles to start to break. I think a recent poll put "serious fears about the safety of autonomous cars" at about 78% of the North American population.
As an additional note, I think with this Facebook stuff and other recent issues, we might be entering a period where skepticism of tech company claims will rise quite significantly. So that may end up as another barrier to adoption.
My understanding is that truly autonomous cars (not Tesla "Autopilot", which is just driver-assist and not actually self-driving) are an order of magnitude safer than humans already (i.e. a 90% reduction in road deaths).
I hate Uber as much as the next person, but reportedly this is something a human couldn't have prevented - for starters, because the human safety driver at the wheel of this car didn't notice anything wrong until the collision. Also, the cops reviewed the camera footage (conveniently available because it was a self-driving car), and their preliminary view is that Uber isn't at fault for someone stepping out of the shadows directly into traffic. Worth noting that the car was speeding slightly - 38 in a 35 - but that stretch of road had a 45 mph limit last year, so the car may have been working from outdated speed limit information.
- Mongrel
Re: Science!
It'll be interesting if liability returns to the pedestrian in such cases, because current legal doctrine in most of North America (even in most supposedly "no-fault" systems) tends to assign a non-zero amount of blame to the driver in these situations, even when the driver really could not have anticipated the act.
Also, the 90% number is very promising, but IIRC it's coming from the manufacturers - I'm not sure how much independent verification of that number has been done. In a world where automakers are apparently happy to gas chamber monkeys as part of their testing which they later falsify anyway, I'd prefer claims that are really, definitely concretely proven.
Re: Science!
Mongrel wrote:Also, the 90% number is very promising, but IIRC it's coming from the manufacturers - I'm not sure how much independent verification of that number has been done. In a world where automakers are apparently happy to gas chamber monkeys as part of their testing which they later falsify anyway, I'd prefer claims that are really, definitely concretely proven.
Mine is coming from Waymo, which... dunno about independent verification (although mandatory public reports of self-driving testing done in California indicate that Waymo has the lowest rate of safety-driver-takes-over-control among driverless cars and is also doing the most testing on public roads), but Google is hopefully slightly more ethical than "gas chambering monkeys".
Re: Science!
While, again, my anecdotal experience is that these cars don't seem any more dangerous than human drivers, as far as actual statistical comparisons go I'm skeptical of any numbers this early in the game. These cars simply haven't driven enough miles to build a reliable statistical model of how safe they are compared to human drivers under similar conditions.
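One rough way to see why the mileage matters: at something like the human-driver fatality rate (roughly 1.2 deaths per 100 million miles in the US - a rough assumed figure here, as is the fleet mileage), a test fleet with a few million miles would expect essentially zero fatalities either way, so observing zero proves very little. A quick sketch:

```python
# Back-of-the-envelope sketch of why early fleet statistics are weak.
# Both figures are illustrative assumptions, not data from the thread.
human_rate = 1.2 / 100_000_000   # assumed fatalities per mile for humans
fleet_miles = 5_000_000          # assumed autonomous test mileage to date

# Even at the human rate, this fleet expects well under one fatality.
expected_fatalities = human_rate * fleet_miles
print(round(expected_fatalities, 3))

# "Rule of three": zero observed events in N trials only bounds the true
# rate below about 3/N at 95% confidence -- far above the human rate here.
upper_bound = 3 / fleet_miles
print(round(upper_bound / human_rate))  # the bound is ~50x the human rate
```

In other words, at this mileage the data can't even rule out the fleet being dozens of times more dangerous than human drivers, let alone show it's safer.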
- Mongrel
Re: Science!
Thad wrote:While, again, my anecdotal experience is that these cars don't seem any more dangerous than human drivers, as far as actual statistical comparisons go I'm skeptical of any numbers this early in the game. These cars simply haven't driven enough miles to build a reliable statistical model of how safe they are compared to human drivers under similar conditions.
They've also been - for the most part - confined to well-documented routes or otherwise "contained" environments. True, they're getting more and more real-world road time, but I recall there was already a series of articles last year which described the way Google was understating and even concealing-through-omission the problems their cars faced outside the perfectly-controlled and minutely-documented practise routes they were using for much of the initial testing.
Re: Science!
And of course the reason they're testing in the areas they are is that they're flat, have mostly-consistent clear weather, and are arranged in grids.
Re: Science!
Having seen the video (which cuts off just before impact, no gore, but still gonna link rather than embed this tweet) now, I'm gonna go back on it: I was under the impression there was an obstruction hiding the pedestrian that would prevent the LIDAR tech that Uber stole from working, but they absolutely should have been able to detect the person in the middle of the road.