Science!

Mongrel
Posts: 21290
Joined: Mon Jan 20, 2014 6:28 pm
Location: There's winners and there's losers // And I'm south of that line

Re: Science!

Postby Mongrel » Wed Jan 31, 2018 8:16 pm

Well, if you consider "protection" to be a kind of payment.

...

Don't be evil!

Grath
Posts: 2387
Joined: Mon Jan 20, 2014 7:34 pm

Re: Science!

Postby Grath » Thu Feb 01, 2018 12:05 am

Mongrel wrote:Well, if you consider "protection" to be a kind of payment.

...

Don't be evil!

I'm not saying "work for us for free or we'll run you over with a self-driving car". That's something Mongrel totally made up.

IGNORE ME
Woah Dangsaurus
Posts: 3679
Joined: Mon Jan 20, 2014 2:40 pm

Re: Science!

Postby IGNORE ME » Sat Feb 03, 2018 3:38 pm

You're also funding the captcha service itself, which is pretty valuable for all those websites that don't have an income stream but do have a justifiable need for an offsite captcha.

Mongrel
Posts: 21290
Joined: Mon Jan 20, 2014 6:28 pm
Location: There's winners and there's losers // And I'm south of that line

Re: Science!

Postby Mongrel » Sat Feb 03, 2018 3:51 pm

Globe: It's turning out that we are all mutants

Based on initial results, almost everyone's genetic code is riddled with errors, from small "typos" to whole "paragraphs" gone wrong, many of which should kill us or cause severe diseases - yet, almost inexplicably, they don't.

That said, this Canadian project is only the first to yield results. The volunteers were self-selected, given the high risks and personal costs of volunteering, and are almost all white adults working in health care (some of whom are actually project members). As the article mentions, much larger and more ambitious projects are under way in other countries, so we shall see what sort of results we get across larger, more diverse populations.

Mongrel
Posts: 21290
Joined: Mon Jan 20, 2014 6:28 pm
Location: There's winners and there's losers // And I'm south of that line

Re: Science!

Postby Mongrel » Thu Mar 15, 2018 2:37 pm



Pretty amazing short thread about a recent paper examining the rather, uh, novel ways that machine-learning proto-AIs from a wide variety of research studies found to solve the problems they were presented with.

Like winning at Tic-Tac-Toe by finding a way to hack the system to crash the opposing AI. Take that, War Games.
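
For a toy sense of what this kind of "specification gaming" looks like, here's a minimal hypothetical sketch (mine, not from the paper): hand an optimizer a sloppy objective and it will happily converge on the degenerate exploit instead of the behaviour you actually wanted.

Code: Select all

# Hypothetical illustration of specification gaming (not from the paper):
# we ask an optimizer to find a fast "walker", but the reward only measures
# distance covered, with no penalty for falling over.

def distance_covered(stride_length):
    """Sloppy reward: distance covered per evaluation episode."""
    if stride_length > 2.0:
        # Overlong strides make the simulated walker topple and tumble
        # forward, which covers *more* ground than walking properly.
        return stride_length * 3.0
    return stride_length  # intended regime: actually walking

# A dumb brute-force "learner" over candidate stride lengths.
candidates = [0.5 + 0.1 * i for i in range(40)]  # 0.5 .. 4.4
best = max(candidates, key=distance_covered)

print(f"best stride found: {best:.1f}")
# -> 4.4: the tumbling exploit, not the walking gait we wanted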

Friday
Posts: 6264
Joined: Mon Jan 20, 2014 7:40 pm
Location: Karma: -65373

Re: Science!

Postby Friday » Thu Mar 15, 2018 4:54 pm

So the future of AI isn't Skynet, it's Kirk.

I'm okay with this. I've always wanted to fuck a robot.

Mongrel
Posts: 21290
Joined: Mon Jan 20, 2014 6:28 pm
Location: There's winners and there's losers // And I'm south of that line

Re: Science!

Postby Mongrel » Thu Mar 15, 2018 5:09 pm

Well, a number of the hacks involved ignoring the secondary requirements that humans survive, so

But then maybe being fucked to death by robot Kirk is how you want to go. Who am I to judge?

Friday
Posts: 6264
Joined: Mon Jan 20, 2014 7:40 pm
Location: Karma: -65373

Re: Science!

Postby Friday » Thu Mar 15, 2018 6:12 pm

Okay, so, Mirror-Kirk.

Or regular Janeway.

Thad
Posts: 13165
Joined: Tue Jan 21, 2014 10:05 am
Location: 1611 Uranus Avenue

Re: Science!

Postby Thad » Mon Mar 19, 2018 10:20 pm

A self-driving Uber killed a woman walking her bike across a street last night. South of the intersection at Mill and Curry, in case that means anything to anybody but me.

Uber has suspended all its self-driving cars pending an investigation.

I'm no fan of Uber, but halting the program and cooperating with the investigation is the right call. Course, we may find out that this was the result of negligence, or that they're not being as cooperative as they're claiming, but for now they seem to be making the right moves.

No real details known about why the accident happened. It sounds like the woman wasn't using a crosswalk, but if she was walking her bike she probably didn't come out of nowhere, either. It seems to me that either the car or, failing that, the tech behind the wheel should have seen her and stopped.

I drive past the Uber location on my way to work every morning (not this specific intersection, but I know it), so I encounter several of these cars every day. I think they're too slow at intersections and I don't like being behind them; I've never seen one on a freeway, and my wife said she saw one run a red light once. But all in all, I haven't observed any reason to think they're less safe than human drivers. And that's the bottom line -- while we should always strive to make cars as safe as possible, the baseline isn't perfect safety, it's human-equivalent safety. It's crass to reduce the death of a human being to a statistic, but man, human drivers sure kill a lot of people, and I've seen little reason yet to believe that autonomous cars are worse.

(Their potential security vulnerabilities and economic implications are separate issues, and I am concerned about them, but they are less immediately important than the question of how likely they are to collide with me or somebody I know.)

Mongrel
Posts: 21290
Joined: Mon Jan 20, 2014 6:28 pm
Location: There's winners and there's losers // And I'm south of that line

Re: Science!

Postby Mongrel » Mon Mar 19, 2018 10:33 pm

While you're correct, the irrational fears of ROBOT DEATH MACHINES will probably make people reluctant to adopt them anyway.

People just seem to have higher standards for AI machines, demanding that they be "perfect". Also, if the robots aren't seen as infallible, you'll still get terrible drivers rationalizing to themselves that "they're better than some dumb robot," leading to disproportionate refusal of adoption by the worst demographic.

Let's say the broad introduction of autonomous cars reduces road deaths by 20% - that's a huge improvement, right? But every time an autonomous car kills someone (especially if it's in a manner a human might have easily avoided, such as the Tesla that drove its owner into a truck), the narrative will always come back to MURDERCARS. Unfortunately, I think the death rate will have to get below 50% of what it currently is for the psychological barrier against autonomous vehicles to start to break. I think a recent poll has "serious fears about the safety of autonomous cars" at about 78% of the North American population.

As an additional note, I think with this Facebook stuff and other recent issues, we might be entering a period where skepticism of tech company claims will rise quite significantly. So that may end up as another barrier to adoption.

Mongrel
Posts: 21290
Joined: Mon Jan 20, 2014 6:28 pm
Location: There's winners and there's losers // And I'm south of that line

Re: Science!

Postby Mongrel » Mon Mar 19, 2018 10:36 pm

Oh and Uber has actually pulled all of their test cars, globally now. The ones they had in Toronto are all off the road as of this evening.

Mongrel
Posts: 21290
Joined: Mon Jan 20, 2014 6:28 pm
Location: There's winners and there's losers // And I'm south of that line

Re: Science!

Postby Mongrel » Mon Mar 19, 2018 10:43 pm

A further thought on a world in which autonomous cars are not near-perfect: Liability could really throw a wrench into adoption as well.

If autonomous cars still cause deaths and accidents fairly frequently - even if this is well below current road deaths - chances are that the manufacturer or operator will now be liable for those accidents rather than the driver (there's lots of legal precedent for this in the massive class action lawsuits regarding vehicle recalls over the past 20 years).

I mean - who even buys the insurance in that case? Do both the manufacturer AND the driver have to buy insurance? Will the driver's insurance refer claims to the manufacturer, demanding payment from them? That may be too much of a burden to bear, whether directly financial, organizational, or in PR terms ("Your company directly caused X deaths last year. Your product kills people!"). Has anyone even thought out how liability will work with regard to autonomous cars?

Thad
Posts: 13165
Joined: Tue Jan 21, 2014 10:05 am
Location: 1611 Uranus Avenue

Re: Science!

Postby Thad » Tue Mar 20, 2018 1:45 pm

Mongrel wrote:But every time an autonomous car kills someone (especially if it's in a manner a human might have easily avoided, such as the Tesla that drove its owner into a truck)

That wasn't an autonomous car, it was a car whose driver ignored its repeated warnings that he needed to put his hands back on the wheel.

Grath
Posts: 2387
Joined: Mon Jan 20, 2014 7:34 pm

Re: Science!

Postby Grath » Tue Mar 20, 2018 2:35 pm

Mongrel wrote:While you're correct, the irrational fears of ROBOT DEATH MACHINES will probably make people reluctant to adopt them anyway.

People just seem to have higher standards for AI machines, demanding that they be "perfect". Also, if the robots aren't seen as infallible, you'll still get terrible drivers rationalizing to themselves that "they're better than some dumb robot," leading to disproportionate refusal of adoption by the worst demographic.

Let's say the broad introduction of autonomous cars reduces road deaths by 20% - that's a huge improvement, right? But every time an autonomous car kills someone (especially if it's in a manner a human might have easily avoided, such as the Tesla that drove its owner into a truck), the narrative will always come back to MURDERCARS. Unfortunately, I think the death rate will have to get below 50% of what it currently is for the psychological barrier against autonomous vehicles to start to break. I think a recent poll has "serious fears about the safety of autonomous cars" at about 78% of the North American population.

As an additional note, I think with this Facebook stuff and other recent issues, we might be entering a period where skepticism of tech company claims will rise quite significantly. So that may end up as another barrier to adoption.

My understanding is that truly autonomous cars (not Tesla "Autopilot", which is just driver-assist and not actually self-driving) are an order of magnitude safer than humans already (i.e. a 90% reduction in road deaths).

I hate Uber as much as the next person, but reportedly this is something a human couldn't have prevented - for starters, because the human safety driver at the wheel of this car didn't notice anything wrong until the collision. Also, the cops reviewed the camera footage that's conveniently available because it was a self-driving car, and their preliminary thinking is that Uber isn't at fault for someone stepping out of the shadows directly into traffic. Worth noting that the car was speeding slightly - 38 in a 35 - but that stretch of road had a 45 mph limit last year, so the car may have been working from outdated speed limit information.

Mongrel
Posts: 21290
Joined: Mon Jan 20, 2014 6:28 pm
Location: There's winners and there's losers // And I'm south of that line

Re: Science!

Postby Mongrel » Tue Mar 20, 2018 3:32 pm

It'll be interesting to see whether liability shifts back to the pedestrian in cases like this, because current legal doctrine in most of North America (even in most supposedly "no-fault" systems) tends to assign a non-zero amount of blame to the driver in these situations, even when the driver really could not possibly have anticipated the act.

Also, the 90% number is very promising, but IIRC it's coming from the manufacturers - I'm not sure how much independent verification of that number has been done. In a world where automakers are apparently happy to gas-chamber monkeys as part of testing they later falsify anyway, I'd prefer claims that are really, definitely, concretely proven.

Grath
Posts: 2387
Joined: Mon Jan 20, 2014 7:34 pm

Re: Science!

Postby Grath » Tue Mar 20, 2018 4:31 pm

Mongrel wrote:Also, the 90% number is very promising, but IIRC it's coming from the manufacturers - I'm not sure how much independent verification of that number has been done. In a world where automakers are apparently happy to gas-chamber monkeys as part of testing they later falsify anyway, I'd prefer claims that are really, definitely, concretely proven.

Mine is coming from Waymo, which... dunno about independent verification (although the mandatory public reports on self-driving testing in California indicate that Waymo has the lowest rate of safety-driver-takes-over-control among driverless cars and is also doing the most testing on public roads), but Google's hopefully slightly more ethical than "gas-chambering monkeys".

Thad
Posts: 13165
Joined: Tue Jan 21, 2014 10:05 am
Location: 1611 Uranus Avenue

Re: Science!

Postby Thad » Tue Mar 20, 2018 5:54 pm

While, again, my anecdotal experience is that these cars don't seem any more dangerous than human drivers, as far as actual statistical comparisons go I'm skeptical of any numbers this early in the game. These cars simply haven't driven enough miles to build a reliable statistical model of how safe they are compared to human drivers under similar conditions.
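
To put some rough numbers on that, here's a back-of-the-envelope sketch. The assumptions are mine, not from any study: treat fatalities as a Poisson process and take a human baseline of roughly 1.1 fatalities per 100 million vehicle miles.

Code: Select all

import math

# Back-of-the-envelope sketch (assumptions mine: fatalities modelled as a
# Poisson process, human baseline ~1.1 fatalities per 100 million miles).
# How many fatality-free miles would a self-driving fleet need to log
# before we could claim, at 95% confidence, that it's at least as safe
# as human drivers?

human_rate = 1.1 / 100_000_000   # fatalities per mile
confidence = 0.95

# If the fleet were only as safe as humans, the chance of seeing zero
# fatalities in M miles is exp(-human_rate * M). Requiring that to fall
# below (1 - confidence) gives the mileage needed.
miles_needed = -math.log(1 - confidence) / human_rate

print(f"~{miles_needed / 1e6:.0f} million fatality-free miles")
# -> roughly 270 million miles, far more than any fleet has driven so far

Which is a long way of saying: the sample size just isn't there yet.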

Mongrel
Posts: 21290
Joined: Mon Jan 20, 2014 6:28 pm
Location: There's winners and there's losers // And I'm south of that line

Re: Science!

Postby Mongrel » Tue Mar 20, 2018 7:27 pm

Thad wrote:While, again, my anecdotal experience is that these cars don't seem any more dangerous than human drivers, as far as actual statistical comparisons go I'm skeptical of any numbers this early in the game. These cars simply haven't driven enough miles to build a reliable statistical model of how safe they are compared to human drivers under similar conditions.

They've also been - for the most part - confined to well-documented routes or otherwise "contained" environments. True, they're getting more and more real-world road time, but I recall there was already a series of articles last year describing how Google was understating, and even concealing through omission, the problems their cars faced outside the perfectly-controlled and minutely-documented practice routes they were using for much of the initial testing.

Thad
Posts: 13165
Joined: Tue Jan 21, 2014 10:05 am
Location: 1611 Uranus Avenue

Re: Science!

Postby Thad » Wed Mar 21, 2018 12:21 am

And of course the reason they're testing in the areas they are is that they're flat, have mostly-consistent clear weather, and are arranged in grids.

Grath
Posts: 2387
Joined: Mon Jan 20, 2014 7:34 pm

Re: Science!

Postby Grath » Wed Mar 21, 2018 11:48 pm

Having now seen the video (which cuts off just before impact, no gore, but I'm still gonna link rather than embed this tweet), I'm gonna walk back what I said: I was under the impression there was an obstruction hiding the pedestrian that would have prevented the LIDAR tech that Uber stole from working, but they absolutely should have been able to detect the person in the middle of the road.
