17 Comments
Aug 15 · Liked by Nancy Friedman

I’m still trying to get past one of the gooseberry definitions… 💩

Aug 15 · Liked by Nancy Friedman

That was new to me; I'm more familiar with the term "dingleberry."

Aug 15 · edited Aug 15 · Liked by Nancy Friedman

I can imagine feeling romantically inclined towards my attractive middle-aged spouse on the way to the airport—perhaps we are headed to a Viking river cruise—but I'm not sure I'd be tempted to get up to anything in the back seat of a driverless car that I wouldn't do if there were a driver. (I mean, I understand that the idea might be that we wouldn't have to make small talk with a chatty driver, except that the image suggests something more . . . intimate.)

Aug 15 · Liked by Nancy Friedman

Robotaxis have a built-in algorithm that places their passengers' safety last in case of any accident involving other people.

Because of this, I would NEVER ride in a driverless vehicle!!

Aug 15 · Liked by Nancy Friedman

Hiya. Do you have a source for this info? I can't find an explicit statement anywhere about the choice of passenger vs. others' safety.

Aug 15 · Liked by Nancy Friedman

I guess I'm not getting a clear statement from these articles that SD cars have been programmed to, in effect, resolve the trolley problem; mostly they seem to outline the ethical dilemma and note that SD car programmers will have to deal with the issue.

"[I]n such cases, who is responsible for the death(s) caused? Is it the self-driving car manufacturer? Is it the software programmer? Or is it the car itself? There is no clear and right answer to this question. Thus, most people feel that accidents should happen naturally rather than letting a robot or software decide who’ll live and die."

On a separate but related issue: if an SD car is considerably less likely to get into a crash in the first place than a human-piloted one, does that factor into a societal weighing of the ethical issue? Right now, roughly 40,000 people a year die in crashes in the U.S.


Perhaps we are missing the proverbial elephant in the room. If prospective buyers (or passengers) knew that, in the case of an impending accident, the SD vehicle was programmed to prioritize the lives and safety of others over those of its passengers, then no one would buy, or consent to ride in, such a vehicle.

Aug 17 · Liked by Nancy Friedman

While the story, as usual, was beautifully written, my thought was about the third wheel. Beyond its origin, I have been all too aware of the times when I was said wheel and, sadly, never really found the need to jettison someone else from the event in question.

Aug 17 · Liked by Nancy Friedman

As a British reader, I am wowed by exotic tales of robotaxis and heartwarmed to read "play gooseberry" and "snogging" from an American writer. Well done.

Kevin

Aug 16 · Liked by Nancy Friedman

I had never heard of "playing gooseberry" before. Leave it to the Brits to come up with something MUCH more colorful than "third wheel."

Aug 15 · Liked by Nancy Friedman

Amazing information as always!

author

I goofed and neglected to include a link to this Waymo review by Harry McCracken in Fast Company:

https://www.fastcompany.com/91150764/google-waymo-one-self-driving-cars-san-francisco

A snippet:

"The closest thing to an exciting moment during my two trips today was a minor, only-in-San-Francisco traffic disruption. On the way back from my appointment, my Waymo encountered a stalled cable car. Its gripman was crouched in the street, apparently performing some maneuver to get it going again. The Waymo knew something was amiss, displayed a message telling me to remain seated and belted, and waited to proceed until the cable car was moving again. Faced with the same situation an Uber driver couldn’t have performed any better, and would probably have grumbled more."
