Cruisers Forum
 


Old 02-03-2021, 07:41   #676
Registered User
 
Mike OReilly's Avatar

Join Date: Sep 2011
Location: Good question
Boat: Rafiki 37
Posts: 14,417
Re: Science & Technology News

Quote:
Originally Posted by Dockhead View Post
I still say it's not only vanishingly rare in application, but besides that, it's a very straightforward, unchallenging policy question.
Again, rarity is irrelevant. And it is clearly not straightforward, as your proposed solution illustrates perfectly.

Quote:
Originally Posted by Dockhead View Post
In case the System is faced with a choice of preventing death of one or more persons vs some other person or persons, the choice should be made in a way which maximizes the Quality Life Years preserved, to the extent that the System is able to determine that."
Without attempting to drag this into a philosophical quagmire as to how one measures such a term, this perspective is merely one of many. I doubt you'd find agreement with this approach with even a small group of thinkers or policy makers.

Quote:
Originally Posted by Dockhead View Post
Should be put to the legislature as a matter of public policy. Someone will propose the traditional prioritization of adult female lives over adult male lives, and not just youth before age (as is inherent in the proposal above). Someone will propose some limit on the exclusion of property damage from consideration. Someone will propose considering every life to be equal, and dropping the Quality Life Years bit. The legislators will discuss, argue, vote, and decide something or another. A law will be passed. The law will be implemented in the programming. Et voila.

Why does this seem to you to be so mysterious or such a big challenge? It's an entirely banal and everyday policy question.
If it's banal, why doesn't it already exist? Where in law does it state that when firemen rush into a burning building they must prioritize young over old, or women over men? And how does one even begin to calculate Quality Life Years?

We all make these kinds of assessments in the moment, and few legal or ethical judges question those choices made by actual humans. But in the new ethical world of autonomous, non-human actors, we must decide a priori who lives and who dies. This forces us to explicitly value some humans over others.

We've been down this dark ethical road many times, and are still there in many ways. We do value certain lives over others. Are we to codify that the rich are more valuable, so should be prioritized by our autonomous vehicles over the poor? How about white over black? Again, society already does this. Certainly men over women...

Again, whose ethics?

I am truly surprised you are so quick to dismiss this obviously significant aspect of the whole autonomous vehicle development. And remember, I speak as one who believes they will be a net benefit for society.
__________________
Why go fast, when you can go slow.
BLOG: www.helplink.com/CLAFC
Mike OReilly is offline  
Old 02-03-2021, 10:10   #677
Moderator
 
Dockhead's Avatar

Cruisers Forum Supporter

Join Date: Mar 2009
Location: Denmark (Winter), Cruising North Sea and Baltic (Summer)
Boat: Cutter-Rigged Moody 54
Posts: 34,563
Re: Science & Technology News

Quote:
Originally Posted by Mike OReilly View Post
Again, rarity is irrelevant. And it is clearly not straightforward, as your proposed solution illustrates perfectly.

Without attempting to drag this into a philosophical quagmire as to how one measures such a term, this perspective is merely one of many. I doubt you'd find agreement with this approach with even a small group of thinkers or policy makers.

If it's banal, why doesn't it already exist? Where in law does it state that when firemen rush into a burning building they must prioritize young over old, or women over men? And how does one even begin to calculate Quality Life Years?

We all make these kinds of assessments in the moment, and few legal or ethical judges question those choices made by actual humans. But in the new ethical world of autonomous, non-human actors, we must decide a priori who lives and who dies. This forces us to explicitly value some humans over others.

We've been down this dark ethical road many times, and are still there in many ways. We do value certain lives over others. Are we to codify that the rich are more valuable, so should be prioritized by our autonomous vehicles over the poor? How about white over black? Again, society already does this. Certainly men over women...

Again, whose ethics?

I am truly surprised you are so quick to dismiss this obviously significant aspect of the whole autonomous vehicle development. And remember, I speak as one who believes they will be a net benefit for society.
I'm struggling to see where you are seeing a big challenge here. You say yourself -- we already do it. Firemen choose this person to save rather than that person. It doesn't bother us that individual firemen decide based on their spur-of-the-moment inspiration of conscience. So why would it bother us for a machine to decide this based on a program embodying a consciously adopted set of priorities? We could pass a law instructing firemen to do it like this and this, rather than like that, but we don't bother precisely because this does not bother us much.

"Whose ethics"? Well, that's very simple, and was answered in my previous post -- the community's ethics, as determined by democratic processes, through legislation. If anyone ever even cares about this (I wouldn't be surprised if no one does), then the first to care will be the manufacturers of the system, who will be worried about getting sued. They will not want to make such decisions instead of the community; they will want a law passed so that they can program the system to act in accordance with what the community wants as expressed in law. Then they can't be sued over the choice which gets made.

There is a good discussion of the ethics of autonomous cars in the Stanford Encyclopedia of Philosophy, which very much accords with my views. See here: https://plato.stanford.edu/entries/ethics-ai/#AutoSyst.

The problem we have been talking about is commonplace in ethics classes; it's called the "Trolley Problem" -- originally posed by Philippa Foot in 1967: a runaway trolley will kill five people on one track, but can be diverted onto another track where it will kill only one. Sophie's choice.

The authors are doubtful that actual Trolley Problems are ever encountered in either human or autonomous driving, and doubt that they would pose any difficult ethical problems if they were. The authors say: "While autonomous car trolley problems have received a lot of media attention (Awad et al. 2018), they do not seem to offer anything new to either ethical theory or to the programming of autonomous vehicles."

The authors say (and I agree) that ethical problems connected with driving are almost exclusively problems of the personal interest vs. public good type, which is different from the Trolley Problem. Imposing risks on other people by speeding, risky overtaking, etc. That sort of thing. These serious issues are entirely solved by autonomous vehicles: "The vast majority of these are covered by legal regulations on driving. Programming the car to drive 'by the rules' rather than 'by the interest of the passengers' or 'to achieve maximum utility' is thus deflated to a standard problem of programming ethical machines (see section 2.9). There are probably additional discretionary rules of politeness and interesting questions on when to break the rules (Lin 2016), but again this seems to be more a case of applying standard considerations (rules vs. utility) to the case of autonomous vehicles."

In other words, the ethical problems connected with driving become banal programming tasks when we shift from human-driven to autonomous vehicles.
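To make that "deflation" concrete, here is a toy sketch of rules-first maneuver selection. Everything in it is invented for illustration (the rule set, the field names, the numbers); no real autonomous-vehicle stack is remotely this simple:

```python
# Hypothetical sketch of "drive by the rules": candidate maneuvers are
# filtered by legal constraints first, and utility only enters among the
# legal options. All names and numbers are illustrative.

SPEED_LIMIT = 50  # km/h, assumed posted limit

def legal(maneuver):
    """A maneuver is legal if it stays within the rules of the road."""
    return maneuver["speed"] <= SPEED_LIMIT and not maneuver["crosses_solid_line"]

def choose_maneuver(candidates):
    legal_options = [m for m in candidates if legal(m)]
    if not legal_options:
        # No legal option left: fall back to minimizing expected harm.
        return min(candidates, key=lambda m: m["expected_harm"])
    # Among legal options, prefer least expected harm; break ties by
    # passenger utility (comfort, progress, etc.).
    return min(legal_options, key=lambda m: (m["expected_harm"], -m["utility"]))

candidates = [
    {"name": "speed up", "speed": 65, "crosses_solid_line": False,
     "expected_harm": 0.1, "utility": 0.9},
    {"name": "brake",    "speed": 30, "crosses_solid_line": False,
     "expected_harm": 0.0, "utility": 0.4},
    {"name": "swerve",   "speed": 45, "crosses_solid_line": True,
     "expected_harm": 0.0, "utility": 0.6},
]
print(choose_maneuver(candidates)["name"])  # "brake": the only legal option
```

The point of the structure is that legality is a hard filter and utility only breaks ties among legal options -- the "by the rules rather than by the interest of the passengers" priority the quoted passage describes.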

I have a degree in Philosophy and would love to see an interesting ethical problem here, but there isn't one.
__________________
"You sea! I resign myself to you also . . . . I guess what you mean,
I behold from the beach your crooked inviting fingers,
I believe you refuse to go back without feeling of me;
We must have a turn together . . . . I undress . . . . hurry me out of sight of the land,
Cushion me soft . . . . rock me in billowy drowse,
Dash me with amorous wet . . . . I can repay you."
Walt Whitman
Dockhead is offline  
Old 02-03-2021, 10:26   #678
Registered User
 
Mike OReilly's Avatar

Join Date: Sep 2011
Location: Good question
Boat: Rafiki 37
Posts: 14,417
Re: Science & Technology News

Yes, of course this is the trolley problem (or problems). But you seem to misunderstand the point. They are used to illustrate exactly the issue I'm pushing at: that it's hard to find ethical common ground when questions like this are crystallized in this way.

I too am struggling as to why you fail to see this as an important issue. As I've already stated, the challenge is not that decisions are made. It's that inherent in any pre-programmed decision tree will be a valuation of human life that must rank some higher and some lower. Humans make these decisions in the moment. The law does not rank which life is more valuable, but that is exactly what you are asking society to decide.

I'm not saying we can't or won't do it. I'm saying it's a very different approach that has any number of slippery slope kind of themes.
Mike OReilly is offline  
Old 02-03-2021, 10:31   #679
Moderator
 
Dockhead's Avatar

Cruisers Forum Supporter

Join Date: Mar 2009
Location: Denmark (Winter), Cruising North Sea and Baltic (Summer)
Boat: Cutter-Rigged Moody 54
Posts: 34,563
Re: Science & Technology News

Sorry, I didn't address this one:

Quote:
Originally Posted by Mike OReilly View Post
. . . Without attempting to drag this into a philosophical quagmire as to how one measures such a term, this perspective is merely one of many. I doubt you'd find agreement with this approach with even a small group of thinkers or policy makers . . .
Again, you're way overcomplicating it.

Quality Life Years is one standard approach to triage, and measuring it is a practical rather than a philosophical question; there are different ways to measure it. It's one way to approach this.
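For concreteness: in health economics this measure is usually called the quality-adjusted life year (QALY), remaining life expectancy weighted by a 0-to-1 quality factor. A minimal sketch of the maximization rule being proposed, with entirely invented numbers:

```python
# Toy QALY-maximizing choice rule (all figures invented for illustration).
# QALY = expected remaining years x quality weight in [0, 1].

def qalys(person):
    return person["years_remaining"] * person["quality_weight"]

def choose_group_to_save(groups):
    """Pick the group whose preserved QALYs would be greatest."""
    return max(groups, key=lambda group: sum(qalys(p) for p in group))

group_a = [{"years_remaining": 60, "quality_weight": 0.9}]   # one healthy child
group_b = [{"years_remaining": 8, "quality_weight": 0.5},    # two frail elders
           {"years_remaining": 10, "quality_weight": 0.6}]
saved = choose_group_to_save([group_a, group_b])
# group_a preserves 54 QALYs vs. 10 for group_b, so the rule saves the child
```

Whether those quality weights, or any weighting at all, are legitimate is of course exactly the policy question for the legislature; the code only shows how mechanical the rule becomes once the weights are decided.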

I specifically mentioned that different legislators will have very different ideas, and even suggested some of the variants:

Quote:
Originally Posted by Dockhead View Post
. . . Someone will propose the traditional prioritization of adult female lives over adult male lives, and not just youth before age (as is inherent in the proposal above). Someone will propose some limit on the exclusion of property damage from consideration. Someone will propose considering every life to be equal, and dropping the Quality Life Years bit. The legislators will discuss, argue, vote, and decide something or another. A law will be passed. The law will be implemented in the programming. Et voila . . .
Why is this a problem? This is like any policy question, which will be solved (if anyone even bothers to care about it) like any other policy question.
__________________
"You sea! I resign myself to you also . . . . I guess what you mean,
I behold from the beach your crooked inviting fingers,
I believe you refuse to go back without feeling of me;
We must have a turn together . . . . I undress . . . . hurry me out of sight of the land,
Cushion me soft . . . . rock me in billowy drowse,
Dash me with amorous wet . . . . I can repay you."
Walt Whitman
Dockhead is offline  
Old 02-03-2021, 10:52   #680
Registered User
 
Tayana42's Avatar

Join Date: Nov 2012
Location: Long Beach, CA
Boat: Tayana Vancouver 42
Posts: 2,804
Re: Science & Technology News

I find it interesting that people in general are bothered by the idea of anyone predetermining decisions of who lives and who dies. The slippery slopes of discrimination, eugenics, racism, ageism rear up almost unspoken in this discussion. But, is a thoughtful ethical set of questions and answers worse than no one having the courage to ask and answer the questions? Is it a problem for AI to take a split second action but not a problem for a human driver to make (or not make) a split second decision?
Tayana42 is offline  
Old 02-03-2021, 10:55   #681
Moderator
 
Dockhead's Avatar

Cruisers Forum Supporter

Join Date: Mar 2009
Location: Denmark (Winter), Cruising North Sea and Baltic (Summer)
Boat: Cutter-Rigged Moody 54
Posts: 34,563
Re: Science & Technology News

Quote:
Originally Posted by Mike OReilly View Post
Yes, of course this is the trolley problem(s). But you seem to misunderstand the point. They are used to illustrate exactly the issue I'm pushing at; that it's hard to find ethical common ground when questions like this are crystalized in this way.

I too am struggling as to why you fail to see this as an important issue. As I've already stated, the challenge is not that decisions are made. It's that inherent in any pre-programmed decision tree will be a valuation of human life that must rank some higher and some lower. Humans make these decisions in the moment. The law does not differentiate or value which life is more valuable, but that is exactly what you are asking society to decide.

I'm not saying we can't or won't do it. I'm saying it's a very different approach that has any number of slippery slope kind of themes.
Where are the slippery slopes? Why is it any harder to find "ethical common ground" than it is with any everyday policy question?

We as a society make decisions like this every day. Every time a legislature goes into session it makes decisions which will kill some people, and benefit others, will put companies out of business, will save people from poverty, throw people into poverty (which kills millions of people), give people access to health care, take access to health care away, etc. etc. etc. This is a tiny banal thing compared to things legislatures do every single day. Of course there are all kinds of different points of view -- so what? That's why God invented legislatures. If there weren't all kinds of different points of view then we wouldn't need them.

Why in the world is it somehow OK because "humans make these decisions in the moment?" That's actually much worse -- it means these decisions are made according to random individual impulses. Once you have the system being run by a computer instead of random individuals, then you can introduce system into it and you can make a policy which is ideally implemented. To the extent this WERE an actual problem (it's not), then automating the process would be a huge advantage (and it is a huge advantage that you eliminate dangerous driving, which is the actual important ethical issue with driving, not any trolley problem).

If trolley problems and driving WERE an actual problem that anyone cared about, then we would already have legislation instructing drivers what to do when faced with a trolley problem. We don't because no one cares -- it is so exceptionally rare (which is not irrelevant), plus we are not so much bothered at drivers making their own decisions. Since there is no private interest vs public good question here we are comfortable with drivers following their own conscience. When computers do it, there should be a policy.

I think you've been dazzled and blinded a bit by the presence of computers or AI in this. AI in general does present a number of complex ethical questions in different areas (read the Stanford article I linked to), but this is not one of them. There is no ethical dimension to AI whatsoever here -- with self-driving cars, all the ethical issues are decided by humans, in the programming. Whatever priorities we humans as a community decide are faithfully translated into action by the system. So the system, the technology, has nothing to do with it, other than being a mechanism which will perfectly fulfill our will. How we formulate our will in this case is just the most ordinary policymaking process you can imagine.
__________________
"You sea! I resign myself to you also . . . . I guess what you mean,
I behold from the beach your crooked inviting fingers,
I believe you refuse to go back without feeling of me;
We must have a turn together . . . . I undress . . . . hurry me out of sight of the land,
Cushion me soft . . . . rock me in billowy drowse,
Dash me with amorous wet . . . . I can repay you."
Walt Whitman
Dockhead is offline  
Old 02-03-2021, 11:25   #682
Registered User

Join Date: May 2011
Location: Lake Ont
Posts: 8,561
Re: Science & Technology News

Quote:
Originally Posted by Dockhead View Post
Where are the slippery slopes? Why is it any harder to find "ethical common ground" than it is with any everyday policy question?
...
AI in general does present a number of complex ethical questions in different areas (read the Stanford article I linked to), but this is not one of them. There is no ethical dimension to AI whatsoever here -- with self-driving cars, all the ethical issues are decided by humans, in the programming. Whatever priorities we humans as a community decide are faithfully translated into action by the system.
Whoa, stop. At the current state of self-driving cars... the vehicle "AI" cannot yet distinguish between a senior citizen and a slow-moving young adult. Or between a baby carriage and a buggy of groceries. Or a stroller with a baby and a stroller with a Shih Tzu. An AI didn't stop for a woman wheeling a bike across the road; it just hit her.

So yes there's a honking big ethical issue about deploying technology that can't adequately recognize what's in its path, let alone be programmed to act completely in accord with our standards of ethical behaviour.
Quote:
So the system, the technology, has nothing to do with it, other than being a mechanism which will perfectly fulfill our will. How we formulate our will in this case is just the most ordinary policymaking process you can imagine.
You do know how corny that sounded, right? Paging Dr Asimov...
Lake-Effect is offline  
Old 02-03-2021, 12:49   #683
Senior Cruiser
 
newhaul's Avatar

Cruisers Forum Supporter

Join Date: Sep 2014
Location: puget sound washington
Boat: 1968 Islander bahama 24 hull 182, 1963 columbia 29 defender. hull # 60
Posts: 12,245
Re: Science & Technology News

New info on LNG fueled vessels

Pacific Maritime Magazine Online: Six New LNG-powered Vessels to be Dedicated by CMA CGM
__________________
Non illigitamus carborundum
newhaul is offline  
Old 02-03-2021, 12:53   #684
Registered User
 
Mike OReilly's Avatar

Join Date: Sep 2011
Location: Good question
Boat: Rafiki 37
Posts: 14,417
Re: Science & Technology News

DH, I think you are missing the fundamentally different perspective that autonomous vehicles introduce into the ethical mix. I've tried to explain it, but obviously we're missing each other here. The issue is not that the decisions are made, but that we must rank human life. This is something free societies have overtly avoided doing, yet it is demanded here.

The reason trolley problems are not codified into law is that there is no single right answer. Ten people will give 12 different answers. This is why it is a critical tool in philosophy, and in human psychology. Experiments have been done to show how inconsistent humans can be when faced with these questions. So again, whose ethics or approach will be programmed into automated vehicles? Yours? Mine?

You keep claiming these are inconsequential questions, yet it could mean a vehicle that will sacrifice your 8-year-old daughter in certain circumstances. Few human drivers would choose this, yet it could easily be the right answer in your Quality Life Years analysis.

In fact, this kind of question was tested here: https://www.scienceintheclassroom.or...f-driving-cars

Quote:
In three situations: (i) when it swerved into a pedestrian to save 10 people, (ii) when it killed its own passenger to save 10 people, and (iii) when it swerved into a pedestrian to save just one other pedestrian. The algorithm that swerved into one to save 10 always received many points, and the algorithm that swerved into one to save one always received few points. The algorithm that would kill its passenger to save 10 presented a hybrid profile.

Like the high-valued algorithm, it received high marks for morality and was considered a good algorithm for other people to have. But in terms of purchase intention, it received significantly fewer points than the high-valued algorithm (P < 0.001) and was, in fact, closer to the low-valued algorithms.

Once more, it appears that people praise utilitarian, self-sacrificing AVs and welcome them on the road, without actually wanting to buy one for themselves.
And yes, I've read your paper. It does, as you say, parallel your own perspective. Thankfully, there is a lot more written on the subject, including the one above, and this other Stanford discussion paper which is not so quickly dismissive of the issues:

https://www.gsb.stanford.edu/insight...f-driving-cars

I don't think there's much point in carrying this discussion on. Clearly we have very different perspectives and appreciations for the very value of the question.
Mike OReilly is offline  
Old 02-03-2021, 22:53   #685
CF Adviser
 
Pelagic's Avatar

Join Date: Oct 2007
Boat: Van Helleman Schooner 65ft StarGazer
Posts: 10,280
Re: Science & Technology News

Mike, I believe that Utilitarianism will prevail once autonomous transport reduces fatalities by 90%

It will be seen as a net gain and whatever problems remain will be seen as engineering problems, not ethical.
Pelagic is offline  
Old 02-03-2021, 23:50   #686
Registered User
 
StuM's Avatar

Cruisers Forum Supporter

Join Date: Nov 2013
Location: Port Moresby,Papua New Guinea
Boat: FP Belize Maestro 43 and OPBs
Posts: 12,891
Re: Science & Technology News

Quote:
Originally Posted by Dockhead View Post
"Whose ethics"? Well, that's very simple, and was answered in my previous post -- the community's ethics, as determined by democratic processes, through legislation.
So avoidance systems have to be programmed on a national/regional basis?
Avoidance priority:
In country X: keffiyeh, burqa, bare head?
In one particular US State: red cap, cap on backwards, long hair, turban, keffiyeh

...
StuM is offline  
Old 03-03-2021, 03:35   #687
Moderator
 
Dockhead's Avatar

Cruisers Forum Supporter

Join Date: Mar 2009
Location: Denmark (Winter), Cruising North Sea and Baltic (Summer)
Boat: Cutter-Rigged Moody 54
Posts: 34,563
Re: Science & Technology News

Quote:
Originally Posted by Mike OReilly View Post
DH, I think you are missing the fundamentally different perspective that autonomous vehicles introduce into the ethical mix. I've tried to explain it, but obviously we're missing each other here. The issue is not that the decisions are made, but that we must rank human life. This is something free societies have overtly avoided doing, yet it is demanded here.

Ah! So that's the issue. Only now did I understand your point of view.


OK, fair enough. But why do you think that "free societies have overtly avoided [ranking human life]"? I didn't get you because it didn't occur to me that this is what you might be having a problem with.


Don't we rank human life all the time? For example, people die every day, and many of them, because a decision is made in hospitals to spend ICU resources on this person rather than that person, or to do this costly procedure on this person rather than that person. There is a whole library of books written on this. This is a common, even everyday occurrence (unlike vehicle trolley problems which are vanishingly rare), and it's worse than the vehicle trolley problem because not only is it ranking one person's life vs another, but it's also ranking one person's life against some quantity of dollars. But we do it every day -- society can't function without doing this. Triage was particularly visible, and on a mass scale, during the worst waves of the pandemic in 2020, when tens or hundreds of thousands of people died because resources were withheld from critical COVID patients who either had a worse prognosis compared to others needing the same resources, or because they were older or frailer with fewer Quality Life Years at stake than another patient. Some countries altogether refused to send people over a certain age to ICUs.


We also do triage on the wounded in battle, choosing some to save and others to die based on different criteria, and likewise on the victims of plane crashes, etc.


Triage in medical situations is just as coldly utilitarian, and the process is mostly based on maximizing the saving of human life -- so that resources are allocated where they have the best chance of saving a life. There is some controversy about whether life-years saved should be considered (the majority view), or lives period without considering Quality Life Years involved (so not prioritizing young and healthy people). See: https://journals.lww.com/lww-medical..._Ignore.1.aspx. I alluded to this controversy in one of my previous posts.


Note that triage in medical situations often explicitly prioritizes patients with "instrumental value" -- research participants and health care workers. How do you like that for "ranking human lives"?
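The triage logic sketched in the last few paragraphs reduces to a short scoring rule. This is a toy model, not any actual hospital protocol (the bonus value in particular is invented): score each patient by the survival benefit of treatment, optionally add an "instrumental value" bonus for health-care workers, and give the scarce resource to the top scorers.

```python
# Toy triage allocator (illustrative only; the bonus value is invented).

def triage_score(patient, instrumental_bonus=0.1):
    """Survival benefit of treatment, plus a bonus for instrumental value."""
    benefit = patient["p_survive_treated"] - patient["p_survive_untreated"]
    if patient.get("healthcare_worker"):
        benefit += instrumental_bonus
    return benefit

def allocate(patients, ventilators):
    """Give the scarce ventilators to the highest-scoring patients."""
    ranked = sorted(patients, key=triage_score, reverse=True)
    return ranked[:ventilators]

patients = [
    {"name": "A", "p_survive_treated": 0.9, "p_survive_untreated": 0.2},
    {"name": "B", "p_survive_treated": 0.4, "p_survive_untreated": 0.3},
    {"name": "C", "p_survive_treated": 0.6, "p_survive_untreated": 0.1,
     "healthcare_worker": True},
]
print([p["name"] for p in allocate(patients, 2)])  # ['A', 'C']
```

Note how patient C outranks B only because of the instrumental-value bonus; deciding whether such a bonus belongs in the rule at all is precisely the kind of policy choice under discussion.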



In battle, saving lives is not the primary utilitarian goal, but rather maintaining effectiveness of the fighting unit -- conserving the unit's human resources. Since not every soldier is of equal value to the fighting effectiveness of the unit, triage in battle often considers rank or specialization of the wounded besides just the prognosis.


In hospitals, and in the whole medical system altogether, the rich, powerful, famous, and even the slightly wealthier than average get priority in the resources devoted to saving their lives, ahead of the poor or even average folks. That's a fact all over the world; it was true even in the Soviet Union (although nowhere to such an extreme degree as in the U.S.). That's also ranking of human lives, the uglier side of it, but it is ubiquitous. And even that is not always ugly -- imagine two critical cases are brought to the hospital, both requiring a ventilator. There is only one ventilator. The case which doesn't get it will die. One patient is Albert Einstein, the other is a convicted murderer from the prison. Would anyone even hesitate in that choice? Society as a whole does have interests beyond the individual (even I see that, and I'm a libertarian!!).



So if we are ranking human lives all the time, what is particularly troubling about vehicle trolley problems? I think that's where our points of view differ -- you find something inherently shocking about it, while I consider it an everyday and unavoidable occurrence which doesn't bother me much. And having nothing to do with technology.



With human-driven cars, in vehicle trolley problem situations the ranking of human life is done by a random stranger based on a split-second impulse. Is there something good about that? With automated vehicles, such situations will be resolved based on programming which was thought through beforehand and probably guided by legislation -- that is, a policy decision made by democratic means. Isn't that better, however you look at it?



Quote:
Originally Posted by Mike OReilly View Post
. . . The reason trolley problem are not codified into law is because there is no single right answer. Ten people will give 12 different answers.
Indeed not. Here is where you are going wrong. The very definition of a policy question is a question which does not have a single right answer. Practically every single thing decided by legislatures every day can be characterized like this. The whole reason why we HAVE laws and have legislatures is because we have to come up with single answers on a host of issues that people disagree about. Codifying the required response to a trolley problem would be the most banal and ordinary act of a legislature. We don't codify it because it has not been an issue, because it is vanishingly rare, because no one wants to think about it, and because no one is particularly disturbed by relying on the judgement of drivers.



Quote:
Originally Posted by Mike OReilly View Post
. . . This is why it is a critical tool in philosophy, and in human psychology. Experiments have been done to show how inconsistent humans can be when faced with these questions. So again, whose ethics or approach will be programmed into automated vehicles? Yours? Mine?

I answered that question several times already -- the community's. We have a process for formulating policy for questions like this despite the fact that different people disagree. Our society could not function without that process. It's either that, or a dictator. It's a different conversation how dysfunctional that process has become lately in some countries, but even in the U.S., laws are passed, policies are made.




Quote:
Originally Posted by Mike OReilly View Post
You keep claiming these are inconsequential questions, yet it could mean a vehicle that will sacrifice your 8-year-old daughter in certain circumstances. Few human drivers would choose this, yet it could easily be the right answer in your Quality Life Years analysis.

In fact, this kind of question was tested here: https://www.scienceintheclassroom.or...f-driving-cars



That's a very interesting paper -- thanks for the link. I read it twice.


This paper is indeed not about trolley problems, but about how people (there was a survey) would choose to sacrifice a greater number of strangers rather than themselves or their own passengers. People are in favor of other people having autonomous vehicles which would sacrifice their own passengers to save a greater number of strangers, but wouldn't want to ride in one themselves.


This is not a trolley problem, but a classical selfish benefit vs. greater good problem, so it's not really relevant to our discussion. Depending on how it's framed, you may also get a Prisoner's Dilemma. Do you have any doubt about what is the right answer to this question? Obviously, all such vehicles should be programmed to sacrifice their own passengers if it saves more other people. In fact, you could take the thought-experiment even further -- what if autonomous vehicles had self-destruct charges which would blow them up, if that were required to save a bus load of school children? Would it bother you to ride in a car with such a device?


I think every right thinking person would accept that a sacrifice of himself AND his 8-year old daughter in order to save, say, a bus load of school children is right and good, and would be glad if the entire fleet of vehicles in the nation could be programmed to provide that. Certainly I would, and I don't really doubt that one could get a law to that effect through even our dysfunctional Congress. But again, this is not relevant to our discussion -- it's a different problem.


Quote:
Originally Posted by Mike OReilly View Post
And yes, I've read your paper. It does, as you say, parallel your own perspective. Thankfully, there is a lot more written on the subject, including the one above, and this other Stanford discussion paper which is not so quickly dismissive of the issues:

https://www.gsb.stanford.edu/insight...f-driving-cars


There is nothing of substance in that article, which is from Stanford Graduate School of Business. It just notes the questions which have to be thought through. Nothing to disagree with here. Although the trolley problem is mentioned, it mostly focuses on the (much more important) selfish interest vs. public good question -- are people OK riding in vehicles which would sacrifice themselves, or much more significantly, their 7-year-old daughters, in order to save a bunch of other people?



Quote:
Originally Posted by Mike OReilly View Post
I don't think there's much point in carrying on this discussion. Clearly we have very different perspectives on, and appreciation of, the very value of the question.

I would be sorry to discontinue it. I value the able challenge to my views and have enjoyed being forced to think these things through again. When I was a philosophy student, I had three main interests, and ethics (particularly Kant's ethics) was one of them. I think Kant actually gives us a very good way to think about all this -- the Categorical Imperative. In this question it has consequences similar to those of Utilitarianism (John Stuart Mill, etc.), but it is not Utilitarianism. Utilitarianism is focused exclusively on results; Kant, on the contrary, is focused on the good life and the good society -- he comes at it from the opposite side.



For Kant, a moral act is one whose maxim you could will to become a universal law. So although you might have an impulse to want your AV programmed to sacrifice a bunch of strangers rather than your 7-year-old daughter, you could never wish that all vehicles were programmed like that -- just think of your 7-year-old daughter in a school bus, in traffic comprised of vehicles programmed to destroy the bus rather than sacrifice their own occupants. So the Categorical Imperative forbids you from programming your own vehicle like that (or hacking the vehicle you are riding in so that it will behave like that).


Shared Autonomous Vehicles give us a chance to overcome this problem by having all vehicles programmed with the same consistent and universal values, which we can debate publicly, argue about, and finally adopt as policy through democratic processes. Who wouldn't feel better riding in a vehicle knowing that every vehicle acts according to such a program -- even with self-destruct charges built in? This is an opportunity, not a challenge. The exact ranking of priorities in that program is a minor detail, and a completely normal policy question which can be worked out in exactly the same way as we set policy on other issues.
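As a sketch of what "one universal, legislated program" might look like in code -- the criteria, names, and numbers below are illustrative assumptions, not a proposal -- a single ranked policy could be adopted once through the democratic process and then applied identically by every vehicle:

```python
# Hypothetical sketch of a single legislated collision policy applied
# uniformly by all vehicles. Candidate outcomes are compared
# lexicographically: first by lives preserved, then (only as a
# tie-breaker) by estimated quality life years. Illustrative only.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    lives_preserved: int
    quality_life_years: float  # to the extent the system can estimate it

def legislated_choice(outcomes: list[Outcome]) -> Outcome:
    """Return the outcome the (hypothetical) uniform policy mandates."""
    return max(outcomes, key=lambda o: (o.lives_preserved, o.quality_life_years))

choice = legislated_choice([
    Outcome("swerve, sacrificing 2 occupants", 30, 1800.0),
    Outcome("brake straight, striking the bus", 2, 90.0),
])
print(choice.description)  # swerve, sacrificing 2 occupants
```

The point is not this particular ranking -- the legislature might drop quality life years entirely, as suggested above -- but that the ranking lives in one public, debatable place rather than in each manufacturer's private code.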
__________________
"You sea! I resign myself to you also . . . . I guess what you mean,
I behold from the beach your crooked inviting fingers,
I believe you refuse to go back without feeling of me;
We must have a turn together . . . . I undress . . . . hurry me out of sight of the land,
Cushion me soft . . . . rock me in billowy drowse,
Dash me with amorous wet . . . . I can repay you."
Walt Whitman
Dockhead is offline  
Old 03-03-2021, 06:58   #688
Registered User
 
AKA-None's Avatar

Join Date: Oct 2013
Location: Lake City MN
Boat: C&C 27 Mk III
Posts: 2,647
Re: Science & Technology News

Sounds like a discussion on redeveloping Asimov's Three Laws of Robotics
__________________
Special knowledge can be a terrible disadvantage if it leads you too far along a path that you cannot explain anymore.
Frank Herbert 'Dune'
AKA-None is offline  
Old 03-03-2021, 07:46   #689
Registered User
 
Mike OReilly's Avatar

Join Date: Sep 2011
Location: Good question
Boat: Rafiki 37
Posts: 14,417
Re: Science & Technology News

Quote:
Originally Posted by Pelagic View Post
Mike, I believe that Utilitarianism will prevail once autonomous transport reduces fatalities by 90%

It will be seen as a net gain, and whatever problems remain will be seen as engineering problems, not ethical ones.
Yes, I believe that's what people are saying: that some version of utilitarianism will drive the choices. My point, though, is that:

#1. Based on actual psychological research, few of us are purely utilitarian when it comes to hard choices. Even when we claim otherwise, testing shows that many of us make different choices in real life.

#2. If we embed autonomous vehicle (AV) algorithms with actual community standards of who is most important, we'd have to replicate existing, highly inequitable realities. If community standards are the guide, then rich white people would be prioritized over poor brown people. Men would be prioritized over women. And we'd always prioritize ourselves, and those within our immediate group of family and friends, over strangers.

Free societies do not codify this kind of inequity, but reality shows it is a fact. We like to pretend "all men are created equal," but actual community standards show this is not how we treat people.

Quote:
Originally Posted by StuM View Post
So avoidance systems have to be programmed on a national/regional basis?
Avoidance priority:
In country X: keffiyeh, burqa, bare head?
In one particular US state: red cap, cap on backwards, long hair, turban, keffiyeh
I know you're having a bit of a laugh here, but I think this crystallizes the problem with claiming to apply community standards. Which community? Different communities have different standards. Are the cars going to change their prioritization algorithms as they move along?

Or, more likely, are we going to impose one standard over all communities, regardless of what they would want.

Dockhead, I appreciate your thoughts and engagement here. Your paper is just too long to manage in this format. I'll try to address some points, but my understanding is that you are claiming it is simply a matter of applying some version of a utilitarian approach. My questions about this are as above.

The triage and military analogies are good ones. There is guidance and wisdom here. But both are examples of crisis management in a situation where there are no good answers.

Speaking from my direct experience and knowledge of the Canadian healthcare system, crisis triage situations (which are exceedingly rare) would be based on odds of lives saved. Bank accounts and potential contributions to society are not explicitly considered to my knowledge. However, it would be foolish to think the system treats every life as equal.

The challenge with your Einstein scenario is not when both he and the inmate have equal potential outcomes, but when the inmate clearly has a higher likelihood of surviving and thriving than an Einstein. If we follow your logic, Einstein is prioritized, but that is NEVER how medical triage is currently structured. Life is prioritized (in theory), no matter whose life.

What about the same scenario, except one is an investment banker and the other a homeless person, with equal potential health outcomes. Who gets the ventilator? I assume your utility-driven algorithm would favour the investment banker.

Our systems are designed to avoid getting to this point at all costs, because we don't have good ways to make these choices. That's the whole point of our pandemic responses -- to avoid pushing the healthcare systems to where they'd have to make these choices.

If our AVs are programmed with collision parameters based solely on lives saved, including the sacrifice of the driver and their family, then, as the paper you read twice (thanks) says, a significant number of us will want others to buy these cars. BUT many of us would not buy the car ourselves, because we don't want to be sacrificed. Do we override this by forcing people to accept an outcome (being sacrificed) that they would not accept if it were their choice?


P.S. And now I've written another book-length response. Sorry.

I really do appreciate the thoughtful discussion. I guess my summary statement would be that I don't think the questions are simple. I'm not trying to claim I have the answers, but in some sense I'm wary of our getting what we ask for: if we program AVs based on some version of utilitarian outcomes, then actual humans may not like those actual outcomes.
__________________
Why go fast, when you can go slow.
BLOG: www.helplink.com/CLAFC
Mike OReilly is offline  
Old 03-03-2021, 08:13   #690
Senior Cruiser
 
newhaul's Avatar

Cruisers Forum Supporter

Join Date: Sep 2014
Location: puget sound washington
Boat: 1968 Islander bahama 24 hull 182, 1963 columbia 29 defender. hull # 60
Posts: 12,245
Re: Science & Technology News

Be careful what you wish for .
https://youtu.be/RvRZogigTtQ
__________________
Non illigitamus carborundum
newhaul is offline  