Cruisers Forum
 

Old 14-02-2021, 19:36   #121
Registered User
 

Join Date: Sep 2011
Location: Good question
Boat: Rafiki 37
Posts: 14,455
Re: Addressing Misinformation and Harmful Content Online

I appreciate the 'forfeiting of common carrier rights' argument, but the shield law does not create a responsibility or duty to carry all messages. The point of the law is to hold the carrier blameless for content it can't reasonably control.

This doesn't mean that carriers can't or shouldn't limit access to their service when deemed necessary by law, or by their own standards. These are private companies making decisions that they deem beneficial to themselves. Their ideology is not politics; it's money. This is why I say the answer is to be found in shifting the economic incentives.
__________________
Why go fast, when you can go slow.
BLOG: www.helplink.com/CLAFC
Mike OReilly is offline  
Old 15-02-2021, 08:57   #122
Registered User
 

Join Date: Apr 2004
Location: Southern Maine
Boat: Prairie 36 Coastal Cruiser
Posts: 3,258
Re: Addressing Misinformation and Harmful Content Online

Quote:
Originally Posted by Mike OReilly View Post
I appreciate the 'forfeiting of common carrier rights' argument, but the shield law does not create a responsibility or duty to carry all messages. The point of the law is to hold the carrier blameless for content it can't reasonably control.
I'm no lawyer, but the way I understood it, if they exercise no control, editing or moderation, then they can claim to be a common carrier.

If they start moderating and editing content, I thought they then could be held liable for that content. In other words, if a claimant could demonstrate that they could have blocked the damaging speech, and that they routinely did block other speech, it's hard to explain why they shouldn't be held accountable for what they do publish.

It's the difference between a telephone carrier, who can't easily, and doesn't normally, monitor or limit what's said on a private call, and a magazine, which chooses what to publish and publicly distribute. To me, social media is closer to the publisher. Especially when they use algorithms which decide what to "publish" broadly and what to bury.
CaptTom is offline  
Old 15-02-2021, 09:08   #123
Registered User
 

Join Date: Sep 2011
Location: Good question
Boat: Rafiki 37
Posts: 14,455
Re: Addressing Misinformation and Harmful Content Online

Quote:
Originally Posted by CaptTom View Post
I'm no lawyer, but the way I understood it, if they exercise no control, editing or moderation, then they can claim to be a common carrier.

If they start moderating and editing content, I thought they then could be held liable for that content. In other words, if a claimant could demonstrate that they could have blocked the damaging speech, and that they routinely did block other speech, it's hard to explain why they shouldn't be held accountable for what they do publish.

It's the difference between a telephone carrier, who can't easily, and doesn't normally, monitor or limit what's said on a private call, and a magazine, which chooses what to publish and publicly distribute. To me, social media is closer to the publisher. Especially when they use algorithms which decide what to "publish" broadly and what to bury.
Yes... this is exactly the point of contention. It was the point of contention back when the law was first adopted. Many (including myself at the time) thought these tools were more like publishers than common carriers. But the "stifling innovation" argument won the day*, and hence we ended up with the laws we have.

(*And there are valid reasons to support this perspective as well.)

I'm no lawyer, though I have spent much of my career in the world of copyright and IP in general, mostly from the Canadian perspective. My interpretation of this law is that it doesn't require or demand that all content be carried without question or vetting. If that were so, then CF here couldn't curtail and restrict subjects either.

I am surprised to learn that Sec. 230 shields ads and other paid content. By the nature of the transaction, this is clearly not unknown content. It would certainly make sense to hold these companies responsible for the content of the PAID material they publish.
__________________
Why go fast, when you can go slow.
BLOG: www.helplink.com/CLAFC
Mike OReilly is offline  
Old 15-02-2021, 11:20   #124
Registered User

Join Date: Apr 2013
Posts: 11,004
Re: Addressing Misinformation and Harmful Content Online

Quote:
Originally Posted by Mike OReilly View Post
Yes... this is exactly the point of contention. It was the point of contention back when the law was first adopted. Many (including myself at the time) thought these tools were more like publishers than common carriers. But the "stifling innovation" argument won the day*, and hence we ended up with the laws we have.

(*And there are valid reasons to support this perspective as well.)

I'm no lawyer, though I have spent much of my career in the world of copyright and IP in general, mostly from the Canadian perspective. My interpretation of this law is that it doesn't require or demand that all content be carried without question or vetting. If that were so, then CF here couldn't curtail and restrict subjects either.

I am surprised to learn that Sec. 230 shields ads and other paid content. By the nature of the transaction, this is clearly not unknown content. It would certainly make sense to hold these companies responsible for the content of the PAID material they publish.
It's really more of a spectrum than a binary issue.

Stifling innovation is really separate from the common carrier issue (or at least a distinct aspect of it).

Common carrier status really turns on whether they are simply a conduit or are materially participating in the development and editing (and censoring) of the information.
- Now if they keep it simple and innocuous, it's pretty defensible that they are still a common carrier. Say they prohibit the use of swear words A, B & C. They can generally get away with the idea that they are still a conduit anyone can use to convey a message, since swear words are rarely necessary and, if the words are clearly delineated, there is no question about which words are not allowed.
- Cruisers Forum moves a little more into the gray area. If a new piece of boating-related legislation comes up, by default it has a political aspect, so it's much more of a tightrope walk over how far the conversation is allowed to delve into politics. If the rules aren't applied in an even-handed manner, the common carrier status is at risk. By just dipping their toes into it and generally discouraging political discussion, the risk is minimal (and for the most part the moderators do a pretty good job of letting it go as long as it's a peaceful discussion).
- The big boys like Facebook are at much greater risk when they vet what political statements can be made, kick individuals & groups out based on political affiliations, and decide which political ads to allow. Especially if they don't stick to a very even-handed approach to rejecting political speech, or demonstrate a pattern of supporting/opposing various political groups, they risk losing that common carrier status, because they have materially participated in the formation of the larger message going out on their site. They are free to control what goes out on their site, but they should lose the protections of being a common carrier if they censor groups they oppose.
- If you take it to an extreme where every post is reviewed and either edited or deleted when it doesn't meet the social media company's agenda, they are clearly not a common carrier and should be held responsible for what is posted, because they are material participants in generating that information.

So companies are allowed to question and vet, but the more they do, the weaker their common carrier status becomes.
valhalla360 is offline  
Old 15-02-2021, 11:42   #125
Registered User
 

Join Date: Sep 2011
Location: Good question
Boat: Rafiki 37
Posts: 14,455
Re: Addressing Misinformation and Harmful Content Online

I didn't realize I was presenting it as a binary issue. In fact, what I've said is that it's a rather nuanced challenge, which is what makes it so difficult.

The "stifling innovation" argument was explicitly made as a reason why these companies needed to be shielded from responsibility back when the laws were first enacted. So it's not a separate issue. I'm sure they will make the same arguments again, and not without some validity.

Agreed regarding the danger of making choices based on politics. Of course, FB et al. make it clear they are NOT doing that. And I've seen no convincing evidence that says they are. What they are doing is enforcing their own codes of conduct, just as CF does.

One can (and I'm sure you will) quibble about unequal enforcement of their conduct rules. I'm not interested in that kind of inevitably biased discussion. The evidence I see is that they are driven by economics and profit-maximizing (as all companies are), which is why I keep looking for market solutions to the problem.
__________________
Why go fast, when you can go slow.
BLOG: www.helplink.com/CLAFC
Mike OReilly is offline  
Old 15-02-2021, 12:09   #126
Registered User

Join Date: Apr 2013
Posts: 11,004
Re: Addressing Misinformation and Harmful Content Online

Quote:
Originally Posted by Mike OReilly View Post
I didn't realize I was presenting it as a binary issue. In fact, what I've said is that it's a rather nuanced challenge, which is what makes it so difficult.

The "stifling innovation" argument was explicitly made as a reason why these companies needed to be shielded from responsibility back when the laws were first enacted. So it's not a separate issue. I'm sure they will make the same arguments again, and not without some validity.

Agreed regarding the danger of making choices based on politics. Of course, FB et al. make it clear they are NOT doing that. And I've seen no convincing evidence that says they are. What they are doing is enforcing their own codes of conduct, just as CF does.

One can (and I'm sure you will) quibble about unequal enforcement of their conduct rules. I'm not interested in that kind of inevitably biased discussion. Evidence I see is that they are driven by economics and profit-maximizing (as all companies are). Which is why I keep looking for market solutions to the problem.
Yes, stifling innovation was made as an argument... but it really has little or nothing to do with being a common carrier. That was more of a wait-and-see argument: hold off before making a final determination.

CF can get away with a lot more because it is not a large national/international social media site, there are many boating-related forums, and other than directly boating-related subjects, it largely stays out of politics.

FB on the other hand is a quasi-monopoly in its segment (which is a rather large segment). They have been known to buy out new competitors to maintain that monopoly. They have established, publicly acknowledged political agendas. So if they want to claim common carrier status, they have to be very careful about staying neutral on what people are allowed to post. They claim that they don't play games, but this is different from a criminal trial where proof beyond a reasonable doubt is required.

Unequal enforcement of the rules is at the core of whether they can maintain their common carrier status. If the rules are unequally enforced, they aren't providing the same access to the commons. Likewise, if the rules themselves are tilted in a biased manner, they don't qualify. You can argue whether they have or have not done this, but it can't be ignored in the discussion of common carrier status.

This all goes back to the statement that companies can post/limit what they want... and they can, but not if they are claiming to be a common carrier. Then the expectation is that they maintain a level of impartiality and act primarily as a conduit, with minimal intrusion into the messages. Of course, it's well established that they use algorithms that respond to prior posts and searches, directing new results based on that. That by itself is questionable: they are directing the message that gets out, and therefore should be at least partially responsible for the message.
valhalla360 is offline  
Old 15-02-2021, 12:22   #127
Registered User

Join Date: May 2011
Location: Lake Ont
Posts: 8,565
Re: Addressing Misinformation and Harmful Content Online

Quote:
Originally Posted by valhalla360 View Post
Stifling innovation is really separate from the common carrier issue (or at least a distinct aspect of it).

Common carrier status really turns on whether they are simply a conduit or are materially participating in the development and editing (and censoring) of the information.
One of the big problems is that we still haven't decided what something like Facebook is. They don't neatly fit into either category of common carrier or publisher.

A better single-term description of what they are might be "enabler". Facebook created a "walled garden": a safer and somewhat curated version of the WWW where it was easy for anyone to put up a record they wanted to share with family, friends and others, and equally easy for them to find, within the garden, the postings of their friends. It also lets them get and send hints about people and content they might be interested in following, or vice versa... and hints from companies who have paid to be introduced to people who match a certain profile (oh, did we mention that the gardener has the right to get an idea of what you're into?)

This enablement has been a tremendous innovation that's greatly enhanced some aspects of our lives, and is the sort of innovation we don't want to stifle. Of course, the problematic parts of this enabler are the paid-introduction and profiling bits above:
  1. who's able to buy access to you
  2. the information-shaping power of the gardener's methods
I believe that people, through their governments, need to settle on a new definition for this type of service, and what rules they think should be in place for people's protection. Just telling companies to figure it out is lazy and dangerous, and we probably won't like what we end up with.
Lake-Effect is offline  
Old 15-02-2021, 12:51   #128
Registered User
 

Join Date: Sep 2011
Location: Good question
Boat: Rafiki 37
Posts: 14,455
Re: Addressing Misinformation and Harmful Content Online

Quote:
Originally Posted by valhalla360 View Post
...Unequal enforcement of the rules is at the core of whether they can maintain their common carrier status. If the rules are unequally enforced, they aren't providing the same access to the commons. Likewise, if the rules themselves are tilted in a biased manner, they don't qualify. You can argue whether they have or have not done this, but it can't be ignored in the discussion of common carrier status.
Agreed. They say they aren't enforcing things unequally, and the evidence I've seen supports that.

Quote:
Originally Posted by valhalla360 View Post
Of course, it's well established that they use algorithms that respond to prior posts and searches directing new results based on that. That by itself is questionable as they are directing the message that gets out and therefore should be at least partially responsible for the message.
Yes, this is part of the argument against viewing them as simple common carriers. I agree. To use the phone system as an example, it would not be acceptable if callers were subjected to targeted messages while waiting for the other person to pick up. Or worse, if you could only phone like-minded people. Yet this is what FB does.

Quote:
Originally Posted by Lake-Effect View Post
One of the big problems is that we still haven't decided what something like Facebook is. They don't neatly fit into either category of common carrier or publisher.
Probably true. We need a third category.

Quote:
Originally Posted by Lake-Effect View Post
A better single-term description of what they are might be "enabler". Facebook created a "walled garden": a safer and somewhat curated version of the WWW where it was easy for anyone to put up a record they wanted to share with family, friends and others, and equally easy for them to find, within the garden, the postings of their friends. It also lets them get and send hints about people and content they might be interested in following, or vice versa... and hints from companies who have paid to be introduced to people who match a certain profile (oh, did we mention that the gardener has the right to get an idea of what you're into?)
The problem is, there's a fine line between presenting new but innocuous content people might be interested in, and manipulating people with false information that may cause harm to society. And none of this happens in a vacuum either with FB or Google. Their real business is knowing more about you than you know about you, and using that knowledge to maximize their profits. This is what makes the question of legal liability protection so vital.

Quote:
Originally Posted by Lake-Effect View Post
This enablement has been a tremendous innovation that's greatly enhanced some aspects of our lives, and is the sort of innovation we don't want to stifle. Of course, the problematic parts of this enabler are the paid-introduction and profiling bits above
It is a tremendous innovation. I'd quibble as to whether it's actually enhanced our lives, but it is certainly an innovation.
__________________
Why go fast, when you can go slow.
BLOG: www.helplink.com/CLAFC
Mike OReilly is offline  
Old 15-02-2021, 13:22   #129
Registered User

Join Date: Nov 2013
Location: Slidell, La.
Boat: Morgan Classic 33
Posts: 2,845
Re: Addressing Misinformation and Harmful Content Online

Just let the corrupt and stupid corporate/capitalist system that created them 'take care' of the problem.


FTC Sues Facebook for Illegal Monopolization

https://www.ftc.gov/news-events/pres...monopolization

Using Antitrust Law To Address the Market Power of Platform Monopolies

https://www.americanprogress.org/iss...rm-monopolies/
jimbunyard is offline  
Old 15-02-2021, 13:23   #130
Registered User
 

Join Date: Apr 2004
Location: Southern Maine
Boat: Prairie 36 Coastal Cruiser
Posts: 3,258
Re: Addressing Misinformation and Harmful Content Online

Quote:
Originally Posted by Lake-Effect View Post
I believe that people, through their governments, need to settle on a new definition for this type of service, and what rules they think should be in place for people's protection.
Very true. You can look at social media as a public space; a "commons" that society has an interest in protecting and governing the use of. No one has a "right" to urinate into the town well.

Quote:
Originally Posted by Lake-Effect View Post
Just telling companies to figure it out is lazy and dangerous, and we probably won't like what we end up with.
Yet, that's exactly what happened. I was probably OK with that for a while, during the innovation phase. We didn't know what would emerge, and so were in no position to draft legislation which would quickly become obsolete.

We're in a better position now to see the pitfalls and potential for abuse. I'm not a fan of government regulation, but I'm not big on anarchy, either.
CaptTom is offline  
Old 15-02-2021, 14:20   #131
Registered User

Join Date: May 2011
Location: Lake Ont
Posts: 8,565
Re: Addressing Misinformation and Harmful Content Online

Quote:
Originally Posted by Mike OReilly View Post
To use the phone system as an example, it would not be acceptable if callers were subjected to targeted messages while waiting for the person to pick up.
For some stupid reason, I still have a copper landline. OK, I just remembered why: it's so I can receive stupid pitches for duct cleaning, help realtors keep tabs on me, give scammers easy access, or enjoy random silence on the other end.

So the phone company is a bigger a$$h0l3 than Facebook or Google, as far as I'm concerned. At least FB and Google aren't also dinging me ~$45 a month.
Quote:
Or worse, that you could only phone like-minded people. Yet this is what [FB] does.
You can search out specific people or subjects on FB, if you want. You're not locked to its whims. Like housepets, it can be trained.

Quote:
there's a fine line between presenting new but innocuous content people might be interested in, and manipulating people with false information that may cause harm to society.
There's NO line if the system has no ability to judge the veracity and/or malice of content. And that's what's missing when there are no guidelines for what such a system is permitted to do.

Maybe a tag system is required, where content gets flagged (automatically or otherwise) by category and other criteria (e.g. politics, environment, history, and so on), and users can set filters to accept or reject tags as well as people, groups, etc. It would also be possible to rate content by its "truth", but who makes the call? Manual filters might still result in echo chambers, but at least the person chose their chamber.
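As a toy illustration of that tag-and-filter idea (all names and data here are hypothetical, not any real platform's API), the core mechanics are just set operations over per-post tags:

```python
# Toy sketch of a tag-and-filter scheme: content carries category tags,
# and each user chooses which tags to accept or reject.

posts = [
    {"id": 1, "text": "New anchoring bylaw proposed", "tags": {"boating", "politics"}},
    {"id": 2, "text": "Refit photos of our Rafiki 37", "tags": {"boating"}},
    {"id": 3, "text": "Election hot takes",            "tags": {"politics"}},
]

def visible_posts(posts, reject_tags=frozenset(), accept_tags=None):
    """Keep posts carrying no rejected tag; if accept_tags is given,
    also require at least one accepted tag."""
    result = []
    for post in posts:
        if post["tags"] & reject_tags:      # any rejected tag hides the post
            continue
        if accept_tags is not None and not (post["tags"] & accept_tags):
            continue                        # opt-in filter: need a match
        result.append(post)
    return result

# A user who rejects "politics" sees only post 2.
print([p["id"] for p in visible_posts(posts, reject_tags={"politics"})])  # [2]
```

Who assigns the tags (and how honestly) is of course the hard part; the filtering itself is trivial.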
Lake-Effect is offline  
Old 15-02-2021, 14:28   #132
Registered User
 
Mike OReilly's Avatar

Join Date: Sep 2011
Location: Good question
Boat: Rafiki 37
Posts: 14,455
Re: Addressing Misinformation and Harmful Content Online

Quote:
Originally Posted by CaptTom View Post
Very true. You can look at social media as a public space; a "commons" that society has an interest in protecting and governing the use of. No-one has a "right" to urinate into the town well.
Problem is, FB is not the town well; it's not part of "the commons." It's more like a private well that the owner allows people to use as long as they follow his/her rules.

If it was a public good it would be easy. This is why some people argue FB and Google and all the rest should be nationalized, or at least treated as a quasi-controlled utility.

Although I see the attraction of this approach, I think it opens up too many dangers.

Quote:
Originally Posted by CaptTom View Post
We're in a better position now to see the pitfalls and potential for abuse. I'm not a fan of government regulation, but I'm not big on anarchy, either.
Me neither. The problem is, FB works so well because it taps into a basic part of the human psyche. It works because we all like to be told our views are correct. This is why the algorithms keep feeding us self-reinforcing messages. It keeps us engaged with the platform by creating the perfect echo-chamber for each of us.

Engagement is how they make their money, so naturally they're going to use the tools they know work. What else would anyone do?
__________________
Why go fast, when you can go slow.
BLOG: www.helplink.com/CLAFC
Mike OReilly is offline  
Old 15-02-2021, 14:42   #133
Registered User
 

Join Date: Sep 2011
Location: Good question
Boat: Rafiki 37
Posts: 14,455
Re: Addressing Misinformation and Harmful Content Online

Quote:
Originally Posted by Lake-Effect View Post
You can search out specific people or subjects on FB, if you want. You're not locked to its whims. Like housepets, it can be trained.
Yes, you can, but the system is designed to point you to people of its choosing.

Quote:
Originally Posted by Lake-Effect View Post
There's NO line, if the system has no ability to judge the veracity and/or malice of content. And that's what's missing when there's no guidelines for what such a system is permitted to do.
The system makes no distinction between truth and falsehood because it's not in its economic self-interest to do so. It feeds people whatever keeps them engaged with the site, and mostly that means reinforcing each user's perspective. And since FB knows more about you than you know about yourself, it is very good at feeding you stuff that will keep you clicking.
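The self-reinforcing dynamic described above can be sketched in a few lines. This is a deliberately crude toy model (no real platform's ranking is this simple, and all names here are made up): items are scored purely by how much they overlap with topics the user already clicked, so the feed drifts toward an echo chamber regardless of accuracy.

```python
# Toy model of engagement-driven ranking: score each item by overlap
# with the user's click history, so agreeable content floats to the top.
from collections import Counter

def rank_feed(items, click_history):
    """Order feed items by how many past clicks share their topics."""
    interest = Counter(t for item in click_history for t in item["topics"])
    def score(item):
        return sum(interest[t] for t in item["topics"])
    return sorted(items, key=score, reverse=True)

history = [{"topics": {"sailing", "anchoring"}}, {"topics": {"sailing"}}]
feed = [
    {"title": "Contrarian take on sailing myths", "topics": {"sailing"}},
    {"title": "Local election results",           "topics": {"politics"}},
]

# The sailing item wins on score 2 vs 0; truth never enters the formula.
print([i["title"] for i in rank_feed(feed, history)])
```

Note that nothing in the scoring function asks whether an item is true, only whether it resembles what was clicked before.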

Quote:
Originally Posted by Lake-Effect View Post
Maybe a tag system is required, where content gets flagged (automatically or otherwise) by category and other criteria (eg politics, environment, history, and so on) and users can set filters to accept or reject tags as well as people, groups, etc. It would also be possible to rate content by its "truth", but who makes the call? Manual filters still might result in echo chambers, but at least the person chose their chamber.
Isn't FB already doing this now? It's a start, but people who are 'fact-checked' quickly decide the fact-checkers are biased. We see that accusation here on CF all the time; even well-established facts will be disputed endlessly.
__________________
Why go fast, when you can go slow.
BLOG: www.helplink.com/CLAFC
Mike OReilly is offline  
Old 15-02-2021, 15:30   #134
Registered User
 

Join Date: Apr 2004
Location: Southern Maine
Boat: Prairie 36 Coastal Cruiser
Posts: 3,258
Re: Addressing Misinformation and Harmful Content Online

Quote:
Originally Posted by Mike OReilly View Post
Problem is, FB is not the town well; it's not part of "the commons." It's more like a private well that the owner allows people to use as long as they follow his/her rules.
OK, good point. I guess I'm looking generically at "the internet" or "social media" as common infrastructure, not FB in particular. To use the broadcast TV analogy, where a license is required and license holders must demonstrate that they act in the public interest, FB would be like one TV network.
CaptTom is offline  
Old 15-02-2021, 19:19   #135
Registered User
 

Join Date: Sep 2011
Location: Good question
Boat: Rafiki 37
Posts: 14,455
Re: Addressing Misinformation and Harmful Content Online

Quote:
Originally Posted by CaptTom View Post
OK, good point. I guess I'm looking generically at "the internet" or "social media" as common infrastructure, not FB in particular. To use the broadcast TV analogy, where a license is required and license holders must demonstrate that they act in the public interest, FB would be like one TV network.
Yes, that would be the broadcast model. Unfortunately, the Internet is largely privately owned. It didn't start off that way, but the collective "we", in our wisdom (and I use that term loosely), decided it was better to sell off, or transfer control to, private interests under the mantra that the private sector can do it better.

Well, some things are certainly done better by the private sector, but one thing it doesn't do well is serve the public good. And here is a case in point.
__________________
Why go fast, when you can go slow.
BLOG: www.helplink.com/CLAFC
Mike OReilly is offline  
