Last week an internal Google research presentation called “The Good Censor” was leaked. It’s a good summary of the burden we put on tech companies to “police” the internet. Often the examples we see in the media are oversimplified. The devil is in the details – especially for multinational companies. Slide 65 is a perfect example of this: the norms differ between regions.
The problem for me is that I wonder why we should let corporations decide what is right or wrong – especially today, when we as a society are not always coherent on where we stand on issues. How do we decide what is fake news? What is hate speech? When should you pull something? What does it mean to influence elections?
All of these companies have terms of service that describe the clear-cut cases of what is and isn’t acceptable on their service. We also know that these companies have more detailed guides, but they’re secret. I suspect they keep them secret because it makes the moderation rules harder to circumvent, but it also makes them impossible to audit.
It would be better if companies opened up their detailed content moderation guides and shared them. They also need clear appeal procedures that allow content moderation to change over time. What was unacceptable in 1974 is not necessarily unacceptable in 2018, or vice versa.
The power and reach of these companies are enormous, and that’s fine, but it comes with great responsibility. Hiding behind algorithms or “we’re not a media company” is simply not enough anymore. It’s about trust, and transparency inherently builds trust.