Initial draft of the thought experiment (apmarshall, committed Dec 13, 2019).

# human-rights-platform
A Thought Experiment: Using the UN Universal Declaration of Human Rights to Define Content Moderation and Privacy Policy Goals for Tech Companies
# The Human Rights Governed Approach to Content Moderation and User Privacy

This is a thought experiment inspired by listening to [David Kaye on the Lawfare Podcast](https://www.lawfareblog.com/lawfare-podcast-david-kaye-policing-speech-online). The TLDR version: David Kaye is a UN expert on free speech and the freedom of expression. He has proposed that tech companies, especially social media companies, look to international human rights law (and in particular the Universal Declaration of Human Rights) to guide their content moderation efforts. This approach leans towards allowing more freedom of expression, not less, but it does provide some guidelines for what content can and should be policed (and what the goals of such policing should be). Interestingly, Kaye also seems to suggest that an approach grounded in international law gives companies a leg to stand on in parsing which of many national legal regimes they will adhere to and which they might fight against.

So out of curiosity, I went and re-read the [Universal Declaration of Human Rights](https://www.un.org/en/universal-declaration-human-rights/index.html) and found it really interesting to think about how a technology platform might craft its own content and privacy policies to reflect its principles. Not everything in the Declaration applies: much of it is quite clearly about people's legal rights in police/court proceedings, for example. But what follows is an attempt to create a "manifesto" of sorts for tech companies based on the Declaration. I welcome thoughts, feedback, suggestions, etc. You know the drill: file an issue or make a PR and let's chat.

## The Human Rights Framework for Tech Platforms:

1. In so far as technology platforms are generally oriented around information and communication, their foundational value should be the freedoms of speech and belief:
- Platforms should respect the right to freedom of thought, conscience, and religion.
- Platforms should respect the right to freedom of opinion and expression, including the freedom to hold opinions without interference and to seek, receive, and impart information and ideas through any media and regardless of frontiers.
- Platforms should respect the freedom of peaceful assembly and association.
- Platforms should respect the right to freedom of movement and residence within the borders of each state and the right to leave any country and to return to one’s own country.
- Platforms should respect the right to own property.
- Platforms should respect the right to work, to free choice of employment, to just and favorable conditions of work, to just and favorable remuneration (including the right to equal pay for equal work), and to form and join trade unions.
- Platforms should respect the right to participate in the cultural life of the community, in particular to enjoy the arts and to share in scientific advancement and its benefits.
- Platforms should respect the right of individuals to participate in the government of their country.
- Platforms should respect the right to equal access to public services.
2. Notwithstanding this, platforms have a duty to the community. In particular, they have a duty to ensure that:
- They are not used to harm any individual’s rights to life, liberty, and security of person.
- They are not used to promote slavery or the slave trade in any form.
- They are not used to allow interference with any individual’s privacy, family, home, or correspondence.
- They are not used to attack an individual’s honor or reputation.
- They are not used to compel any association.
- They are not used to infringe upon the genuineness or integrity of any election.
- They are not used to exploit children or deprive them of anything necessary for their health and development.
- They are not used to infringe upon a parent’s right to choose the kind of education that should be given to their children.
- They are not used to infringe on the moral or material interests of any individual resulting from any scientific, literary, or artistic production of which they are the author.
3. Platforms must determine the particular policies that will guide their protection and enforcement of these rights, subject to the limitations determined by law solely for the purpose of securing due recognition and respect for the rights and freedoms of others and meeting the just requirements of morality, public order, and the general welfare in a democratic society. Platforms have no obligation to respect any law which is aimed at the destruction of any of the rights and freedoms found in the Universal Declaration of Human Rights.

## Some Implications (aka, A Sample Policy Skeleton):

- Platforms should generally not consent to or participate in any action which would restrict their users’ rights with regards to the freedoms outlined in section one of the framework above, except where such rights might conflict with the rights of others as outlined below.
- Platforms which directly provide or indirectly facilitate any service related to transportation or residence should (a) never engage in any sort of discriminatory practices with regards to the provision of their services (e.g., housing discrimination) and (b) not tolerate discriminatory practices on the part of providers/employers who utilize their platform.
- Platforms which employ individuals through the use of their own property (e.g., drivers, home rentals, or workers using their own computers/devices) should not place any requirements on the modification, condition, or use of that property which are not strictly necessary for either (a) the provision of the service or (b) the health and safety of customers.
- Platforms which directly sell or indirectly facilitate any service-for-hire should (a) never engage in any sort of wage discrimination and (b) not tolerate wage discrimination on the part of employers who might use their platforms to hire service providers.
- Platforms which facilitate service-for-hire work should (a) treat their core service providers as employees and (b) recognize the rights of those employees to unionize and collectively bargain.
- Platforms should ban any content which showcases or encourages real violence, including content containing or supporting:
- Threats and violent intimidation
- Terrorism or organizations engaged in terrorism
- Genocide/ethnic cleansing
- Racial, religious, or sexual violence
- Platforms which provide services-for-hire should ensure that the providers they employ meet the same minimum safety and training requirements as those in equivalent industries. Providers who engage in violent or abusive behavior should be disciplined appropriately. Any criminal activity by a provider should be reported to law enforcement, and the platform should cooperate fully with any resulting investigation.
- Platforms should ban any content which pertains to human trafficking. For most platforms, this should also include a ban on pornography if the platform is unable to verify that such content was produced with full consent of the participants and without any sort of coercion or deception.
- Platforms which provide services-for-hire should ensure that their providers are fully cognizant of their rights as employees and have in no way been coerced into providing their services.
- Platforms should respect and protect the personal data of their users. Though data collected on users may have economic value to platforms, particularly those which rely on advertising, platforms should take care in choosing which information they monetize. In particular, platforms should never monetize, and should take special care to protect, information pertaining to:
- health records or data pertaining to health matters,
- financial, tax, or credit history records,
- information pertaining to an individual’s legal/criminal history,
- information about the details of a person’s family (such as the names and ages of their children), home (such as their address or phone number), or sexual activities,
- information posted or sent with a reasonable expectation of privacy, such as direct correspondence (email, texts, or direct/internal messages, for example),
- information that, in a user’s home country or present location, might be the basis for discrimination or persecution (for example, LGBTQ status for an individual located in a country with anti-LGBTQ laws).
- Best practices should be followed to protect all such private data, including the use of end-to-end (E2E) encryption and access limitations for the platform’s staff.
- Platforms may be required to produce some information in relation to legal processes. Beyond such obligations, highly personal information should never be shared with outside parties without explicit user consent for each instance of sharing.
- Users should have the right to review, modify, and remove collected data about themselves in a platform’s records.
- Platforms should take steps to encourage users to avoid posting personally derogatory content. Though platforms cannot be expected to police all user content, they can implement tools that will (a) warn users that their content may be out of keeping with the platform’s policies and (b) limit the reach and impact of duly flagged content. Additionally, platforms should take direct action to remove content which has been adjudicated slanderous/libelous in any legal proceeding.
- “Recommendation Engines”, whether for “friend”/“follower” connections or for additional content, should be transparent and modifiable by the end user. In other words, users should be able to see why content/associations are being recommended to them and choose to “opt-out” of future recommendations based on such an association. This should be prominently and clearly displayed so that users are encouraged to take advantage of this feature.
- Users should be allowed to limit the ability of others to “follow” or “friend” them on any platform, including by limiting the search criteria/methods by which they can be found, making their profiles “private” or “restricted” for public viewing, or explicitly blocking certain individuals from following them.
- Platforms must ensure that any content (paid or organic) on their platform pertaining to an election (whether directly about a candidate or about a political issue) is published in “good faith.” This means:
- Paid content must only be published by individuals verified as legal residents of the jurisdictions in which the election is taking place (i.e., no outside actors paying for content).
- All content must clearly state who produced it, who posted it, and who paid for its amplification (if applicable). Additionally, platforms should make available tools which allow others to view what paid content is being promoted and how it is being targeted by other actors in the political space.
- Demonstrably false information intended to suppress voter participation (for example, promoting the wrong day for an election or false information about voter ID requirements) should be removed and violating accounts banned from additional posting.
- Platforms should observe a “black-out” period on political advertising prior to elections.
- Known purveyors of fake/fabricated political news should be banned or significantly limited in their distribution.
- Platforms should strictly ban all child pornography.
- Platforms should place strict limitations on advertising targeted at minors, including limitations on the types of products/services and the nature of the content used to target minors.
- Advertisements which explicitly target minors should be manually reviewed by platforms.
- Platforms which provide services-for-hire should take extra precautions around service providers who may have contact with minors and hold them to strict standards of behavior.
- Platforms should require parental consent for minors to sign up for their service. This consent needs to be verifiable (i.e., not just a check-box).
- Parents should be given the option to monitor their minor children’s activity on any platform, including reviewing the content they are posting, the advertisements they are seeing, and the “meta-data” of any personal correspondence they are having.
- Platforms should have clear, verifiable take-down procedures for content reported to be in violation of an individual’s intellectual property rights. This includes violations of a person's right to their own likeness and image: platforms must respect the right of individuals to have their image and likeness permanently removed regardless of whether the requestor posted the content or it was posted by a third party.
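The recommendation-transparency point above ("users should be able to see why content/associations are being recommended to them and choose to opt out") can be sketched as a data structure in which every recommendation carries its own provenance. This is a minimal, hypothetical illustration; the names (`Recommendation`, `RecommendationFeed`, the reason strings) are invented for this sketch and do not reflect any real platform's API:

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """A recommended item that carries a human-readable reason (its provenance)."""
    item: str
    reason: str  # e.g. "followed_by:alice" or "topic:politics" (hypothetical labels)

@dataclass
class RecommendationFeed:
    # Reasons this user has chosen to opt out of.
    opted_out_reasons: set = field(default_factory=set)

    def opt_out(self, reason: str) -> None:
        """User-initiated: suppress all future recommendations made for this reason."""
        self.opted_out_reasons.add(reason)

    def filter(self, candidates: list) -> list:
        """Drop any recommendation whose stated reason the user has opted out of."""
        return [r for r in candidates if r.reason not in self.opted_out_reasons]

feed = RecommendationFeed()
candidates = [
    Recommendation("post-123", "followed_by:alice"),
    Recommendation("post-456", "topic:politics"),
]
feed.opt_out("topic:politics")
visible = feed.filter(candidates)
# Each surviving recommendation still carries its reason, ready to display
# prominently to the user, as the policy above suggests.
```

The design choice the sketch illustrates: because the reason is attached to each recommendation rather than hidden in the ranking pipeline, showing the user "why" and honoring an opt-out are the same piece of metadata, not two separate systems.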
