Notes
1 While this project recognizes that major social media platforms are functionally SaaS providers, this section does not treat them as “cloud providers” in order to focus on the unique ways cloud computing technologies fit into the broader debate over content moderation.
2 United Nations, “Universal Declaration of Human Rights,” United Nations, December 10, 1948, https://www.un.org/en/about-us/universal-declaration-of-human-rights#:~:text=Article%2019,media%20and%20regardless%20of%20frontiers.
3 Ibid.
4 This includes de-platforming, as in the case of AWS and Parler (Tony Romm and Rachel Lerman, “Amazon suspends Parler, taking pro-Trump site offline indefinitely,” The Washington Post, January 11, 2021, https://www.washingtonpost.com/technology/2021/01/09/amazon-parler-suspension/), as well as takedowns, labeling, and other traditional forms of content moderation as practiced by social media companies.
5 Amazon Web Services, “Amazon Rekognition,” Amazon Web Services, n.d., https://aws.amazon.com/rekognition/ and Microsoft Azure, “Content Moderator,” Microsoft Azure, n.d., https://azure.microsoft.com/en-us/services/cognitive-services/content-moderator/.
6 For example, see: Campbell Kwan, “Twitter labels India’s new content blocking powers as threat to freedom of expression,” ZDNet, May 27, 2021, https://www.zdnet.com/article/twitter-labels-indias-new-content-blocking-powers-as-threat-to-freedom-of-expression/.
7 Amazon Web Services, “Amazon Rekognition”; Microsoft Azure, “Content Moderator.”
8 Qatar Financial Centre, “Qatar Remains Open for Business,” Bloomberg, n.d., https://sponsored.bloomberg.com/immersive/qatar-financial-centre/qatar-open-business.
9 Amazon Web Services and Intel, “Meeting your Data Residency Requirements,” Amazon Web Services and Intel, n.d., https://d1.awsstatic.com/product-marketing/Outposts/AWS%20Data%20Residency%20Infographic.pdf.
10 Catherine Howell and Darrell M. West, “The internet as a human right,” The Brookings Institution, November 7, 2016, https://www.brookings.edu/blog/techtank/2016/11/07/the-internet-as-a-human-right/.
11 For example, the Santa Clara Principles lay out baseline standards on transparency, notice, and appeal that companies engaged in content moderation may subscribe to (Content Moderation and Removal at Scale, “The Santa Clara Principles on Transparency and Accountability in Content Moderation,” Content Moderation and Removal at Scale, May 7, 2018, https://santaclaraprinciples.org/). Likewise, Article 23 of the EU’s Digital Services Act (European Commission, “The Digital Services Act: ensuring a safe and accountable online environment,” European Commission, December 15, 2020, https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/digital-services-act-ensuring-safe-and-accountable-online-environment_en) calls for transparency on the use of automated moderation tools, including the precise purpose of each tool, indicators of the accuracy of its filters, and safeguards against error.
12 Though not a cloud service provider, Twitter offers a useful framework for delivering insight into content moderation requests levied by governments: Twitter, “Removal Requests,” Twitter Transparency, n.d., https://transparency.twitter.com/en/reports/removal-requests.html#2020-jul-dec.
13 The auditability of AI remains contentious, both because of the black-box nature of these systems and because of providers’ security and commercial concerns over opening their source code to audit. A potential avenue to consider may be the adoption of explainable artificial intelligence (XAI) algorithms, which follow the three principles of transparency, interpretability, and explainability. In doing so, auditors and end users may be better able to examine these systems and determine how they make decisions and whether the results of those decisions are as expected. See: Amina Adadi and Mohammed Berrada, “Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI),” IEEE Access 6 (Fall 2018): 52138–60, https://ieeexplore.ieee.org/document/8466590.
14 While such audits are relatively uncommon given the sensitivity around providers’ proprietary software, they are growing in popularity. Alfred Ng, “Can Auditing Eliminate Bias from Algorithms?,” The Markup, February 23, 2021, https://themarkup.org/ask-the-markup/2021/02/23/can-auditing-eliminate-bias-from-algorithms and Rumman Chowdhury and Jutta Williams, “Introducing Twitter’s first algorithmic bias bounty challenge,” Twitter Engineering (blog), July 20, 2021, https://blog.twitter.com/engineering/en_us/topics/insights/2021/algorithmic-bias-bounty-challenge.
15 Nicol Turner Lee, Paul Resnick, and Genie Barton, “Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms,” The Brookings Institution, May 22, 2019, https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.
16 For example, see how developers can better understand and practice racial sensitivity: Jessie Daniels, Mutale Nkonde, and Darakhshan Mir, Advancing Racial Literacy in Tech (New York City: Data & Society, 2019), https://datasociety.net/wp-content/uploads/2019/05/Racial_Literacy_Tech_Final_0522.pdf.
17 See, for example, Facebook’s Oversight Board: Oversight Board, “Oversight Board Home Page,” Oversight Board, n.d., https://oversightboard.com/.