Amazon Web Services to proactively remove more content that violates its rules
Amazon plans to take a more proactive approach to determining what types of content violate its cloud service policies, such as rules against promoting violence, and to enforcing their removal, two sources said, a move likely to reignite debate over how much power tech companies should have to restrict free speech.
Over the next few months, Amazon will hire a small group of people in its Amazon Web Services (AWS) division to develop expertise and work with outside researchers to monitor future threats, one of the sources familiar with the matter said.
According to experts, this could make Amazon, the world's largest cloud service provider with about 40 percent market share per research firm Gartner, one of the most powerful arbiters of what content is allowed on the internet.
Amazon made headlines in the Washington Post last week for shutting down a website hosted on AWS that featured Islamic State propaganda celebrating the suicide bombing that killed around 170 Afghans and 13 US soldiers in Kabul last Thursday. Amazon acted only after the Post contacted it, according to the newspaper.
The proactive approach to content comes after Amazon banned the social media app Parler from its cloud service shortly after the Jan. 6 attack on the US Capitol for allowing content promoting violence.
“AWS Trust & Safety works to protect AWS customers, partners and internet users from bad actors attempting to use our services for abusive or illegal purposes. When AWS Trust & Safety is made aware of abusive or illegal behavior on AWS services, they act quickly to investigate and engage with customers to take appropriate actions,” AWS said in a statement.
“AWS Trust & Safety does not pre-review content hosted by our customers. As AWS continues to grow, we expect this team to continue to grow,” it added.
Activists and human rights groups are increasingly holding accountable not only the websites and apps that carry harmful content, but also the underlying technological infrastructure that enables those sites to operate, even as conservative politicians denounce such moves as restrictions on free speech.
AWS already prohibits certain uses of its services, such as illegal or fraudulent activity, inciting or threatening violence, and promoting child sexual exploitation and abuse, under its acceptable use policy.
Amazon first asks customers to remove content that violates its policies, or to put a system in place to moderate that content. If Amazon cannot reach an acceptable agreement with the customer, it may take the website down.
Amazon aims to develop an approach to the content issues that it and other cloud providers face more frequently, such as determining when misinformation on a company's website reaches a scale that requires AWS action, the source said.
The new team at AWS does not plan to sift through the vast amounts of content that businesses host in the cloud, but will instead aim to stay ahead of future threats, such as emerging extremist groups whose content could make its way onto the AWS cloud, the source added.
Amazon is currently recruiting a global head of policy to join the AWS Trust & Safety team, which is responsible for "protecting AWS against a wide variety of abuses," according to a job posting on its website.
AWS's offerings include cloud storage and virtual servers, and it counts major companies like Netflix, Coca-Cola and Capital One among its customers, according to its website.
Better preparedness against certain types of content could help Amazon avoid legal and PR risks.
“If (Amazon) can proactively remove some of this material before it is discovered and becomes a big news story, it helps them avoid that reputational damage,” said Melissa Ryan, founder of CARD Strategies, a consulting firm that helps organizations understand the threats of extremism and online toxicity.
Cloud services such as AWS, along with other entities like domain registrars, are considered the "backbone of the internet," but they have historically been politically neutral services, according to a 2019 report by Joan Donovan, a Harvard researcher who studies online extremism and disinformation campaigns.
But cloud service providers have already removed content, such as in the aftermath of the 2017 alt-right rally in Charlottesville, Virginia, helping to slow the ability of alt-right groups to organize, Donovan wrote.
“Most of these companies, understandably, didn't want to get into content and didn't want to be the arbiters of thought,” Ryan said. “But when you talk about hate and extremism, you have to take a stand.”
© Thomson Reuters 2021