Social Media: Should Social Media Be Held Responsible For The Atrocities And Deaths It Facilitates?


Not a week goes by without another tragic story of how social media directly facilitated the injury, death or suffering of someone in the world. Earlier this month a child bride was sold in South Sudan, with Facebook used to drive up the bidding price, while two months ago a false rumor spreading on WhatsApp led an angry mob in Mexico to burn two innocent men to death.

As social media is increasingly used to facilitate horrific activities across the world, should the companies behind those platforms bear responsibility for their misuse? In particular, companies make explicit decisions about how much to invest in the tools and staffing needed to counter misinformation and illegal use of their platforms. Should they be held accountable in cases where they specifically declined to invest in protecting their users against a given threat, knowing full well that the lack of investment could lead to serious harm to those users?

Every day around the world, bad actors are misusing social platforms to commit horrific atrocities. Each time another atrocity makes headlines, the companies predictably issue statements saying they are sorry about the misuse of their platforms but that there was nothing more they could have done to prevent the situation.

The issue really comes down to cost. Facebook and its peers simply do not wish to pay for armies of human moderators, which amounts to a non-revenue-generating expense.

The company has repeatedly acknowledged the role of misinformation on its platforms, including in deaths and serious injuries across the world, and has said that it is actively investing in fighting misinformation. This raises the question, however, of whether the company feels it is in any way responsible for the deaths and injuries of people harmed by the misinformation flowing through its platform.

Asked whether Facebook felt any responsibility whatsoever for the Mexico murders caused by a false viral WhatsApp rumor or for the child bride whose sale was facilitated by its platform, a company spokesperson offered that combating misinformation as a whole was its responsibility and that human trafficking was against its terms of service. However, when asked specifically whether it bore any responsibility for these tragic situations, the company declined to comment.

It is worth pointing out that Facebook actually profits monetarily from atrocities committed on its platform. In the case of the child bride sale, the company earned advertising revenue from all of the users who viewed, posted or otherwise engaged with the sale content. This raises the question of whether, in such circumstances, the company conducts a subsequent review to identify all of the advertisements shown alongside the post and refunds the money it earned from them. The company declined to comment when asked, suggesting that it keeps what amounts to “blood money.”
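To make the idea concrete, here is a minimal sketch of what such a refund review might look like, assuming the simplest possible data model: a log of which advertisements were shown alongside which posts. Every name and figure here (AdImpression, compute_refunds, the revenue amounts) is invented for illustration; nothing below reflects Facebook’s actual systems.

```python
# Hypothetical sketch of a "refund review": total up the ad revenue earned
# alongside posts later flagged as illegal so it could be handed back.
# All structures, IDs and figures are invented for illustration.

from dataclasses import dataclass

@dataclass
class AdImpression:
    advertiser_id: str
    post_id: str        # the post this ad was displayed alongside
    revenue_usd: float  # what the platform earned for the impression

def compute_refunds(impressions, flagged_post_ids):
    """Group revenue earned next to flagged posts by advertiser."""
    refunds = {}
    for imp in impressions:
        if imp.post_id in flagged_post_ids:
            refunds[imp.advertiser_id] = (
                refunds.get(imp.advertiser_id, 0.0) + imp.revenue_usd
            )
    return refunds

# Two impressions ran beside a post later flagged as a trafficking sale.
log = [
    AdImpression("adv-1", "post-123", 0.042),
    AdImpression("adv-2", "post-123", 0.037),
    AdImpression("adv-1", "post-999", 0.051),
]
print(compute_refunds(log, {"post-123"}))  # {'adv-1': 0.042, 'adv-2': 0.037}
```

The accounting itself is trivial; the question is whether the company has any incentive to run it.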

In addition to this direct monetary benefit, the company also benefits indirectly by acquiring behavioral and interest data about the users who engaged with the child trafficking content, data it uses to enrich its behavioral profiles of those users and serve them more precisely targeted advertising in the future. In essence, by recording which users viewed the content, the company can strengthen the advertising selectors associated with those accounts, increasing its ability to sell targeted advertising against their future usage of its systems. This raises the question of whether the company should go back and delete all modifications made to user profiles as a result of viewing or engaging with illegal content, ensuring that Facebook does not profit even indirectly from its publication. Under such a model, Facebook would still retain security logs showing which users viewed what content, but would delete that activity from the user profiles themselves so it could not affect future advertising. The company declined to comment when asked whether it currently does so or has ever considered doing so.
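What would such profile scrubbing look like in practice? Below is a minimal sketch, again assuming the simplest possible data model: one ad-targeting profile per user plus an append-only security log. All names and structures are hypothetical and do not describe any real platform’s API.

```python
# Hypothetical sketch of the profile-scrubbing model described above.
# All structures and names are invented; no real platform API is shown.

ad_profile = {}    # user_id -> set of ad-targeting interest selectors
security_log = []  # append-only (user_id, post_id) view records, retained

def record_view(user_id, post_id, derived_selector):
    """Log the view for security purposes and enrich the ad profile."""
    security_log.append((user_id, post_id))  # kept even after scrubbing
    ad_profile.setdefault(user_id, set()).add(derived_selector)

def scrub_selectors(selectors_to_delete):
    """Remove targeting selectors derived from now-flagged content so the
    platform cannot monetize that engagement in future advertising."""
    for interests in ad_profile.values():
        interests -= selectors_to_delete

record_view("user-42", "post-123", "interest:derived-from-flagged-post")
record_view("user-42", "post-777", "interest:cooking")

scrub_selectors({"interest:derived-from-flagged-post"})
print(ad_profile["user-42"])  # {'interest:cooking'} -- selector deleted
print(security_log)           # both view records remain for investigators
```

The key design point is the separation: the security log survives for investigators, while the monetizable profile data derived from the illegal content does not.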

Of course, we don’t hold phone companies responsible for terrorists using their phones to coordinate violent attacks, so why should we hold social media platforms to account for the misuse of their systems?

The answer is that phone companies do not perform moderation of any kind on their systems, and their systems facilitate only person-to-person communication, not global broadcasting. In the US, the local phone company does not employ an army of human reviewers monitoring all of the phone calls transiting its networks and terminating calls that stray from the topics and perspectives it views as acceptable uses of its telephone connections. Nor can one’s phone by itself be used to instantly broadcast false information to more than a billion people globally.

Social media platforms, on the other hand, actively moderate, deciding what is allowed to stay and what gets deleted based on their own internal guidelines of what constitutes acceptable speech.

As a moderator that is already actively reviewing content on its platform and evaluating posts against a set of corporate guidelines, Facebook is necessarily stepping into the space of active editorial control.

Most importantly, however, the phone company’s one-to-one communications model means that misinformation travels slowly, one person at a time. With social media, a single post can reach millions with a mouse click, making it a far more dangerous tool for spreading misinformation.

In the end, should we hold social media platforms to account for misuse of their systems that leads to human rights abuses, serious injury and death? If a company knowingly declines to invest in what it believes is the necessary level of moderation in a given country due to concerns about cost, recognizing that that refusal will likely cost human lives, should it be held accountable? Should the company at the very least be required to hand back the direct monetary profit it earns from those atrocities and delete the behavioral data that will indirectly profit it in the future? The fact that Facebook declined to comment on any of this reflects that, for now, the companies aren’t being forced to grapple with these questions. Until governments step forward and force them to take some level of responsibility, the companies and their leaders will likely continue to stand their ground and “delay, deny and deflect.”

