
Navigating Gray Areas: Social Media, the Casualties of War, and the Conflict between Free Speech and Ethics

Augustine Acuna III (Staff Writer)

The advent of social media presented the world with new user-driven platforms that made it easier than ever to share ideas and information. Combined with smartphones and their cameras, these platforms placed a revolutionary convenience in the hands of everyday citizens: a powerful tool for sharing day-to-day experiences. As a result, platforms like Facebook, Instagram, X, Reddit, and YouTube face growing difficulties in moderating the content uploaded to their servers. Moderation involves the ‘governance mechanisms that structure participation in a community to facilitate cooperation and prevent abuse.’[1] These companies have the option to stop users from sharing sensitive or abusive material, such as content featuring casualties of war, by suspending or terminating the uploader’s account or by simply deleting the morbid content from their site.


This has become an increasingly complex and controversial task for social media companies. Morally, media featuring war dead can be seen as a violation of the victims’ families’ privacy, but it can also serve as a powerful tool to spread awareness, fuel support, and expose crimes. Given these competing considerations, this essay explores the issues and possible solutions surrounding the uploading of images of war dead, and whether social media companies should allow such uploads in the first place. After all, platforms like Facebook (this essay’s main example) might reasonably prevent their users from uploading images of war dead if doing so poses too many political and legal problems. The answer therefore lies not solely within a moral framework, but in examining the issues through a company lens and determining whether a balance can be achieved that satisfies the intricacies of international law and the differing cultures and politics of the states in which these companies operate.


Contemporary social media companies are multinational entities that operate across the world’s borders. International humanitarian law, such as the Geneva Conventions and their Additional Protocols, serves as a good basis for analyzing the global rules that uphold ethical considerations during wartime. The provision that comes closest to addressing the handling of war dead is Article 34 of Additional Protocol I: ‘The remains of persons who have died for reasons related to occupation or in detention resulting from occupation or hostilities… shall be respected.’ [2] Although parallels between uploading images of war dead and the rules of war can be drawn from various clauses, it is clear that the Geneva Conventions need an update addressing the new avenues of violation that social media makes possible. Do images of dead soldiers shared without the consent of their families constitute a breach of ‘respect’? Who determines the line between terrorist content and educational material? Guidelines on photos and videos featuring war dead could provide social media companies with a clear outline to help them navigate moral and legal issues. And while social media may have the capacity to violate these laws, it also has the capacity to bring justice.


Violations of international humanitarian law can be hard to pin down and subsequently punish because of ambiguous definitions and the distorting fog of war. Here lies a justification for allowing uploads that feature war dead: social media has given humanitarian law organizations the ability to construct a digital crime scene, using videos and images uploaded by users to piece together evidence of violations. ‘In August 2017, the International Criminal Court (ICC) issued its first ever arrest warrant based largely on evidence gathered on social media. The warrant cited seven videos posted on social media depicting alleged Libyan commander Mahmoud Mustafa Busayf Al-Werfalli ordering or committing the executions of 33 individuals.’ [3] Humanitarian law can be, and has been, upheld through evidence collected via social media. This places a responsibility in the hands of social media platforms to serve as a repository for incriminating evidence.


Threatening this newfound responsibility are the automated systems designed to prevent abuse of these platforms. The world’s top social media companies rely on ‘perceptual image hashing’, a process that condenses photos and videos into long strings of characters so that their identifying features can be recorded and cataloged. With the help of machine learning, platforms like Instagram (owned by Meta) can apply filters that block terrorist-related content from being uploaded, much of which contains images of war dead. ‘Each firm applies its own policies and definitions of terrorist content when deciding whether to remove content when a match to a shared hash is found.’ [4] This has the potential to erase evidence from the internet before it can ever be shared, and the differing content policies adopted by various companies serve as yet another barrier for humanitarian law organizations to overcome. By pursuing automated moderation, the responsibility of preserving critical evidence is handed to algorithms that do not yet understand the moral weight of the content they judge. Facebook has admitted to removing content before it can undergo human verification when its tools are sufficiently confident that the content supports terrorism; in 2018 alone, Facebook took down over 14 million posts flagged as ‘terrorist content.’ [5] While much of this material may have had the potential to radicalize users, it is likely that some of these uploads contained evidence the ICC could have used to prosecute individuals like the Libyan commander mentioned earlier. Facebook may be only the first in a long line of companies to make this transition in order to avoid confronting the issue of uploading war dead.
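
To illustrate the mechanics behind this kind of filtering, the sketch below shows one simple flavor of perceptual hashing (an 8x8 ‘average hash’) and threshold-based hash matching. It is a minimal illustration only: the hash function, the Hamming-distance threshold, and the blocked_hashes catalog are assumptions made for the example, not Meta’s actual algorithms, which rely on proprietary hashing and shared hash databases.

```python
# Minimal sketch of perceptual (average) hashing and hash matching.
# Assumptions for illustration: an 8x8 average hash, a Hamming-distance
# threshold of 5 bits, and a local list of blocked hashes. Real platforms
# use proprietary, more robust hashing and shared hash databases.
from PIL import Image  # pip install Pillow


def average_hash(path: str, size: int = 8) -> int:
    """Condense an image into a 64-bit fingerprint.

    The image is shrunk to size x size grayscale pixels, and each bit
    records whether a pixel is brighter than the mean, so resized or
    lightly re-encoded copies produce similar hashes.
    """
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for value in pixels:
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two hashes (lower means more similar)."""
    return bin(a ^ b).count("1")


def matches_blocked_content(upload_hash: int, blocked_hashes: list[int],
                            threshold: int = 5) -> bool:
    """Flag an upload whose hash is within `threshold` bits of a cataloged hash."""
    return any(hamming_distance(upload_hash, known) <= threshold
               for known in blocked_hashes)


# Hypothetical usage: screen a new upload against a catalog of known hashes.
# blocked_hashes = [average_hash("previously_removed_content.jpg")]
# print(matches_blocked_content(average_hash("new_upload.jpg"), blocked_hashes))
```

Even in this toy form, the decision rests entirely on pixel statistics measured against a pre-built catalog, which is why footage of evidentiary value can be swept up alongside the abusive material the filter was built to catch.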


There will be ‘increasing expectations by government regulators that companies remove illegal content, including hate speech and violent extremist content, within predetermined time periods.’ [6] The owners of social media sites may soon have to choose between becoming beacons of free speech, practicing more relaxed forms of moderation, or bending to the influence of government regulators to avoid the risk of lawsuits and the loss of their user base.


Turning to the concerns posed by individual states, each country in which these companies operate presents new ethical and legal issues. The United States, for example, which prizes its freedom of speech, enacted Section 230 of the Communications Decency Act in 1996. This legislation ‘provides immunity to online platforms from civil liability based on third-party content and for the removal of content in certain circumstances.’ [7] It allows social media companies operating in the United States to concern themselves less with moderation, leaving the consequences of content violations to fall on the users themselves. While this system promotes greater user freedom, helping spread awareness of global conflicts and crimes through images featuring war dead, the secrecy surrounding moderation has enabled companies like Facebook to take down important posts at their own discretion. Recently, bipartisan talks of reforming Section 230 produced the Platform Accountability and Consumer Transparency (PACT) Act (2021). ‘Under the PACT Act, if a site chooses to remove your post, it has to tell you why it decided to remove your post and explain how your post violated the site’s terms of use. The PACT Act would also require sites to have an appeals process.’ [8] Should legislation like this pass, automated moderation decisions could be questioned and perhaps even reversed, creating a system in which posts flagged as ‘terrorist content’ could be restored and any evidence they contain preserved.


While the introduction of this legislation is a good start, many issues remain unaddressed. Facebook’s content policy on violent and graphic content removes posts it deems ‘particularly violent or graphic’ while allowing posts whose context raises discussion of ‘important issues such as human rights abuses, armed conflicts, or acts of terrorism… to help condemn and raise awareness about these situations.’ [9] The morals of this policy may coincide with the opinions shared throughout this essay, but Facebook’s stance on ‘important issues’ may force it to take political positions it strives to avoid. Deciding which hashed images of war dead (identified through perceptual image hashing) count as relevant wields significant influence over global politics and is itself an inherently political act. By restricting users under 18 from viewing this content (a separate issue whose enforcement requires stronger verification) and using the immunity granted by Section 230 to its full extent, images of war dead posted by U.S. users could spread awareness within a completely legal framework.


On the other side of the world is Myanmar, a country whose ethnic divisions have become a source of ceaseless conflict. While the country’s adoption of Facebook as its main mode of communication has certainly proved financially fruitful for the company, the violence stemming from these ethnic conflicts has placed Facebook under intense scrutiny and exposed the pitfalls of its content moderation. ‘Myanmar’s state media created an environment where extreme speech targeting the Rohingya on Facebook was tolerated.’ [10] This created a social infrastructure full of toxicity and hate that only amplified tensions. ‘In 2014, the social media behemoth had just one content reviewer who spoke Burmese: a local contractor in Dublin.’ [11] The platform’s interface was available only in Burmese, a bias that excluded ethnic minorities, and its skeleton moderation team could never hope to address all policy violations. In 2019, whether out of ignorance or good intentions, Facebook moved to ban select ethnic armed organizations (smaller ethnic groups that, along with the Rohingya, were victims of the Myanmar military), ‘forcibly injecting itself into the country’s discourse in the hopes of stymieing a number of emerging ethnic clashes.’ [12] In this misguided act of consideration for Myanmar, Facebook effectively silenced ethnic organizations and took the side of the Tatmadaw. The purpose of this foray into Facebook’s mistakes in Myanmar is to highlight how social media companies have the power to legitimize, or delegitimize, certain organizations. Although the examples from Myanmar do not directly address whether social media companies should prevent uploads featuring war dead, they display the cultural and political intricacies found within every country. Social media companies should have a thorough understanding of the regions with which they share their platforms, especially developing nations. Ignoring how the user-interface language isolated minority ethnic groups and deploying a one-man moderation team was a severe insult to the people of Myanmar, a developing democracy whose institutions were not mature enough to handle the polarization brought on by a platform like Facebook.


Although this essay focuses mainly on the conduct of social media platforms like Facebook and the ways in which they interact with international humanitarian law and individual states, it aims to highlight the environment these companies find themselves in when they consider preventing users from uploading images of war dead. Facebook is a private company, not a government entity; its legal status leaves it well within its rights to forgo any policy updates that would allow users to upload these images. However, whether due to the appeal of free speech or recognition of the benefits that images featuring war dead can provide, several social media platforms have acknowledged and attempted to address user rights regarding this specific content. It is important to note that attempts at moderating uploads featuring war dead can be just as perilous. Should a company’s values suggest not preventing such uploads, it should take care to balance the ethical considerations at stake while pursuing accurate moderation, cultural sensitivity, and complete transparency that promotes collaboration with the international system and the states that compose it.


 

[1]  James Grimmelmann, “The Virtues of Moderation,” Yale Journal of Law and Technology, 2015, 42, https://heinonline.org/HOL/Page?collection=journals&handle=hein.journals/yjolt17&id=42&men_tab=srchresults.

[2] Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977, Article 34 - Remains of Deceased, ICRC IHL Database, https://ihl-databases.icrc.org/en/ihl-treaties/api-1977/article-34 (last accessed 11.02.2024).

[3] Anna Veronica Banchik, “Disappearing Acts: Content Moderation and Emergent Practices to Preserve At-Risk Human Rights–Related Content,” New Media & Society 23, no. 6 (March 30, 2020): 146144482091272, https://doi.org/10.1177/1461444820912724.

[4] Robert Gorwa, Reuben Binns, and Christian Katzenbach, “Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance,” Big Data & Society 7, no. 1 (January 2020): 205395171989794, https://doi.org/10.1177/2053951719897945.

[5] “Hard Questions: What Are We Doing to Stay Ahead of Terrorists?,” Meta, November 8, 2018, https://about.fb.com/news/2018/11/staying-ahead-of-terrorists/.

[6] Sarah Myers West, “Censored, Suspended, Shadowbanned: User Interpretations of Content Moderation on Social Media Platforms,” New Media & Society 20, no. 11 (May 8, 2018): 4366–83, https://doi.org/10.1177/1461444818773059.

[7] U.S. Department of Justice, “Department of Justice’s Review of Section 230 of the Communications Decency Act of 1996,” justice.gov, June 3, 2020, https://www.justice.gov/archives/ag/department-justice-s-review-section-230-communications-decency-act-1996.

[8] Kathryn Montalbano, “Reimagining Section 230 and Content Moderation: Regulating Incivility on Anonymous Digital Platforms,” Communication Law and Policy, February 2, 2023, 1–33, https://doi.org/10.1080/10811680.2022.2136442.

[9] “Violent and Graphic Content | Transparency Center,” transparency.fb.com, n.d., https://transparency.fb.com/en-gb/policies/community-standards/violent-graphic-content/.

[10] Jenifer Whitten-Woodring et al., “Poison If You Don’t Know How to Use It: Facebook, Democracy, and Human Rights in Myanmar,” The International Journal of Press/Politics 25, no. 3 (May 25, 2020): 407–25, https://doi.org/10.1177/1940161220919666.

[11] Steve Stecklow, “Why Facebook Is Losing the War on Hate Speech in Myanmar,” Reuters, August 15, 2018, https://www.reuters.com/investigates/special-report/myanmar-facebook-hate/.

[12] Jeffrey Sablosky, “‘Dangerous Organizations: Facebook’s Content Moderation Decisions and Ethnic Visibility in Myanmar,’” Media, Culture & Society, January 20, 2021, 016344372098775, https://doi.org/10.1177/0163443720987751.




