A recent court decision could have significant ramifications for how news content is shared online, including pushing publishers toward less sensationalism in Facebook posts that are expressly designed to elicit the greatest amount of reaction.

Last week, the Australian High Court upheld a ruling that could, in some circumstances, see Australian media outlets held liable for user comments left on their Facebook Pages.

The decision has generated a fresh wave of concern about restrictions on journalistic freedom of expression and the capacity to report. However, the case is more intricate than the headline suggests. The High Court ruling does expand the scope under which media outlets can be held legally responsible for comments made on their social media pages, but its full nuance is more specifically focused on ensuring that inflammatory posts are not shared with the clear intent of eliciting comments and shares.

The case stems from an investigation in 2016, which found that inmates of a youth detention center in Darwin had been severely mistreated, even tortured, during their confinement. Within the subsequent media coverage of the incident, some outlets had sought to provide more context on the victims of this torture, with a handful of publications singling out the criminal records of said victims as an alternate narrative in the case.

Dylan Voller, a former inmate, says that the subsequent media portrayals of him were both inaccurate and defamatory, which led him to pursue legal restitution for the published allegations. Voller himself has been the subject of numerous stories, including a piercing profile in The New York Times, and The Australian’s “Dylan Voller’s record of jailhouse events exceeds 200”, which emphasized the many wrongs Voller was alleged to have committed that resulted in his imprisonment.

The matter involving Facebook comments arose specifically because these stories were republished on the Facebook Pages of the media outlets in question. According to Voller’s defense team, the framing of these articles, specifically within the Facebook posts, prompted negative comments from users of the platform. That framing, they argued, was intended to provoke more comments and engagement on the posts in order to garner more reach within Facebook’s algorithm.

The crux of the case, therefore, comes down to a single point: it is not so much about whether publications can now be sued for people’s comments on their Facebook posts, as about whether a definitive link can be shown between the Facebook post itself and the defamatory comments it lured, and the resulting damage to community perception, such that the publisher should be held liable.

Indeed, in the original case notes, Voller’s legal team argued that the publications in question:

“Should have known that there was a ‘significant risk of defamatory observations’ after posting, partly due to the nature of the articles.”

So the complexities of this case go far beyond the topline finding that publishers can now be sued for comments posted on their Facebook Pages. The real impetus is that those who publish content to Facebook on behalf of a media outlet must be more cautious in the actual wording of their posts, because if subsequent defamatory remarks can be traced back to the original post, and the publisher is shown to have incited that reaction, legal action may be pursued against the publisher over the post itself.

In other words, publishers remain free to share their stories, so long as they stay true to the facts and refrain from posting deliberately inflammatory social media updates in the wake of a major news event.

As an example, here is another story published by The Australian on the Dylan Voller case, which, as you might guess, has likewise elicited a large number of critical and derogatory comments.

Facebook post from The Australian

That post is not defamatory; it simply reports the facts. The quote comes from an MP, and there is no clear evidence that the publisher was attempting to lure Facebook users into commenting on the article shared to the platform.

Which brings us to the actual issue at hand: the decision places greater emphasis on how publishers frame their Facebook posts as a method of attracting reader comments. A publisher can be held liable for inciting negative comments, but there must be definitive evidence of both damage to the individual and intent within the social media post itself, as distinct from the linked article, in order for the publisher to be held liable.

Which, in certain respects, may be a positive step. Internet algorithms have dramatically changed media incentives over the last decade, with a clear advantage for publishers who post anger-inducing, emotionally charged headlines to stimulate comments and shares, which in turn guarantees maximum reach.

Misinterpretations, half-truths, and outright lies are deployed to elicit a user response. If publishers can be held accountable for their actions in this regard, it would appear a more beneficial approach than the proposed reforms to Section 230 in the United States, which would severely curtail press freedoms there.

Once again, this decision is specific to Facebook posts, and to whether the language used in such posts is intended to provoke an emotional reaction in order to drive audience engagement. Proving a clear connection between a Facebook post and any personal damages will continue to be challenging, as it is in all defamation cases, at least for the foreseeable future. However, it is possible that this decision may encourage Facebook Page administrators at media outlets to be more accurate in their updates, rather than relying on comment-baiting to increase algorithmic reach.

So although the decision may expose media outlets to greater liability, it may actually be a step forward in establishing more accurate reporting and in holding publishers accountable for inciting online mob attacks through their framing of a story.

As has been shown, the most effective method to generate comments and shares on Facebook is to elicit an emotional response, which then encourages others to remark, share, and so on.

If a Facebook post can be shown to plainly encourage such behavior, and that behavior results in reputational harm, holding the publisher liable seems a good move – though it obviously comes with greater risk for social media administrators.