Tackling abuse on social media is a monumental task – but billion-dollar companies should be up to it

[Image: the Facebook logo displayed on a phone and computer. Facebook, Twitter and other social media companies have faced criticism over their content moderation. Credit: Justin Tallis/AFP/Getty Images]

Much of the activity which makes up our lives online is conducted in public. The giant social media platforms – Facebook, Twitter, YouTube – are all to some extent designed around similar principles; they thrive on open, instant, global participation.

The processes of signing up, posting and finding or creating communities to communicate with have been made as smooth as possible. This approach has promoted an explosion in communication, and created a colourful array of special interest channels, groups and hashtags, allowing people to discuss things they care about.

This open design and the vast scale on which content is created and shared are both key to social media platforms. They help to generate the frenetic activity of likes, retweets and comments upon which these platforms depend.

The more time people spend clicking, sharing and discussing on a platform the more advertising they will see, and the better this advertising can be targeted. This is not a new business model, but it has proved to be extremely effective; Facebook earned $8.8 billion last quarter, much of it through advertising.

[Image: a Twitter app tile on a mobile phone screen. Twitter is another company that faces criticism over how it filters content. Credit: Richard Drew/AP Photo]

The problem with putting such a premium on instant accessibility is that it becomes difficult to deal with content which is hateful, abusive or illegal. Lacking the traditional safeguards of editors and moderators able to screen content before it is publicly viewable, social media companies have found themselves hosting violent threats, hate speech and, in some cases, images of child sexual abuse.

The majority of platforms have strict guidelines on acceptable content. The question, however, is whether they are doing enough to remove content which contravenes these guidelines. It is an issue that has been raised with increasing frequency by journalists and MPs, and was a major point of contention when representatives of Facebook, Google and Twitter appeared in front of the Home Affairs Select Committee yesterday.

When it comes to removing content, platforms face two distinct challenges. The first is reactive: how quickly and effectively can they deal with content which has been reported as problematic? The second is proactive: what can platforms do to identify and remove dangerous content before it is reported – ideally, before it has even been seen?

In the first case, it is clear that the procedures platforms have in place to remove content which is illegal or contravenes their terms of service could be improved. These processes are often lengthy and opaque, and where a decision is made not to remove content it is not always clear why. This need for better communication and transparency is a problem not only for users who have flagged up content, but also for the authorities – last year, a report from the House of Lords highlighted poor communication channels between platforms and law enforcement as a key problem to address.

The question of when a comment crosses the line from criticism or humour to abuse is contentious and difficult to decide. Since the web’s inception, the grey area at the limits of acceptable discourse is one in which online communities – trolls, satirists and provocateurs – have thrived. The task of developing protocols and guidelines for when content should be removed cannot be left to social media companies alone; it is imperative that civil society, free speech organisations, groups representing vulnerable communities and law enforcement, not to mention the users of the platforms themselves, are involved in deciding when content should be flagged and taken down. Much of this work is already under way – social media companies have, it should be pointed out, been aware of this problem for years – but it is clear more could be done.

The proactive question is much more difficult to deal with. It is often pointed out that tech companies are good at using algorithms to target advertising based on our browsing patterns, and at removing pictures of female nipples from Instagram. So why can’t they use the same approach to get rid of hate speech and threatening language? The problem is that human language is inherently messy. In many cases, whether or not something is deemed inappropriate will depend on the context of the conversation, the person being addressed and the ever-shifting meaning behind the language used (who could have known a year ago that ‘snowflake’ would become such a potent term?).

While great strides are being made in machine learning and natural language processing, the problems computers are becoming good at solving – playing chess, for example, or finding the quickest bus route through central London – are by and large clearly defined problems with relatively uncontroversial solutions. In cases of offensive content, however, that consensus is often non-existent – the limits of acceptable speech are an intrinsically human question, and drawing those boundaries is a task on which even humans often find it impossible to agree.

Of course, the difficulty of this task does not mean it shouldn’t be attempted. Algorithmic detection of content for flagging is likely to have a key role to play here, even if only in the least controversial cases of illegality. If anyone has the technical know-how and resources to undertake this work, it is the social media giants, and growing public pressure should lend the work some urgency. Again, however, we should be wary of leaving the task purely in their hands. There is a crucial need to keep these algorithms accountable and free of bias, especially where they are used to censor public discourse or report illegality. Civil society and the public at large have a key role to play in their oversight.

The thread connecting all of this is a need for dialogue and transparency. Where users have flagged abusive content, there needs to be communication about what is happening, when action is likely to be taken and, where possible, the reasoning behind any decision. Where difficulties exist in detecting or removing content, whether technical or ethical, government, civil society and social media companies need to be able to talk frankly about what those difficulties are and how we might begin to resolve them. Finally, where automatic procedures are put in place, we need to understand their strengths and limitations, and make the decisions they take as transparent and accountable as possible.

Source: The Telegraph