
What Kenyan regulators can learn from UK fake news inquiry : The Standard

by kenya-tribune


Kenya’s 2017 general election saw an unprecedented shift in the campaign strategies used by the leading political parties to woo voters.
With more than 90 per cent of the population owning mobile phones and a quarter of them smartphones, the campaigns were targeted primarily at social media users.
The Jubilee Party employed Cambridge Analytica, the now-defunct digital communications firm indicted for misusing Facebook users’ data, while Nasa employed the services of the Vanguard Group to help with political messaging and social media targeting.


The result was an online war that saw each party use all means, including false news and propaganda, to ensure maximum engagement with their followers on social media.
A study by GeoPoll and Portland Communications found that nine out of 10 Kenyans had identified fake news concerning the polls in their social media feeds during the campaign period.
As in the 2016 US presidential election and the Brexit referendum, Kenyans got first-hand experience of how social media can be weaponised to spread misinformation and reinforce ideological biases, making democratic debate between opposing sides difficult.
However, regulators have been slow in responding to the threat posed by disinformation online. A move by the Communications Authority of Kenya (CA) to develop new regulations for over-the-top service providers such as Facebook and YouTube has lost momentum while the Data Protection Bill 2018, now five years in the making, has stalled.
This is despite the country having one of the most active social media communities on the continent.


With over five million monthly active users on Facebook, over four million on Twitter and millions more on WhatsApp, many Kenyans cite social media as their first source of news.
The British parliament last month published a report after an inquiry into the role of Facebook and other social media in spreading disinformation and weakening democratic discourse online.
The 111-page report contains valuable lessons for Kenyan policymakers that could help avoid a repeat of the disinformation and deeply polarised political contest that played out in the last polls.
British lawmakers say information technology and the ubiquity of social media have given propaganda and politically aligned bias a new form, magnifying its spread and reach.
“In this environment, people are able to accept and give credence to information that reinforces their views, no matter how distorted or inaccurate, while dismissing content with which they do not agree as ‘fake news’,” explains the report. One of its recommendations is to drop the term ‘fake news’, which it says has lost meaning, and instead use ‘misinformation’ and ‘disinformation’ in developing regulation.
“With those terms come clear guidelines for companies, organisations, and the Government to follow linked with a shared consistency of meaning across the platforms, which can be used as the basis of regulation and enforcement,” explains the parliamentary report in part. Regulators are also advised to establish a new category of a social media firm that is neither a platform nor a publisher.
This will confer more liability on companies to ensure they act faster in tackling disinformation. “Social media companies cannot hide behind the claim of being merely a ‘platform’ and maintain that they have no responsibility themselves in regulating the content of their sites,” explained the report in part.
“This approach would see the tech companies assume legal liability for content identified as harmful after it has been posted by users.”
The report argues that social media companies downplay the harms caused by their products including loss of attention, mental health issues, confusions over personal relationships, risks to democracy, and issues affecting children.
“This proliferation of online harms is made more dangerous by focussing specific messages on individuals as a result of ‘micro-targeted messaging’—often playing on and distorting people’s negative views of themselves and of others,” explains the report in part.
This has prompted calls for transparency in the algorithms that select and rank content based on users’ past online activity, social connections and location.
“Just as information about the tech companies themselves needs to be more transparent, so does information about their algorithms,” explains the report in part. “These can carry inherent biases, as a result of the way that they are developed by engineers; these biases are then replicated, spread, and reinforced.”
False perception
For example, social media users tend to share more good news about themselves than bad, creating a false perception that everybody else is doing better in life because the bad news goes unseen. This has been blamed for heightened anxiety and depression among young people who heavily use apps such as Instagram and Facebook.
New technology now makes it possible to produce ‘deepfakes’ – audio and video that look and sound like a real person saying something they never said – which will make it harder for the public to discern misinformation.
The report argues that self-regulation of social media firms is impossible. Countries like Germany and France have shown that regulation can force firms such as Google and Facebook to act faster to stop the spread of disinformation.
After tech firms in Germany failed to meet a requirement to remove hate speech within 24 hours, the country passed a law that fines companies Sh2.2 billion for failing to remove content flagged as disinformation. “As a result of this law, one in six of Facebook’s moderators now works in Germany, which is practical evidence that legislation can work,” explains the report.




