Here are the 22 recommendations by S'pore "Fake News" Committee to combat falsehoods online

'Legislation' was mentioned 55 times in the 317-page report.

Martino Tan | September 20, 2018, 06:54 PM

Laws, laws, laws.

The parliamentary Select Committee on Deliberate Online Falsehoods has called for new laws to swiftly stop the spread of online falsehoods, as part of a broad array of measures that include public education and getting technology companies like Facebook, Google and Twitter to do more.

In a 317-page report released on Thursday afternoon, the committee made 22 recommendations to deal with the issue, with the hope of achieving the following objectives:

a. Nurture an informed public;

b. Reinforce social cohesion and trust;

c. Promote fact-checking;

d. Disrupt online falsehoods; and

e. Deal with threats to national security and sovereignty.

Here are the 22 recommendations:

1. The government should consider putting in place a national framework to coordinate and guide public education initiatives.

2. The government should consider encouraging and providing support for ground-up initiatives for public education, to widen effective outreach beyond government-led initiatives.

3. News organisations, technology companies and institutes of higher learning should consider ways to ramp up the training of journalists, especially in techniques for ensuring accuracy in the digital news environment.

4. Journalists should proactively update their skills in digital fact-checking, and arm themselves with knowledge of how online falsehoods and disinformation campaigns work.

5. Both mainstream media and the alternative news platforms should hold themselves to the same professional standards of journalism, ensuring there is fairness, accuracy and integrity in reporting.

6. The government should consider how it can support the objectives in recommendations 3 to 5.

7. Organisations and initiatives for the promotion of social cohesion should consider providing clarifications on distortions and falsehoods affecting social cohesion.

8. The government should consider supporting or conducting research to understand society’s vulnerabilities.

9. Public institutions should provide information to the public in response to online falsehoods in a timely manner. They should also seek to pre-empt vulnerabilities and put out information, where appropriate, to inoculate the public.

10. Existing efforts should be reviewed to assess whether they are adequate to achieve the following:

a. Transparency. Swiftly communicating information in response to online falsehoods, the reasons for any government action against online falsehoods, and the reasons for decisions to not disclose information to the public.

b. Participation and communication. Engaging the public on government strategies against online disinformation operations.

c. Accountability. Assuring the public of the integrity of the information the government puts forward concerning public institutions.

11. There is a role for trusted fact-checking initiatives in combatting deliberate online falsehoods.

Different media organisations and partners from other industries should consider establishing a fact-checking coalition in Singapore to debunk falsehoods swiftly and credibly, or providing relevant support to such credible fact-checking initiatives. There are differing views on the role that the government can play in supporting fact-checking initiatives.

12. The government should have the powers to swiftly disrupt the spread and influence of online falsehoods with the following objectives:

a. Provide access to and increase the visibility of corrections, including through tagging functions and the use of other platforms with significant reach.

b. Limit or block exposure to the online falsehood.

c. Disrupt the digital amplification of online falsehoods, including through the use of false amplifiers (e.g. inauthentic accounts run by bots or trolls), and digital advertising tools.

d. Discredit the sources of online falsehoods.

Legislation will be needed to achieve the above objectives.

13. The government should identify the additional measures needed to safeguard election integrity, and implement the necessary measures, including legislation.

14. The government should consider implementing monitoring and early warning mechanisms, to facilitate assessments of when and how to intervene to stop the spread of online falsehoods.

15. The government should consider powers needed to establish a de-monetisation regime, including through legislation which will:

a. Disrupt the flows of digital advertising revenue to purveyors of online falsehoods.

b. Require the disgorgement of financial benefits by purveyors of online falsehoods.

16. Criminal sanctions should be imposed on perpetrators of deliberate online falsehoods.

These deterrent measures should be applied only in circumstances that meet certain criteria. There should be the requisite degree of criminal culpability (i.e. intent or knowledge), in accordance with established criminal justice principles, and also a threshold of serious harm such as election interference, public disorder, and the erosion of trust in public institutions.

17. To prevent and mitigate the abuse of their platforms to spread online falsehoods, technology companies should:

a. Take proactive action to prevent and minimise the amplification of online falsehoods on their platforms, including by:

i. Prioritising credible content on their platforms, and de-prioritising proven falsehoods to limit their circulation.

ii. Labelling or shutting down accounts and networks of accounts that are designed to amplify online falsehoods.

The specific measures undertaken may vary depending on how content is amplified on the platform. For example, on a closed messaging platform (such as WhatsApp, Telegram or WeChat), minimising the amplification of an online falsehood may involve prohibiting the forwarding of the online falsehood.

b. Ensure that their digital advertising tools and services do not aid the spread of online falsehoods. They should disallow:

i. The placement of advertisements on sites that propagate online falsehoods.

ii. The use of their advertising services by sites that propagate online falsehoods.

iii. The use of their advertising services, such as targeted advertising tools and the boosting of posts, to further amplify online falsehoods.

c. Minimise the ability of bad actors to hide their abuse of digital advertising tools by increasing digital advertising transparency.

d. Calibrate or restrict the use of digital advertising tools.

e. Prevent user data from being used to manipulate people.

18. Technology companies should implement measures such as the following:

a. Enable users to meaningfully assess the credibility of the information they receive, including by:

i. Disclosing when content is sponsored, and by whom, especially for all forms of digital advertisements.

ii. Using tags to indicate relevant contextual information, such as whether an account is managed by a bot, or the credibility of the source of information.

b. Enable researchers and experts to find solutions to the problem, by providing them with information on how online falsehoods spread, so that they can better understand disinformation tactics and techniques.

c. Inform users of how the design of their platforms influences the content that they receive.

d. Contribute resources to:

i. Developing technologies that could advance the integrity of information on the Internet, such as the automated detection of online falsehoods, effective detection of hidden identities behind advert purchasing, blockchain-based tools, and fact-checking applications.

ii. Strengthening the wider information ecosystem, including fact-checking initiatives and quality journalism.

19. Technology companies should demonstrate their accountability by being transparent about the nature and extent of the spread of online falsehoods on their platforms, and the effectiveness of their responses.

Specifically, technology companies should undertake regular voluntary reporting and independent audits. These should cover the following areas:

a. The scale and nature of the problem of online falsehoods on their platforms, and potential risk areas;

b. How their platforms and products have been used to spread online falsehoods;

c. The measures taken to address the problem, and to equip informed users; and

d. How effective these measures have been.

20. The government should consider both legislation and other forms of regulation of technology companies to achieve the objectives of Recommendations 17 to 19.

Legislation would be needed particularly for measures to be taken in response to an online falsehood, since Facebook, Google, and Twitter have a policy of generally not acting against content on the basis that it is false.

The government should consider whether there is a need for new areas of regulation, such as of targeted advertising and the use and collection of personal data on online platforms for micro-targeting.

To complement legislation, the government should consider regulatory approaches such as working with technology companies and other industry stakeholders to develop a voluntary code of practice or guidelines to tackle online falsehoods. Where appropriate, the government should collaborate with technology companies to develop solutions.

21. The government should explore how it can facilitate the efforts of start-ups and companies to develop platforms, products and technologies which are designed to ensure the integrity of our online information ecosystem.

22. The government should study the specific countermeasures proposed by expert representors, and come up with a national-level strategy and coordinated approach for countering State-sponsored disinformation operations.

Top photo compiled from pictures from the Parliament website.