Today (April 1), a bill to tackle online falsehoods was introduced in Parliament.
This comes after a select committee was formed last year to investigate the problem of deliberate online falsehoods and recommend strategies to deal with them.
On March 29, Prime Minister Lee Hsien Loong indicated that the bill will allow the government to hold online news sources and platforms responsible for the spread of deliberate online falsehoods.
But what exactly does this mean? And should you be scared?
What is a falsehood?
Let's start with the basics.
The bill defines a falsehood as a statement of fact that is false or misleading.
This means that the bill does not cover opinions, criticism, satire, and parody.
What's the difference between an opinion and a statement of fact?
An opinion is a view or judgement formed about something, not necessarily based on fact or knowledge.
An example of an opinion might be "the Government is to blame for the crisis of inequality". This is not covered under the bill.
An example of a false or misleading statement of fact, on the other hand, might be something like "The Singapore Government has declared war against all our neighbours".
So how will the bill work in action?
For action to be taken, two criteria must be met:
Firstly, there must be a false statement of fact.
Secondly, it must be determined that it is in the public interest for the government to take action.
Wait, what do you mean by public interest?
Public interest is a broad term, chosen because falsehoods appear in a wide variety of circumstances. It allows each situation to be assessed in its own context.
Under the bill, the public interest will include the following:
- The security of Singapore
- Public health, public safety, public tranquillity, and public finances
- Friendly relations of Singapore with other countries
- The outcome of an election or referendum
- Preventing the incitement of enmity, hatred, or ill-will between different groups of persons
- Preventing a diminution of public confidence in the work of public institutions
So what happens next?
In the scenario where a falsehood that undermines the public interest is being digitally disseminated, the bill provides for two actions to counteract the damage:
- a correction direction
- a 'take down' direction
In most cases, a correction direction should suffice, while the most serious cases will be met with a 'take down' direction.
What is the correction direction?
This will require online platforms or prominent re-sharers of the falsehood to put up a correction alongside the falsehood.
This means that the falsehood will remain published, but it must now carry a correction setting out the actual facts. People, now equipped with the facts, can then decide for themselves what to make of it all.
The facts will be provided either by the government or by a credible third party.
What about the 'take down' direction?
A 'take down' direction will be issued when there is a need to address serious harm.
This is where the falsehood will have to be, well, taken down.
For transparency, notice of the 'take down' direction will have to be published on the platform, and in some cases a further correction may also be required.
Should I be scared?
If you intend to maliciously spread falsehoods, then yes, you should probably be a little scared and hopefully deterred.
But if not, it's important to note that neither the correction direction nor the 'take down' direction is a criminal punishment. The focus here is on counteracting damage rather than punishing.
But how about freedom of speech?
The idea with the bill is not to limit freedom of speech but rather to inform people of facts so that genuine reasoned discussions can take place.
In theory, the spread of falsehoods could undermine the quality of free speech by drowning out valid ideas.
So what will happen to fake accounts or bots on social media?
As a BBC report indicates, troll farms with their fake accounts and bots are a real and serious threat today.
Action can be taken requiring a prescribed internet intermediary, such as Facebook, to disable a fake account or bot from communicating further in Singapore when it has been found to:
- Spread a falsehood in Singapore that undermines the public interest, or
- Engage in coordinated inauthentic behaviour.
And what if an online source repeatedly spreads falsehoods that undermine the public interest?
If a site publishes at least three different falsehoods within six months -- each with an active direction in force against it -- a declaration can be issued against the site.
This declaration will have to be published on the site, and while the site will not be shut down, its ability to profit from falsehoods will be cut off.
The declaration may be lifted in the future if the site exhibits good behaviour.