The Select Committee began its public hearings last Wednesday (March 14).
The committee, appointed in January, aims to study the problem of "deliberate online falsehoods" and make recommendations on how Singapore should respond.
To do this, they're in the process of consulting a wide range of people from all over the world, not just Singapore. They include academics, security experts, representatives of religious bodies and organisations, media companies and even students.
In all, 79 individuals and organisations will be meeting with the Committee, and on Thursday, March 22, it will hear from the internet giants: Facebook, Twitter and Google.
In a session later in the afternoon, telcos Singtel and StarHub will also have an audience with the Committee.
Ahead of these hearings, everyone appearing before the Committee sent in written submissions stating their respective positions and views on the issue of deliberate online falsehoods. As the internet giants take their oaths to speak, here's what they submitted.
Facebook: Taking down fake accounts & spammers, building tools with info about articles
Facebook is represented by two of its public policy executives, one for Southeast Asia and one for the Asia-Pacific.
Their appearance at the hearing comes on the back of an ongoing American investigation into the abuse of Facebook's platform to spread outright fake stories aimed at influencing the outcome of the 2016 U.S. presidential election, and, just this week, revelations of one of the largest data leaks in the company's history.
Here are the steps that Facebook said they will take:
1. Removing fake accounts
- Facebook has an authentic name policy, and will strive to find and remove fake accounts that are often used to spread falsehoods.
- They will also track down abuse and coordinated behaviour that goes against their community standards.
2. Increasing transparency for ads
- Facebook says falsehoods are sometimes spread online through targeted ads, so by the end of this year, any Facebook user will be able to see all the ads that any Page is running, even if they are not in the audience for those ads.
3. Hunting down spammers
- Some spammers make money by disguising themselves as legitimate news publishers, posting hoax stories that lure internet users to their sites, which are often filled with ads.
- Facebook says they will hunt these spammers down to make sure they cannot run such ads or make money from them, and repeat offenders will be banned from advertising on Facebook.
4. Improving the news feed
- Down-ranking "low quality" information such as spam and fake news, i.e. making it appear lower in users' news feeds.
- Some updates that reduce the appearance of "click bait" or "sensationalism" in news feeds have already been made.
- If many people read an article but few go on to share it, Facebook will also take that as a signal that the article has "misled people in some way", and incorporate that factor into its ranking (a rough sketch of how such a signal could work follows below).
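Facebook has not published the actual formula behind this signal, so here is only a minimal, hypothetical sketch of how a read-versus-share ratio could feed into a down-ranking score. All function names, the baseline share rate and the thresholds below are our own illustration, not Facebook's system:

```python
# Hypothetical illustration only: Facebook has not disclosed its ranking
# formula. This sketches how a "read but rarely shared" signal could be
# folded into a feed-ranking score.

def misleading_signal(reads: int, shares: int, min_reads: int = 1000) -> float:
    """Return a penalty in [0, 1]: high when many people read an article
    but very few go on to share it."""
    if reads < min_reads:          # too little data to judge
        return 0.0
    share_rate = shares / reads    # fraction of readers who shared
    baseline = 0.05                # assumed typical share rate (illustrative)
    return max(0.0, 1.0 - share_rate / baseline)

def rank_score(base_score: float, reads: int, shares: int) -> float:
    """Down-rank content in proportion to the misleading signal."""
    penalty = misleading_signal(reads, shares)
    return base_score * (1.0 - 0.5 * penalty)   # at most halve the score

# Example: 50,000 reads but only 200 shares -> strong penalty, score drops
print(rank_score(base_score=10.0, reads=50_000, shares=200))  # ~5.4
```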
5. Tools to help Facebook users better understand content they encounter
- These include:
- a tool that provides additional information about articles without users needing to go elsewhere, and
- a tool that provides tips on how to spot fake news.
- Facebook also plans to train journalists, support events that promote greater media and news literacy, and work with government officials to improve their cyber security awareness, so they can better deal with the risk of threats and abuse during high-profile events such as elections or periods of political turmoil.
6. Enabling users to be more "civically engaged"
- They plan to work with government agencies such as the Elections Department and Ministry of Communications and Information ahead of the next General Election.
- They will also strengthen their efforts to combat hate speech on their platform, by updating their community standards to reflect changing trends, as well as the local context of each market they operate in.
"This underscores our commitment to ensuring that deliberate online falsehoods do not cause social discord or disrupt the integrity of the political process globally and here in Singapore".
Google: Not positioned to evaluate disputes regarding facts
Google's main vehicle of serving news stories to searchers is Google News, which presents articles from more than 80,000 sources it deems to be news outlets.
Its conditions for these sources include being transparent about their owners and primary purposes, offering timely reporting and analysis of events, showing legitimate author bylines, biographies and contact information, and limiting their use of distracting ads on their pages.
However, it says in its submission that it will not take down content from its search listings "unless pursuant to a valid legal request", as it says it is "not positioned to evaluate disputes related to facts or characterizations laid out in a news article".
But here's what it is doing nonetheless, apart from external initiatives, engagement and training:
1) Allowing publishers to flag "fact-checked" articles
First rolled out in the US, UK, Germany, France, Brazil, Mexico and Argentina, this feature is now available globally. It lets publishers show users where a claim came from, as well as their verdict on its veracity.
It involves quite a few steps and Google-hosted tools and platforms, though, like a widget called "Share the Facts" and a markup called ClaimReview from Schema.org (a sample of what that markup looks like is below). To our understanding, no publisher in Singapore has used this yet.
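For a sense of what the ClaimReview markup involves, here is a minimal sketch of the JSON-LD payload a publisher might embed on a fact-check page, built in Python for readability. The claim, names and URLs are invented for illustration, and the exact fields a publisher uses may vary:

```python
import json

# Illustrative ClaimReview payload (schema.org/ClaimReview). A publisher
# would embed this JSON-LD in a <script type="application/ld+json"> tag
# on the fact-check page. All names, URLs and the claim are made up.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.sg/checks/flooding-claim",
    "claimReviewed": "A viral post claims all MRT stations flooded on March 1.",
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Organization", "name": "Example viral page"},
        "datePublished": "2018-03-01",
    },
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": "1",
        "bestRating": "5",
        "worstRating": "1",
        "alternateName": "False",  # the verdict shown to searchers
    },
    "datePublished": "2018-03-02",
}

print(json.dumps(claim_review, indent=2))
```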
2) Publisher information panels in searches
This feature, currently only available in the US, displays panels that show the awards a publisher has received and the topics it often covers, so that a person who searches for the publisher by name can learn more about it.
3) Making sure fake/misleading sites don't surface in search or benefit from Google ads
Citing the example of a site called "abcnews.com.co", Google said the site does not qualify to display Google ads. It also hires external search quality raters: humans who evaluate how well Google's search results address what a user is looking for.
Google has also updated its guidelines for these raters to factor in misleading site domain names.
4) Purging misleading, inappropriate & harmful ads, and barring errant sites from advertising
Google says it purged 1.7 billion ads that violated its advertising policies in 2016, more than twice the number they took down a year earlier.
Also in 2016, Google removed more than 100,000 publishers who violated its advertising policies from AdSense (meaning they were no longer allowed to display Google ads or earn the associated revenue), and blocked ads from more than 300 million videos that did not comply with its guidelines.
They're also improving their reporting processes so they can act more quickly in response to folks who flag misleading ads or objectionable content on websites.
Twitter: Internal actions are not long-term responses to fake news
From Twitter's perspective, truth surfaces through the real-time interaction of journalists, experts, engaged citizens and even government bodies to affirm, correct and challenge the claims that people make.
Nonetheless, Twitter reaches out to government agencies and nonprofits, verifies authentic accounts, and conducts training in order to elevate factual information across its platform in times of emergency and crisis.
In their submission, they outlined what they have been doing:
1) Detecting & blocking spam content and coordinated spam accounts
Following an initiative it introduced last year, Twitter now detects and blocks about 523,000 suspicious logins generated with malicious automation each day.
It also said that in December last year, it identified and challenged more than 6.4 million suspicious accounts across its platform globally every week, a 60 per cent improvement from two months earlier.
It has also developed new techniques to catch these accounts, to discover hacked or compromised accounts, and to identify accounts linked to or started by the same person, alongside checks like captchas and phone verification to ensure that humans are behind the accounts.
2) Purging accounts that promote terrorism
Twitter said it enhanced its tools to detect accounts promoting terrorism and terrorist-related activity, improving its detection rate from one-third of accounts removed in early 2016 to 95 per cent last year. 75 per cent of these, they added, were found and removed even before they sent their first tweet out.
On the whole, all three companies (Facebook, Google and Twitter) said they were committed to supporting the fight against deliberate online falsehoods, both online and offline.
However, only Facebook offered its position on whether additional legislation would be helpful, a big topic in the rounds of hearings preceding Thursday afternoon's sessions:
Facebook: Legislation isn't the best approach to addressing the issue
Facebook pointed out that some of Singapore's existing laws and regulations already deal with hate speech, defamation and the spreading of fake news, albeit with limitations.
They include:
- the Telecommunications Act,

Under Section 45 of the Telecommunications Act, any person who knowingly transmits any false or fabricated message, or causes any false or fabricated message to be transmitted, can be prosecuted.
Limitation: This does not apply if the person did not know that the message was false or fabricated.
- the Protection from Harassment Act,

This one is different from the other laws mentioned, as it provides a form of judicial remedy: any individual against whom a false statement has been made can apply to Court for protection. The Court can grant an order that the statement shall not be published, or continue to be published.
Limitation: This only covers falsehoods made about the applicant, so it cannot be used to serve the broader public interest unless every affected individual applies to Court for protection.
- the Penal Code, and
Under Section 298 of the Penal Code, any person who causes any matter to be seen or heard by another person, with the deliberate intention to wound the racial or religious feelings of that person, can be charged.
Section 298A of the Penal Code goes further, punishing any person who, by words or otherwise, knowingly promotes disharmony, hatred or ill-will between different racial and religious groups, or commits any act that is prejudicial to the maintenance of harmony between them.
Limitation: Although both provisions apply to any kind of online communication, they only cover falsehoods that touch on racial or religious issues. They also require the falsehood to be spread knowingly, and do not apply otherwise.
- the Maintenance of Religious Harmony Act.

This Act empowers the Minister for Home Affairs to make a restraining order against an individual or a person in a position of authority in any religious group, if said person causes feelings of ill-will or hostility between different religious groups, promotes a political cause, carries out subversive activities, or incites disaffection against the government under the guise of practising a religious belief.
Limitation: No restraining order has ever been issued under this Act, amid concerns over a lack of checks on the Minister's power, the difficulty of distinguishing between religious and political matters, and a lack of transparency in the process.
Facebook also advised the government to first study how other governments have approached the issue, "instead of rushing into legislation or adopting knee-jerk responses" that could be counter-productive to its efforts.
Top image via CBS News/YT