
General Election 2025

LITTLEREDDOT

Alfrescian (Inf)
Asset

Bill to combat digitally manipulated content, deepfakes during elections tabled in Parliament


The Returning Officer can issue corrective directions to those who publish prohibited online election advertising content under the proposed new law. ST PHOTO: KUA CHEE SIONG

Chin Soo Fang
Senior Correspondent

Sep 11, 2024


SINGAPORE - A new Bill will put in place measures to counter digitally manipulated content during elections, including misinformation generated using artificial intelligence (AI), commonly known as deepfakes.
The proposed safeguards under the Elections (Integrity of Online Advertising) (Amendment) Bill will apply to all online content that realistically depicts a candidate saying or doing something that he or she did not.
This includes content made using non-AI techniques like Photoshop, dubbing and splicing.
If the Bill is passed, candidates will be able to ask the Returning Officer (RO) to review content that has misrepresented them. A false declaration of such misrepresentation is illegal and could result in a fine or loss of a seat.
Others can also make requests to review such content, which is set to be made illegal from the time the Writ of Election is issued to the close of polling.
The move comes ahead of a general election that must be held by November 2025.
The RO can issue corrective directions to those who publish prohibited online election advertising content under the proposed new law. Social media services that fail to comply may be fined up to $1 million upon conviction, while all others may be fined up to $1,000, jailed for up to a year, or both.

Corrective actions include taking down the offending content, or disabling access by Singapore users to such content during the election period.
Minister of State for Digital Development and Information Rahayu Mahzam tabled the Bill in Parliament on Sept 9. It will be debated at the next available sitting and if passed, will amend the Parliamentary Elections Act and the Presidential Elections Act to introduce the new safeguards.
To be protected under it, prospective candidates will first have to pay their election deposits and consent to their names being published on a list that will be put up on the Elections Department’s website some time before Nomination Day.

If they choose to do so, it will be the first time that the identities of prospective candidates are made public before Nomination Day.
The measures will also cover successfully nominated candidates from the end of Nomination Day to Polling Day.
The Ministry of Digital Development and Information (MDDI) said in a press release that while the Government can already deal with individual online falsehoods against the public interest through the Protection from Online Falsehoods and Manipulation Act (Pofma), targeted levers are needed to act on deepfakes that misrepresent candidates during elections.
“Misinformation created by AI-generated content and deepfakes is a salient threat to our electoral integrity,” said an MDDI spokesperson.
“We see this new Bill not as a replacement for Pofma, but rather as a means to augment and sharpen our regulations under the online election advertising regime, to shore up the integrity of our electoral process.”

The spokesperson added that with Pofma, the Government will respond when it knows what the facts are - for example, when someone makes a false claim about the reserves or housing prices.
“However, in the case of deepfakes featuring political candidates, it is much more difficult for the Government to establish what an individual said or did not say, did or did not do. Therefore, we do need the individual to come forward and say that this is a misrepresentation.
“While we can use a set of technological tools to assess whether the content is AI-generated or manipulated, these tools give us a certain confidence level, but it is not 100 per cent. So there is quite a lot of weight given to what an individual claims is the truth, and this is where it differs from Pofma.”
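The triage the spokesperson describes - automated detection that yields an imperfect confidence level, supplemented by the depicted candidate's own declaration when the tools are inconclusive - can be sketched in Python. The function name, thresholds and labels below are illustrative assumptions, not part of the Bill or any actual MDDI tooling:

```python
def triage(detector_score: float, high: float = 0.9, low: float = 0.1) -> str:
    """Route a piece of content based on a detector's confidence score.

    detector_score: a hypothetical model's estimated probability (0.0-1.0)
    that the content is AI-generated or manipulated. Scores between the
    two thresholds are inconclusive, so the decision falls back to the
    depicted candidate's own declaration.
    """
    if not 0.0 <= detector_score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if detector_score >= high:
        return "likely manipulated"
    if detector_score <= low:
        return "likely authentic"
    return "inconclusive: rely on candidate's declaration"
```

In this sketch, only the clear-cut scores are decided automatically; the wide middle band reflects the spokesperson's point that the tools "give us a certain confidence level, but it is not 100 per cent", which is why the candidate's attestation carries significant weight.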
Fraudsters have disrupted elections in many countries, including Slovakia and India. More recently, fake videos of presidential nominees Kamala Harris and Donald Trump have proliferated on social media ahead of what is widely billed as America's first AI election, in November.
In response, there has been a growing momentum worldwide to deal with deepfakes during elections.
For example, South Korea implemented a 90-day ban on political AI-generated content before its election in April.
Its National Election Commission said it detected a total of 129 deepfakes deemed to violate the laws on elections of public officials between Jan 29 and Feb 16.
In February, Brazil also banned synthetic content that harms or favours a candidacy during elections.

Closer to home, then Prime Minister Lee Hsien Loong warned the public of deepfake videos circulating online in December 2023 which showed him and then Deputy Prime Minister Lawrence Wong promoting investment platforms. The videos used AI to mimic their voices and facial expressions.
Minister for Digital Development and Information Josephine Teo told Parliament in January that Singapore needs to grow new capabilities to keep pace with scammers and online risks.
She announced a new arsenal of detection tools Singapore is developing to tackle the rising scourge of deepfakes and misinformation. The tools will be designed under a new $50 million initiative to build online trust and safety.
Beyond elections, a new code of practice will be introduced to tackle deepfakes and other forms of manipulated content.
The Infocomm Media Development Authority (IMDA) will introduce the code requiring social media services to put in place measures to address digitally manipulated content.
This will ensure that they do more to gatekeep, safeguard and moderate content on their platforms. IMDA will engage social media services in the coming months to work out the details of the code.
 


askST: What are the proposed measures to deal with deepfakes during Singapore’s elections?


The Bill will introduce new measures to protect Singaporeans from digitally manipulated content during elections, including deepfakes. PHOTO: ST FILE

Chin Soo Fang
Senior Correspondent

Sep 09, 2024


SINGAPORE - A new Bill was tabled by the Ministry of Digital Development and Information (MDDI) in Parliament on Sept 9 to deal with false content put out to sway elections here.
Called the Elections (Integrity of Online Advertising) (Amendment) Bill, it will introduce new measures to protect Singaporeans from digitally manipulated content during elections, including artificial intelligence-generated misinformation, commonly known as deepfakes.
From the issuance of the Writ of Election to the close of polls on Polling Day, the Bill proposes to prohibit the publication of digitally generated or manipulated online election advertising (OEA) that depicts a candidate saying or doing something that he or she did not. It will apply only to OEA depicting people who are running as candidates for an election.
The Bill will amend the Parliamentary Elections Act and the Presidential Elections Act to introduce the new safeguards.

Q: Why is this Bill needed?

A: Misinformation created by AI-generated content and deepfakes has been used during elections in other countries, and is also a salient threat to Singapore’s electoral integrity. Such content can realistically depict the appearance, voice or action of a candidate, which can deceive or mislead the public. Voters must be able to make informed choices based on facts and not misinformation.
In Singapore, there have been cases of AI-generated content being used to impersonate individuals, including political office-holders. While such content has so far primarily been used for scams, it can also be deployed during an election.

Q: How is it different from Pofma?

A: While the Government can already deal with individual online falsehoods against the public interest through the Protection from Online Falsehoods and Manipulation Act (Pofma), the Bill is targeted specifically at Singapore’s election periods.

If passed, it will allow an election candidate to report any misinformation about his or her words or actions. It also enables the Returning Officer (RO) to address deepfakes swiftly by directing their removal, given the fast-paced nature of information flow in an election context.

Q: Why limit the Bill to only elections?

A: It is focused on the challenges and risks posed by AI-generated content in the high-stakes context of elections. Other laws such as Pofma and the Online Criminal Harms Act are safeguards that are in place to tackle such threats during non-election periods.
A new code of practice (COP) will also be introduced soon to ensure that social media companies do more to gatekeep, safeguard and moderate content on their platforms. Details of the COP will be shared at a later stage, said MDDI.

Q: What is and isn’t covered under the Bill?

A: Content that will be covered includes realistic audiofakes or robocalls, and manipulated content using non-AI techniques such as splicing and photoshopping of videos that may affect electoral outcomes.
The Bill covers not only those who publish fresh content: those who pay to boost, share or repost such content will also be liable.
What is not covered includes animated cartoons and characters, cosmetic alterations, entertainment content and memes, and campaign posters.
Also not covered is news published by authorised news agencies for factual reporting on prohibited content, as well as content communicated electronically between individuals that is of a private nature.

Q: What are some of the actions that can be taken against misinformation?

A: The RO can issue corrective directions to individuals who publish such content. He can also issue such directions to social media services and internet providers to take down the content, or to disable access by Singapore users to such content during the election period. Failure to comply would be an offence punishable by a fine or imprisonment, or both, on conviction.

Q: How will the public be informed about any misinformation?

A: The public will be informed by the authorities when there is a need to remove certain content.
The candidate can also put out his or her own press statement or social media post to inform the public of any misinformation, given the urgency and fast-paced nature of an election period.
Independent fact-checkers and media outlets may also separately do their checks and debunk such falsehoods.
 


Bill to combat deepfakes during election timely despite challenges: Analysts


The proposed measures will be in force from the issuance of the Writ of Election to the close of polling on Polling Day. PHOTO: ST FILE

Chin Soo Fang
Senior Correspondent

Sep 11, 2024

SINGAPORE – Proposed measures to combat deepfakes during elections are timely given the proliferation of such content worldwide, said analysts. But the effectiveness of such laws will depend on factors such as enforcement and public awareness, they added.
The Elections (Integrity of Online Advertising) (Amendment) Bill, tabled in Parliament on Sept 9, will prohibit the publication of digitally manipulated content during elections. This refers to content that realistically depicts an election candidate saying or doing something that he or she did not, and includes misinformation generated using artificial intelligence (AI) – commonly known as deepfakes.
These measures will be in force from the issuance of the Writ of Election to the close of polling on Polling Day, with the Returning Officer empowered to issue corrective directions to those who publish such content.
Professor Mohan Kankanhalli, director of NUS’ AI Institute, said the problem of misinformation and disinformation requires a combination of technical solutions, regulation and legislation, and public education. “These laws not only serve as deterrents, they also provide legal recourse post-publication. Such legislation is therefore necessary.”
He added that while such laws signal a proactive stance, enforcement in other countries has been challenging.
“Detecting and proving malicious intent behind deepfakes can be difficult,” he said. “However, these capabilities are constantly improving.”
Prof Kankanhalli cited the example of the 2020 US presidential election, where deepfakes were a concern, though their direct use was limited.

One notable case involved a manipulated video of House Speaker Nancy Pelosi, which was slowed down to make her appear intoxicated or cognitively impaired. It showed how video manipulation could mislead the public, and demonstrated the potential for deepfakes to be used as a political weapon, he said.
He also cited the example of the 2019 Indian general election, when deepfakes were used by the Bharatiya Janata Party (BJP) to create manipulated videos for campaign purposes. On one occasion, the party produced videos of Delhi BJP president Manoj Tiwari, in which he appeared to speak in different dialects of Hindi and Haryanvi. The videos were designed to reach specific regional audiences more effectively without requiring him to physically record the same speech multiple times.
“Though this use of deepfake technology wasn’t meant to deceive in a malicious sense, it raised ethical concerns about the potential for such technology to mislead voters if misused,” Prof Kankanhalli said, adding that this incident also marked one of the first high-profile cases where deepfake technology was used in a political campaign.

Assistant Professor Roy Lee, from SUTD’s Information Systems Technology and Design pillar, noted that concerns have also been raised about deepfakes for the upcoming 2024 US presidential election. Manipulated videos targeting Indonesian politicians had also emerged during Indonesia’s recent election, he said.
In response to this growing problem, laws aimed at curbing deepfakes have been introduced in several countries.
For example, the US state of California passed a law in 2019 to criminalise the distribution of manipulated media such as deepfakes intended to mislead voters. Specifically, it prohibits individuals or entities from distributing such media with malice within 60 days of an election.
In 2022, the European Union enacted the Digital Services Act, which imposes stricter regulations on digital platforms, including measures to prevent the spread of manipulated content.

Prof Lee said: “These laws have been part of broader efforts to prevent election interference, although their effectiveness largely depends on timely detection and public awareness.”
Mr Benjamin Ang, head of NTU’s Centre of Excellence for National Security, noted that the US has also banned the use of AI-generated voices in robocalls, including those used in election campaigns to spread misinformation and mislead voters.
The decision came after AI-generated robocalls impersonating President Joe Biden sought to discourage voting in the New Hampshire primary election in January. Some experts noted that enforcing this law against foreign actors seeking to interfere in US elections may still be challenging, though it sends a clear message that exploiting AI to mislead voters will not be tolerated.
“The law is only one part of the battle to combat deepfakes and protect electoral fairness and integrity because this also requires vigilance and cooperation from tech platforms where the deepfakes are circulating, public education about the dangers of spreading deepfakes, and our own personal choice to stop and think very seriously before we share any videos or other content,” said Mr Ang.
He added: “The impact of this Bill, like all other laws, should be to set standards of behaviour by which our society can maintain order, resolve disputes, and protect rights.”
Dr Carol Soon, principal research fellow at the Institute of Policy Studies and adjunct principal scientist at the Centre for Advanced Technologies in Online Safety, said deepfakes also make it easier for political candidates to falsely claim genuine content to be manipulated or generated by AI, allowing them to benefit from the “liar’s dividend” in a polluted information ecosystem.
For example, during the recent Turkish election, a video that showed compromising images of an electoral candidate was said to be a deepfake when it was in fact real.
“This proposed Bill is surgical as it is focused both in terms of the defined offence and timeframe. The Bill thus seeks to strike the fine balance between upholding election integrity and allowing for non-harmful use of generative AI such as entertainment, education and creative usage,” she said.
To be protected under the proposed Bill, prospective candidates will first have to pay their election deposits and consent to their names being published on a list that will be put up on the Elections Department’s website some time before Nomination Day. If they choose to do so, it will be the first time that the identities of prospective candidates are made public before Nomination Day.
The proposed law will also cover successfully nominated candidates from the end of Nomination Day to Polling Day.
On the early disclosure of candidates’ names, Prof Lee said this primarily enhances transparency in the electoral process.
“This transparency can help mitigate the risk of misinformation and deepfake-related content as voters will have more time to scrutinise information about candidates and ensure its accuracy,” he said. “It also provides more time for online platforms and regulatory bodies to monitor and take corrective actions against manipulated content targeting these candidates.”
 