AI use on social media has the potential to sway voter sentiment

The use of artificial intelligence (AI) on social media has been flagged as a potential threat to sway voter sentiment in the upcoming 2024 presidential election in the United States.
Major tech companies and U.S. government entities have been actively monitoring the situation surrounding disinformation. On Sept. 7, the Microsoft Threat Analysis Center, a Microsoft research unit, published a report claiming "China-affiliated actors" are leveraging the technology.
The report says these actors utilized AI-generated visual media in a "broad campaign" that heavily emphasized "politically divisive topics, such as gun violence, and denigrating U.S. political figures and symbols."
It says it anticipates that China "will continue to hone this technology over time," and it remains to be seen how the technology will be deployed at scale for such purposes.
On the other hand, AI is also being employed to help detect such disinformation. On Aug. 29, Accrete AI was awarded a contract by the U.S. Special Operations Command to deploy artificial intelligence software for real-time disinformation threat prediction from social media.
Prashant Bhuyan, founder and CEO of Accrete, said that deepfakes and other "social media-based applications of AI" pose a serious threat.
“Social media is widely recognized as an unregulated environment where adversaries routinely exploit reasoning vulnerabilities and manipulate behavior through the intentional spread of disinformation.”
In the previous U.S. election in 2020, troll farms reached 140 million Americans every month, according to MIT.
Troll farms are "institutionalized groups" of internet trolls that seek to interfere with political opinions and decision-making.
Related: Meta’s assault on privacy should serve as a warning against AI
Regulators in the U.S. have been eyeing ways to regulate deepfakes ahead of the election.
On Aug. 10, the U.S. Federal Election Commission unanimously voted to advance a petition that would regulate political ads using AI. One of the commission members behind the petition called deepfakes a "significant threat to democracy."
Google announced on Sept. 7 that it will be updating its political content policy in mid-November 2023 to make AI disclosure mandatory for political campaign ads.
It said the disclosures will be required where there is "synthetic content that inauthentically depicts real or realistic-looking people or events."