Global movements in the digital age: How social media and AI shape political dynamics
According to Emerson Brooking, the director of strategy and resident senior fellow at the Digital Forensic Research Lab (DFRLab) of the Atlantic Council Technology Programs, social media platforms have not only revolutionized the dissemination of information but also transformed the way it is used to mobilize political and social movements worldwide, a Kazinform News Agency correspondent reports.
Speaking at the Qazaq Forum on Thursday, Emerson Brooking shared his views on the transformation brought about by the digital age, characterized by the rise of social media.
“It has profoundly transformed global landscapes, including the realm of politics. This transformation is significantly amplified by the advent of Artificial Intelligence (AI), which introduces both beneficial and detrimental effects on national security”, he said.
His key observation is that rapid advances in internet and social media technologies have accelerated decision-making and multiplied the communication channels between combatants and a broader audience. That wider audience can significantly influence conflicts, potentially steering their outcomes in one direction or another.
“This dynamic is clearly demonstrated in the ongoing conflict between Israel and Hamas in Gaza. Although the conflict occurs within a geographically confined area, its digital footprint is vast, encompassing the stories and opinions people share online. These digital interactions can, in turn, affect military strategies and, consequently, the lives of people on the ground”, says Brooking.
DFRLab's current work focuses on using open-source media and social media signals to detect potential war crimes and legal violations, as well as information manipulation and disinformation, including the presence of fraudulent accounts.
“This forms the core of our social media studies, underscoring their significance in preventing violence and leading us to consider the role of AI.”
Emerson Brooking remains skeptical of the view that AI is the revolutionary tool it is often hyped to be. However, he acknowledges that it is significantly transforming basic online interactions.
“Simple AI tools can now build networks of fake social media profiles that spread lies in multiple languages, making them seem fluent in foreign languages.”
In the past, when people spread falsehoods online, they often simply sent the same message repeatedly. AI tools now allow for a more sophisticated approach that is harder to detect. These tools enable malicious users to cause more harm and reach a broader audience quickly, which is a major downside.
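Brooking's contrast between old-style copy-paste amplification and AI-generated variation can be illustrated with a simple detection heuristic. The sketch below is a hypothetical Python example, not DFRLab's actual tooling; the account names, messages, and similarity threshold are illustrative assumptions. It flags accounts that push near-identical text, the older pattern, while an AI-paraphrased version of the same claim slips past the check.

```python
from collections import defaultdict
from difflib import SequenceMatcher

# Hypothetical sample of (account, message) pairs for illustration only.
posts = [
    ("acct_a", "Candidate X secretly signed the treaty last night!"),
    ("acct_b", "Candidate X secretly signed the treaty last night!"),
    ("acct_c", "Candidate X secretly signed the treaty last night!"),
    ("acct_d", "Reports suggest Candidate X may have quietly approved the treaty."),
]

def near_duplicate_groups(posts, threshold=0.9):
    """Group accounts whose posts are (near-)identical -- the copy-paste pattern."""
    groups = defaultdict(list)
    for account, text in posts:
        for key in groups:
            # Treat texts above the similarity threshold as the same message.
            if SequenceMatcher(None, key.lower(), text.lower()).ratio() >= threshold:
                groups[key].append(account)
                break
        else:
            groups[text].append(account)
    # Only groups with more than one account look like coordinated repetition.
    return {text: accts for text, accts in groups.items() if len(accts) > 1}

for text, accounts in near_duplicate_groups(posts).items():
    print(f"{len(accounts)} accounts pushed the same text: {text!r}")
```

Because generative tools can rephrase the same claim endlessly, a simple text-matching heuristic like this loses much of its power, which is the detection problem Brooking describes.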
“Another point to consider is the risk of attributing too much negative impact to technology. For example, it might be tempting to blame recent protests in the United States, or those sparked by AI-related issues, on technology. However, this is likely an oversimplification. While AI can expedite the spread of certain messages, the root of spontaneous protests, both in the U.S. and globally, often lies in deeply held beliefs in support of democratic causes. Overemphasizing AI's role risks diminishing individuals' genuine convictions and freedom of expression”, says Brooking.
With AI tools like ChatGPT or Gemini available to the public, such misperceptions may become more common. Even so, the speaker says it is more fitting to describe the current moment as a “hype” movement.
“General AI does raise concerns, especially if it starts controlling our lives. However, when people like Elon Musk talk about general AI, it often sounds like they're trying to sell it by saying it's so powerful it needs to be stopped. It's a clever sales tactic, but it's probably not that close to reality”, adds Brooking.
The speaker has been following discussions about AI hype since 2017, and his team has experimented with earlier AI models before ChatGPT burst onto the scene in late 2022.
“Back then, there were fears of a deep fake apocalypse, with a single piece of deceptive content potentially fooling everyone and leading to catastrophic outcomes. However, these scenarios have not come to pass, nor do they seem likely to. Instead, deep fakes and AI technologies have made it more challenging to detect the truth”, says Brooking.
In conclusion, the speaker urges a clearer understanding of what AI can realistically do, advocating a balanced view that acknowledges its potential without giving in to exaggerated fears or expectations.
“Exploring AI concepts reveals that its evolving terminology can create significant misunderstandings across linguistic and cultural contexts. For example, the terms ‘Artificial Intelligence’ in the USA and ‘Artificial Intellect’ elsewhere illustrate how language nuances shape leaders' perceptions and expectations, often colored by sci-fi portrayals. These unrealistic expectations could be addressed by those working in media and technology literacy, who can together bridge these gaps and clarify the true potential and limitations of AI”, the speaker concludes.