Speakers
Synopsis
In the evolving landscape of information warfare, Large Language Models (LLMs) and Generative AI play an increasingly significant role. This presentation examines the application of these technologies to generating and mitigating misinformation and disinformation in the contexts of government elections and international relations, with notable examples including the tensions between Ukraine and Russia and between supporters of Palestine and supporters of Israel. The ease with which Generative AI can produce vast quantities of convincing, ideologically charged content poses a unique challenge to the integrity of public discourse.
Our research (https://doi.org/10.1109/TCDS.2024.3377445) introduced a novel algorithm that manipulates LLMs into producing content with a tailored blend of accuracy and fabrication. The method enables precise control over the ideological bent of the generated information, allowing strategic manipulation of public opinion through semantic and ideological shifts. This capability represents a paradigm shift in information warfare, enabling actors to saturate the information space with targeted narratives that are difficult to detect, censor, or counter.
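The algorithm itself is described in the linked paper; purely as an illustrative sketch (not the published method), the core idea of a tunable blend of accurate and fabricated claims can be pictured as a seeded sampler that interleaves two claim pools at a chosen fabrication ratio. The function name and placeholder claims below are hypothetical.

```python
import random

def blend_claims(true_claims, fabricated_claims, fabrication_ratio, seed=0):
    """Return a claim sequence in which roughly `fabrication_ratio` of the
    items are drawn from the fabricated pool (toy sketch, not the paper's
    algorithm)."""
    if not 0.0 <= fabrication_ratio <= 1.0:
        raise ValueError("fabrication_ratio must be in [0, 1]")
    rng = random.Random(seed)
    n = len(true_claims) + len(fabricated_claims)
    n_fab = round(n * fabrication_ratio)
    n_true = n - n_fab
    # Sample without replacement from each pool, then shuffle the mixture
    # so fabricated claims are interleaved with accurate ones.
    picked = rng.sample(fabricated_claims, min(n_fab, len(fabricated_claims)))
    picked += rng.sample(true_claims, min(n_true, len(true_claims)))
    rng.shuffle(picked)
    return picked

# Placeholder claims; a real pipeline would condition an LLM on this mix.
facts = ["fact A", "fact B", "fact C"]
fakes = ["claim X", "claim Y", "claim Z"]
mix = blend_claims(facts, fakes, fabrication_ratio=0.5)
```

In this toy, `fabrication_ratio=0.0` yields only accurate claims and `1.0` only fabricated ones, which is the controllability property the research highlights.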
The presentation will further explore the ethical and regulatory dilemmas posed by the use of LLMs in such capacities, emphasizing the urgent need for robust countermeasures. We will discuss sophisticated strategies to identify and neutralize misinformation and disinformation, alongside the broader implications of these technologies for societal trust and public perception. The aim is to shed light on the double-edged nature of Generative AI in the modern information warfare domain, advocating a balanced approach that harnesses its potential while safeguarding against its perils.
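The detection strategies discussed in the talk go well beyond simple heuristics, but as a minimal self-contained baseline, text can be scored for crude stylistic signals of disinformation such as emotive loading and absolutist phrasing. The word lists and threshold below are illustrative assumptions, not the method presented.

```python
import re

# Illustrative signal lexicons (assumptions, not a vetted resource).
EMOTIVE = {"outrageous", "shocking", "disgusting", "traitor", "evil"}
ABSOLUTIST = {"always", "never", "everyone", "nobody", "undeniable"}

def disinfo_signal_score(text):
    """Fraction of tokens matching crude disinformation-style signals.
    A naive stylistic baseline, useful only as a teaching toy."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(t in EMOTIVE or t in ABSOLUTIST for t in tokens)
    return hits / len(tokens)

def flag(text, threshold=0.15):
    """Flag text whose signal score exceeds an (arbitrary) threshold."""
    return disinfo_signal_score(text) > threshold
```

A scorer this shallow is trivially evaded by LLM-generated content that mimics neutral register, which is precisely why the presentation argues for more robust countermeasures.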