The digital battlefield: how deepfake technologies are used to spread false information against Israel

The use of artificial intelligence, machine learning, and deepfake technologies to spread misleading information and create manipulative content has become all too common. Guest author Guy Horesh explains


By Guy Horesh

As we are seeing during the Swords of Iron War, the battlefield is not limited to physical borders; it extends into the media and digital space. The side seeking to challenge Israel's legitimacy is using advanced technologies such as artificial intelligence (AI), machine learning (ML), and deepfakes.

Using these technologies, false information is produced and distributed to create negative narratives and arouse anti-Israeli and anti-Jewish feelings amongst online communities.

According to a report recently published by the Zionist Organization, the Diaspora Ministry, and the Jewish Agency, since the beginning of the war, there has been a 500% increase in the number of anti-Semitic incidents worldwide.

Disinformation campaigns on social networks and in the media have a profound effect; it can be assumed that part of the sharp increase in incidents is related to the false information being spread online.

Therefore, it is important to understand what these technologies can do and to develop effective strategies to combat these phenomena while preserving the truth and preventing both fraud and the spread of false information.

The use of artificial intelligence, machine learning, and deepfake technologies to spread misleading information and create manipulative content has become common in various fields.

One of the main areas is AI-generated articles, posts, and videos that spread false narratives targeting Israel and the Jewish community.

With the help of advanced machine learning algorithms, these technologies make information accessible to different audiences while amplifying anti-Israel sentiments by adjusting the content to reinforce existing misconceptions.

In addition, deepfakes can be used to create audio and video clips that look highly credible but are completely fabricated. Supported by advanced algorithms, as already seen in attempts to influence the 2016 U.S. elections, this manipulative content integrates seamlessly into media platforms and social networks, gaining popularity and a significant following.

AI-powered bots and accounts also amplify biased information and spread it through algorithmic amplification (echoing) across platforms such as Twitter, Facebook, Instagram, and TikTok.

Together and separately, these cases highlight the use of advanced technologies to manipulate narratives, promote disinformation, deceive, and sow controversy regarding Israel and the Jewish community at this time.

How are advanced technologies used to spread false information?

Via content generated by artificial intelligence: AI-based natural language engines can generate false articles, blog posts, or social media content that perpetuates anti-Israel sentiments. AI-driven bots or accounts can flood platforms with biased or misleading information, causing it to be widely shared without verification.

ML algorithms and targeted messaging: Machine learning algorithms analyze social media user data to identify people who have the potential to absorb and adopt anti-Israel narratives. Campaigners can then tailor content to appeal to these specific demographics.

For example, targeting ads with anti-Israel content to users who are engaged in related topics, thus increasing their biases or misconceptions.

Deepfake technology: Deepfake videos can manipulate speeches or interviews of prominent Israeli figures, altering their words or actions to present them in a negative light. Fabricated footage purporting to show uninvolved victims on the other side can also be created.

Increasing social media resonance: AI algorithms used by social media platforms optimize content delivery based on user engagement. Biased or false information generates more clicks, comments, and shares, leading platforms to inadvertently boost such content. For example, a false narrative about Israel committing war crimes gains traction on social media and may reach millions due to algorithmic amplification.
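The amplification dynamic described above can be illustrated with a minimal sketch of engagement-based feed ranking. All posts, counts, and weights below are invented for illustration; real platform ranking systems are proprietary and far more complex.

```python
# Minimal sketch of engagement-based ranking: posts are scored purely by
# interaction counts, illustrating (under simplified, hypothetical
# assumptions) why sensational false content can outrank accurate content.

def engagement_score(post: dict) -> float:
    # Typical ranking signals: clicks, comments, shares (weights are made up).
    return post["clicks"] + 3 * post["comments"] + 5 * post["shares"]

posts = [
    {"id": "accurate-report", "clicks": 120, "comments": 10, "shares": 8},
    {"id": "sensational-fake", "clicks": 300, "comments": 90, "shares": 60},
]

# Sorting by raw engagement places the sensational item first,
# regardless of accuracy.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])
```

The point of the sketch is that nothing in the scoring function looks at truthfulness, so a ranker optimizing only for engagement will amplify whatever draws the most reactions.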

Detection and countermeasures: AI tools can be used to detect anomalies in content distribution. For example, sentiment analysis algorithms can flag suspicious increases in negative content related to Israel and help identify potential disinformation campaigns. In addition, there are initiatives to develop deepfake detection tools powered by artificial intelligence to detect manipulated media.
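One simple way to picture the anomaly-flagging idea is a z-score check on daily counts of negative-sentiment posts. This is only a sketch under strong simplifying assumptions; the daily counts are invented, and real detection systems combine many more signals.

```python
# Sketch: flag suspicious spikes in negative-sentiment post counts using a
# simple z-score threshold over a daily time series.
import statistics

def flag_spikes(daily_negative_counts, z_threshold=2.0):
    mean = statistics.mean(daily_negative_counts)
    stdev = statistics.stdev(daily_negative_counts)
    # A day is flagged when its count sits well above the series average.
    return [
        day for day, count in enumerate(daily_negative_counts)
        if stdev > 0 and (count - mean) / stdev > z_threshold
    ]

# Hypothetical week of counts; the last day shows a sharp spike.
counts = [40, 38, 45, 42, 39, 41, 180]
print(flag_spikes(counts))  # → [6]
```

A flagged day would then be handed to analysts to check whether the spike reflects a real event or a coordinated campaign.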

How do you deal with fake information?

Detection technologies that use AI and machine learning make it possible to identify images, videos, or audio created by deepfakes. It is important to keep developing new tools that help identify deepfakes and prevent their impact.
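A screening workflow built around such detectors might look like the following sketch. The scoring function, threshold, and probability here are entirely hypothetical stand-ins; an actual system would plug in a trained neural-network detector.

```python
# Hypothetical sketch of a deepfake-screening pipeline interface.
# The scorer is a stub; real detectors are trained models returning
# an estimated probability that the media is manipulated.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    filename: str
    fake_probability: float
    flagged: bool

def screen_media(filename: str, score_fn, threshold: float = 0.7) -> ScreeningResult:
    # score_fn stands in for a trained detector returning P(fake).
    p = score_fn(filename)
    return ScreeningResult(filename, p, p >= threshold)

# Stub scorer for demonstration only (the 0.92 value is invented).
result = screen_media("clip.mp4", lambda f: 0.92)
print(result.flagged)
```

Separating the pipeline from the scorer lets detection models be swapped or updated as deepfake techniques evolve, without changing the surrounding workflow.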

It is also helpful to raise public awareness of the risk and to train people to recognize fake digital content. At the state level, legislation can be promoted that limits the abuse of deepfake technologies and protects authentic content.

In conclusion, the use of artificial intelligence (AI), machine learning (ML), and deepfake systems poses a major challenge in the fight against misinformation and biased narratives. These technologies provide powerful tools for generating and spreading false information across online platforms.

Artificial intelligence algorithms can autonomously create content that mimics authentic human communication, making it genuinely difficult for users to distinguish legitimate information from fabricated information.

The rapid spread of fabricated content, aided by artificial intelligence-driven algorithms on social networks, increases the range of its influence and makes efforts to curb its spread even more difficult.

Technological advances such as deepfake amplify these challenges, enabling the creation of realistic and deceptive audio-visual content that blurs the lines between reality and fiction.

As a result, distinguishing truth from manipulated or fabricated content is becoming increasingly complex, hampering efforts to counter false narratives and protect the integrity of accurate information.

Countering deepfakes requires combining advanced technology, social policy, and appropriate legislation, along with training the public to identify fakes.

Guy Horesh is a pre-sales engineer (IS, App & Identity) at Bynet Data Communications
