
In this blog, our experienced Fivecast Tradecraft Team draws on their expertise in the Government Intelligence sector to explore how social media platforms have enabled violent extremists and terrorists to spread violent extremist rhetoric online. At the same time, this use of social media provides critical insights for intelligence teams that can use OSINT to detect and disrupt potential threats before they materialize into real-world violence.

How Modern Media Shapes the Narrative

In 1985, during a speech to the American Bar Association in London, the then Prime Minister of the United Kingdom, Margaret Thatcher, said, “…we must try to find ways to starve the terrorist and the hijacker of the oxygen of publicity on which they depend.”[1] She was referring to media coverage unintentionally helping terrorist groups by publicizing their attacks and providing a platform that spread their political message.

While there is no agreed-upon definition of terrorism, it can be described as violence, or the threat of violence, to coerce or intimidate the government or the public and advance a political, ideological, or religious cause.

Since that speech almost 40 years ago, the media, and how we communicate and access information, have changed dramatically. Previously, official news outlets limited what they would show and attempted to remain objective and unbiased. Today, newspapers and television have been far surpassed by digital news media and social media, which generate and distribute information faster and further than ever.

Navigating Social Media Before and After Terrorist Attacks

Many people also share and obtain news media from unregulated sites and platforms containing misinformation[2] and disinformation[3] (4Chan, 8Kun, Gab, dark web forums, etc.). Many of these unregulated sites and platforms also allow users to communicate via forums, comments, or chat rooms, which can fuel conspiracy theories, propaganda, and violent extremist ideology. When something like a terrorist attack occurs, these unregulated sites go into overdrive.

For example, following the 2019 terrorist attacks in New Zealand, when Brenton Tarrant killed 51 people during an attack on two Mosques, official media reporting was quick to report on the issue. However, you did not have to look too hard to view the livestreamed footage of the attack and the violent extremist ideology pushing it. Tarrant initially announced on 8kun (then known as 8chan) that he would be conducting an attack and provided the link to his livestream on a popular social media platform.


Mainstream social media platforms invest a lot of time and money to regulate the information on their sites. For example, one popular mainstream platform invests 5% of its overall revenue, or 3.7 billion US dollars annually, toward content moderation.[4] To put that into context, the money the platform spends on content moderation every year is more than the total annual revenue of X (previously known as Twitter).

Even though mainstream platforms invest in content moderation, Tarrant’s videos spread rapidly across them. One mainstream platform alone removed about 1.5 million videos of the attack globally within the first 24 hours.[5] Tens of thousands of videos were uploaded to YouTube, at a rate of one per second, in the immediate hours following the attack. People also used creative methods, such as editing the footage, to outsmart the platforms’ detection systems. The videos were often accompanied by violent extremist rhetoric that supported the attacks and dehumanized the victims. And even if mainstream social media were more successful at content moderation, many other social media sites and areas of the internet remain unregulated, accessible, and publicly available.

The Path to Violent Extremism

Violent extremism describes the beliefs and actions of a person or group who support the use of violence to achieve a social, political, or legal outcome, or to respond to a specific political or social grievance.[6]

A violent extremist could carry out a terrorist attack; however, they could also engage in other forms of violence, such as violent protests, to promote their beliefs and cause. The key point here is the word ‘violent’ in violent extremism. Individuals are free to hold extremist views about their ideology or religion; it is not until they decide to engage in violence on behalf of that ideology or religion that they become a threat.

Violent extremism can be broken into separate categories, usually depending on how a government defines it. In Australia, violent extremism is divided into:

  • Ideologically Motivated Violent Extremism (IMVE): often referred to as left- or right-wing extremism, but can include any ideology. Examples include racist or nationalist, environmental, and anarchist violent extremists.
  • Religiously Motivated Violent Extremism (RMVE): applies to any person or group engaging in violent extremism on behalf of any religion.

However, individuals do not just ‘become’ violent extremists overnight. Radicalization typically occurs through online platforms that provide a breeding ground for extremist ideologies. People can easily access social media groups, websites, dark web forums, and instant messaging groups saturated with extremist content. Engagement with these open-source platforms may push people toward violent extremism through contact with like-minded individuals or groups, and the platforms can create an echo chamber of narratives and propaganda that accelerates radicalization.


OSINT to Counter Terrorism and Violent Extremism

While the digital landscape may serve to fuel and promote terrorism and violent extremism, open-source intelligence (OSINT) can also provide critical insights for intelligence teams to understand this environment. Organizations can harness OSINT, transforming publicly and commercially available information into a powerful tool to identify and disrupt terrorism and violent extremism.

Law enforcement and intelligence organizations can use open-source information to better understand the different violent extremist groups, monitor their extremist narratives or changes in violent rhetoric, and identify anyone within the group becoming more radicalized towards violent extremism or planning to engage in terrorism. Likewise, other organizations can use OSINT to inform checks to help surface violent extremism or terrorist red flags. Checks can include employment suitability checks, Critical Infrastructure Background Checks, security clearance vetting, visa application checks, Maritime or Aviation Identification Card checks, firearm license checks and many more.

Leveraging the capabilities of Fivecast ONYX, intelligence teams can rapidly filter vast amounts of publicly and commercially available information. Covering the end-to-end intelligence process, Fivecast ONYX is an AI-enabled, scalable solution with the power to seamlessly and rapidly increase visibility of the threats posed, whether screening tens of thousands of people or conducting a deep-dive targeted investigation of a person of interest.

REFERENCES

  1. Margaret Thatcher Foundation Website.
  2. Misinformation can be defined as verifiably false, misleading, or deceptive content that can cause harm and is spread through ignorance, error, or mistake.
  3. Disinformation is similar to misinformation but is ‘deliberately’ spread for the purpose of causing harm by undermining trust in, and promoting confusion about, government institutions.
  4. Knowledge at Wharton, 2024, How Social Media Firms Moderate Their Content.
  5. 2019, Combating Terrorism Centre, The Christchurch Attacks: Livestream Terror in the Viral Video Age.
  6. Australian Security Intelligence Organisation, The terrorism and violent extremism threat in Australia.