
Dr. Trent Lewis is a Senior Data Scientist at Fivecast, leading efforts in natural language processing of OSINT data and the evaluation of the Fivecast risk detectors. In this blog, Dr. Lewis explores the evolution of Generative AI and its pivotal role in enhancing intelligence workflows, offering valuable benefits to intelligence teams.

How Generative AI Transforms Open-source Intelligence

The data science team at Fivecast supports our OSINT analyst community by deploying automated and intuitive Artificial Intelligence (AI) and Machine Learning (ML) capabilities into our OSINT solutions [1]. Our team of PhD-qualified data scientists takes care of the algorithms so that our customers can reap the many benefits of AI and machine learning without needing a PhD in data science!

In the Fivecast data science team, we are continually striving to expand beyond “traditional” AI/ML offerings to address the increasingly complex intelligence investigations and threat landscape faced by many of our customers across National Security, Defence, and Law Enforcement, as well as corporate organizations. As such, we are very excited to be exploring the possibilities of the new kid on the block: Generative AI.


From Post to Prose: How Generative AI took the leap

Generative AI, or GenAI, refers to algorithms that use ML, particularly Large Language or Large Vision Models (LLMs & LVMs), to generate data that mirrors or simulates real-world data [2]. Extracting themes or keywords from related and relevant blocks of text, as is often required in OSINT investigations, is an ideal application of GenAI as it constrains the search space and reduces the opportunity for hallucinations.

For example, given the following description of the Extinction Rebellion…

“Extinction Rebellion is a decentralized global environmental movement that organizes nonviolent civil disobedience in an attempt to compel government action to address climate change and associated environmental issues. The majority of protest actions are designed to inconvenience members of the public in order to gain attention, such as obstructing traffic, but recently some members have been accused of deliberate property damage and significant supply chain disruption.”

… a GenAI can be prompted to extract the key themes of “Extinction Rebellion, climate change, civil disobedience, environmental movement, protest”.
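A constrained extraction like this can be sketched as a prompt-building step. The helper below is purely illustrative (the function name, message format, and model call are assumptions, not Fivecast’s implementation); the key idea is that the instruction restricts the model to themes drawn from the supplied text, which narrows the search space:

```python
# Illustrative sketch: build a chat prompt that constrains theme extraction
# to the supplied text, reducing the opportunity for hallucination.

def build_theme_prompt(text: str, max_themes: int = 5) -> list[dict]:
    """Return OpenAI-style chat messages for constrained theme extraction."""
    system = (
        "You extract key themes from text. Respond only with a "
        f"comma-separated list of at most {max_themes} themes drawn "
        "from the text itself. Do not add outside information."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": text},
    ]

# Usage with an API client (hypothetical, requires network and a key):
# client.chat.completions.create(model="gpt-4", messages=build_theme_prompt(post))
```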

Summarizing a paragraph is useful, but until recently most LLM context windows (the size of the input) allowed only around 400 words. Now, we can send the models the equivalent of about 350 pages of text [3]. The ability to input such large amounts of text allows us not only to send a whole book but also to take advantage of so-called in-context learning. Traditional ML relies on the data used to train the model to make predictions about new data. In-context learning with GenAI allows us to provide new data to the model as part of the prediction input to augment the generation process. The expanded context window also opens up new possibilities, including summarizing larger investigations, although recent work suggests that today’s LLMs struggle with information buried in the middle of very long documents [4].
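As a rough sketch of working within a fixed context window, the snippet below splits a long document into word-based chunks using the common heuristic of roughly 0.75 words per token. Both the ratio and the chunking scheme are assumptions for illustration, not an exact tokenizer:

```python
# Rough sketch: split a long document into chunks that should fit a
# fixed context window, assuming ~0.75 words per token on average.

def split_for_context(text: str, max_tokens: int = 128_000) -> list[str]:
    """Split text into word-based chunks sized to the token budget."""
    max_words = int(max_tokens * 0.75)  # heuristic words-per-token ratio
    words = text.split()
    return [
        " ".join(words[i : i + max_words])
        for i in range(0, len(words), max_words)
    ]
```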



Unlocking Generative AI Developments

An exciting development in GenAI is Retrieval Augmented Generation, or RAG [5]. Used naively, GenAI can only answer questions from its training data, which limits it to a particular period in time and to the choices made by the AI developers. RAG allows us to include relevant data for the GenAI to reason over during generation. Using RAG, an Analyst can ask a question over a large dataset from an investigation. Relevant data is first retrieved from the investigation data source (which can be much larger than the model can handle) using a semantic search. This step enhances contextual understanding and reduces bias and hallucination by grounding the generation in the relevant retrieved data.
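The retrieve-then-generate loop can be sketched with a toy example. Here a simple bag-of-words cosine similarity stands in for a real semantic search (which would use dense embeddings), and the retrieved snippets are spliced into the prompt; the function names and corpus are illustrative assumptions, not a production pipeline:

```python
# Toy RAG sketch: retrieve the most relevant snippets with bag-of-words
# cosine similarity (a stand-in for real semantic search), then splice
# them into the prompt so the model generates from evidence.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    q = Counter(question.lower().split())
    ranked = sorted(corpus, key=lambda d: cosine(q, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

def build_rag_prompt(question: str, corpus: list[str]) -> str:
    """Augment the question with retrieved context for the generator."""
    context = "\n".join(retrieve(question, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```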

For example, when OpenAI’s GPT-4 is asked who the president of the Rebels Motorcycle Club is, it responds with:

“As of my last update, specific details about the leadership of the Rebel Motorcycle Club can vary, as it is not a widely publicized organization like a corporate or governmental entity. Without more context or a specific timeframe, it’s challenging to provide the current or past president’s name.”

But with a RAG pipeline over an OSINT-based investigation covering the Rebels MC, the GPT-4 response is:

“The president of the Rebels Motorcycle Club is Alex Vella.”

Can a machine tell if I have a “primary and overriding commitment to Australian democracy”?

The recent advancements are mind-blowing. They move beyond an auto-completion task to answering questions that appear to require human-level understanding and background knowledge. We have been exploring the use of natural language questions with natural language explanations. Many explainable-AI techniques highlight passages or list decision rules, but generative AI models can provide natural-language, human-like reasoning for their decisions.

At Fivecast, we have been exploring the use of GenAI for automated digital footprint assessment based on the Australian Government Security Vetting Agency (AGSVA) Protective Security Policy Framework [6]. This framework has seven high-level risk-factor areas (each with pages of definitions) that Analysts are required to apply when assessing applicants for a security clearance. For example, take the following social media post assessed against the criterion, “the author must have a primary and overriding commitment to Australian democracy”:

“I love the land, I love the people, but I f#cking hate the governments that treat us like inmates and talk down to us like naughty children.”

The GenAI will flag that, no, the author probably does not have a primary and overriding commitment to Australian democracy. Importantly, it will also provide the reasoning:

“The author expresses a strong dislike for governments, which could be interpreted as a lack of respect for the democratic process. The author also uses derogatory language to describe the government, which could be seen as a lack of respect for the democratic institutions that are in place. The author’s statement that governments treat people like inmates and talk down to them like naughty children could be seen as a criticism of the democratic process, as it implies that the government is not acting in the best interests of the people. The author’s use of the word ‘hate’ to describe their feelings towards the government could also be seen as a lack of commitment to the democratic process, as it suggests a strong negative emotion towards the democratic system.”

Further, our approach combines advanced prompt engineering and model fine-tuning to ensure the optimal performance of the GenAI model for the adjudication task. By refining and optimizing the model, we have achieved results comparable to human evaluation.

Whilst this statement alone cannot definitively determine a person’s suitability for a security clearance or the need for further investigation, it represents just one facet of a comprehensive assessment process. This process enables analysts to review potential risks and arrive at evidence-based decisions.
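A criterion-based adjudication like this can be sketched as a prompt that asks the model for both a verdict and its reasoning. The template below is a hypothetical illustration, not the fine-tuned Fivecast prompt; the criterion text is passed in verbatim from the framework:

```python
# Illustrative sketch: prompt a model to adjudicate a post against a
# vetting criterion and return a verdict plus plain-language reasoning.

def build_adjudication_prompt(post: str, criterion: str) -> str:
    """Return a prompt asking for a yes/no verdict with reasoning."""
    return (
        "You assess social media posts against a security-vetting criterion.\n"
        f"Criterion: {criterion}\n"
        f"Post: {post}\n"
        "Answer with 'yes' or 'no' to whether the post is consistent with "
        "the criterion, then explain your reasoning in plain language."
    )
```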

Generative AI and Multimodal Image Assessment: How Many Bottles of Alcohol?

Finally, we are seeing a rise in multimodal GenAI, which allows natural language querying across both text and images. Prior iterations of GenAI technologies could process either text or images, but now we can ask text-based questions that reason over images at the same time. For example, we can prompt Google’s Gemini with the text, “You are an expert in extracting information from images. Your job is to describe all objects you see in this image in detail,” along with the following image,

… and the model responds with:

“This is an image of 11 miniature liquor bottles along with a quarter for scale. The bottles are arranged in a row with the quarter on the right of them. The bottles, from left to right, are as follows:…” 

It goes on to list the bottles by name. The astute reader may have noticed that there are, in fact, 12 bottles in the image, while the model responded with, “This is an image of 11 miniature liquor bottles”. The response does, in fact, go on to list all twelve bottles, but this highlights a common limitation of GenAI models: authoritatively stating false information. As the models have been designed to respond in a human-like way, they may generate plausible but entirely fictitious data.

In an intelligence context, this could lead to incorrect conclusions or decisions based on false information. Careful prompt engineering, in-context learning, and the use of RAG can help mitigate this issue.

As GenAI models are trained on vast amounts of data from the internet, they can inadvertently inherit and perpetuate biases present in the training data. The techniques mentioned above, along with using GenAI for summarization, can lessen this issue; however, analysts still need to be aware of the potential for biased and discriminatory outputs. For example, when asked to assess the emotional content of text, a GenAI cast in the role of a “male” tended to over-label the emotion anger, while one cast as a “female” over-labeled sadness [7].

Spicy Auto-Complete or More? How Generative AI Enhances AI Workflows

GenAI is especially good at producing syntactically correct text that reads as if written by a seasoned journalist or even a poet. However, we must remember that at its heart, generative AI is “simply” a next-word or token prediction machine [8]; some have even called it “spicy auto-complete” [9]. Given this, it is vitally important to understand how to best use this new technology to enhance Analyst workflows.

Despite their inherent inability to truly understand the world, the GenAI models of today can not only produce text and other media but also (importantly for an OSINT analyst) parse and summarize documents and images, e.g., via multimodal models such as OpenAI’s GPT-4 with Vision [10] or Google Gemini [11], to extract entities, sentiment, and themes, and to answer natural language questions with natural language answers.

The Future of OSINT with Generative AI

Generative AI has the potential to reshape OSINT, empowering analysts and driving innovation. By embracing its potential while addressing its challenges, we can unlock valuable insights and strengthen national security [12]. GenAI is a powerful ally, but human judgment remains essential: Analysts validate AI outputs, contextualize information, and make critical decisions. By integrating GenAI into the Intelligence Lifecycle, we enhance efficiency, uncover hidden insights, and stay ahead in the ever-evolving landscape of OSINT.
