

Dr. Brenton Cooper, CEO of Fivecast presenting at the OSINT and AI event in Canberra, Australia

31st Mar, 2026 | BY JADE LOCKE - GLOBAL DEMAND GENERATION

Last week, Australian and Asia-Pacific intelligence professionals from government, industry, and academia gathered in Canberra, Australia for Fivecast’s OSINT Community of Interest event to explore the increasing importance of open-source intelligence (OSINT) and artificial intelligence (AI) and how they are reshaping intelligence work in practice.

Across keynotes, panels, and technical sessions, discussions moved beyond AI capability to focus on credibility: how AI outputs are trusted, governed, evaluated, and used responsibly in tandem with OSINT, particularly in environments where decisions carry real‑world consequences. The conversations reflected a shared understanding that AI is already embedded in intelligence workflows, but that its value now depends on discipline, transparency, and human judgement.

The following ten insights capture the most consistent themes from the day, outlining what the next phase of AI in open-source intelligence demands.

Artificial intelligence has crossed a threshold in intelligence work. It is no longer experimental, peripheral, or speculative. It is embedded across collection, processing, and analysis, all essential components of an OSINT investigation. What has not kept pace with this rapid increase in capability is confidence: not confidence in what AI can do, but confidence in how its outputs are generated, interpreted, governed, and acted upon.

This gap between capability and credibility defines the next phase of AI in open-source intelligence.

1. AI IN OSINT: Capability is no longer the differentiator

AI has dramatically reduced the cost of generating answers. Tasks that once required significant time and manual effort can now be completed at scale and speed. This has reshaped intelligence workflows and expectations around how open-source intelligence is consumed.

The Australian OSINT intelligence community watches a panel session on AI and OSINT capability.

Yet faster access to information has not made intelligence decisions simpler. Threat environments are more fragmented, more ambiguous, and more dynamic than before. Signals are dispersed across platforms and identities, and intent is often unclear until late in the escalation cycle.

“The challenge is not access to data. It’s recognising patterns and meaning in that data.”

– Dr Josh Roose, Associate Professor of Politics, Deakin University

Amongst the Australian and Asia-Pacific intelligence professionals in the audience, there was broad agreement that this shift places greater emphasis on interpretation, context, and judgement: areas where speed alone does not equal insight. AI accelerates exposure to uncertainty. It does not remove it. That makes credibility, not velocity, the decisive factor.

2. AI-driven outputs are persuasive by design

Modern AI systems, particularly large language models, are optimised to generate coherent, fluent, and contextually plausible outputs. This fluency is a feature, but it is also a risk.

These systems are probabilistic. They predict what an answer should look like based on patterns in data, rather than verifying the truth. As a result, they can produce responses that appear confident and authoritative even when they are incomplete, unsupported, or incorrect.

Dr. Josh Roose discusses disinformation, online influence, and the realities of modern information warfare at the Fivecast OSINT community of interest event in Australia.

In intelligence contexts, where confidence carries weight, this matters. The danger is not simply that AI can be wrong, but that it can be wrong convincingly.

The implication is clear. Intelligence‑grade AI cannot be treated as a black box. For mission-driven OSINT investigations, AI outputs must be explainable, traceable, and accompanied by signals of uncertainty so analysts can judge how much weight they should carry.

Read our AI E-Book

A Fivecast intelligence analyst presents OSINT best practices at the Masterclass session.

3. From performance to defensibility

As AI tools become easier to deploy, demonstrations and isolated success stories are no longer meaningful indicators of readiness. What matters is whether systems perform reliably under real‑world conditions, including when they should not return an answer at all.

Defensible intelligence requires systematic evaluation, especially in OSINT use cases across the Australian government and corporate sectors, which range from counter-terrorism to trafficking to financial crime and insider threat detection. AI systems must be tested against realistic data, assessed across multiple conditions, and examined for failure modes as well as strengths. Knowing where a system does not perform well is just as important as knowing where it does.

This approach protects analysts and decision‑makers. It ensures AI outputs can be questioned, challenged, and contextualised rather than accepted at face value.

Panelists discuss AI and OSINT during the event.

4. Trust is an operational requirement

Trust emerged as a defining issue, not as an abstract principle, but as an operational one.

For intelligence analysts, trust determines whether AI outputs are used appropriately, challenged, or over‑relied upon. For organisations, trust shapes how technology is embedded into workflows and decision‑making processes. Across government and industry, trust underpins collaboration and shared responsibility.

Trust is not built through capability alone. It is built through transparency, consistency, and clarity of intent. When AI systems are opaque or poorly governed, trust erodes quickly, particularly in the sensitive discipline of open-source intelligence. When limitations are acknowledged and outputs are explainable, trust can be sustained.

In this sense, trust is not a soft value. It is a prerequisite for effective intelligence operations.

Fivecast Data Scientist presents on ethical AI best practices.

5. Governance must evolve alongside technology

As AI capability accelerates, governance frameworks must evolve with it. Many existing models were designed for earlier forms of analytics and struggle to accommodate generative and agent‑based systems.

“As AI capability evolves, governance cannot remain static.”

– Dr Brenton Cooper, CEO, Fivecast

Duane Rivett and Dr Brenton Cooper discussing The Power of OSINT: Driving Public-Private Collaboration

Governance in this context must be active, contextual, and continuously reviewed. In the Australian OSINT industry, as in other regions around the world, legal compliance remains essential, but it is not sufficient on its own. Intelligence organisations must continually ask not only whether something can be done, but whether it should be done, and whether it is proportionate to the risk and consequence.

This does not require slowing innovation. It requires aligning innovation with accountability, particularly as AI systems become more capable and more embedded in decision‑making processes.

This concept was explored by Fivecast Senior Data Scientist Dr Sarah James in her article “The Future of Ethical OSINT”, which examines why ethical safeguards, explainability, and transparency are not constraints on intelligence capability, but prerequisites for public trust and long‑term legitimacy in AI‑enabled OSINT.

6. Methodology must come before AI in OSINT

This theme was explored during the investigations and risk panel moderated by Sam Pearce (Fivecast), with panelists Joe Morris (Control Risks), Davina Mansfield (Crime Stoppers International), Andrew Wright (Protegas), Matt Winlaw (EDD Group), and Shohei Sekiya (Japan Nexus Intelligence).

A consistent message from the discussion was that AI delivers the greatest value when it strengthens existing intelligence methodologies rather than attempting to bypass them. Panelists examined how AI is being applied across open-source intelligence, investigations, and protective security contexts, and where discipline is essential to avoid false confidence and operational risk.

Applying technology without clear methodological guardrails risks amplifying bias, misinterpretation, or over‑confidence. At the same time, excessive hesitation in adopting AI can also introduce risk by slowing response and limiting visibility. The balance lies in embedding AI within disciplined workflows, supported by human oversight and clear boundaries around where automation should stop.

A forward‑looking perspective on how disciplined OSINT tradecraft, ethical AI, and analyst‑led workflows will shape the year is explored in Fivecast CEO Dr Brenton Cooper’s “OSINT Predictions for 2026: Threats, Tradecraft, and What’s Next.”

Guest speakers from the Australian intelligence community discuss industry trends.

7. Confident AI outputs are not the same as reliable ones

A recurring technical concern raised throughout the day by Australian and Asia-Pacific intelligence professionals was the distinction between confidence and correctness in AI‑generated outputs. Modern AI systems can produce responses that sound authoritative and complete, even when they are based on assumptions or incomplete information.

This distinction matters operationally. In intelligence contexts, confidence can influence decision‑making, particularly when outputs are consumed quickly or under pressure. Without traceability or evidence, fluent answers risk being taken at face value, a serious hazard in OSINT investigations.

Intelligence‑grade AI must therefore make uncertainty visible. Analysts need to understand why an output was produced, what evidence supports it, and where its limitations lie. Reliability is built through transparency, not presentation.

A panel on the importance of public and private sector collaboration in the intelligence industry gets underway.

8. Defensible intelligence requires systematic evaluation

Another consistent theme was that AI systems cannot be treated as reliable simply because they perform well in demonstrations or isolated use cases. Defensibility comes from evaluation under realistic conditions.

This includes testing systems against representative datasets, understanding how they behave when inputs are ambiguous, and identifying scenarios where an AI system should not return an answer at all. Knowing where a system fails is just as important as knowing where it succeeds.

This evaluation discipline supports trust and governance. It allows intelligence organisations to deploy AI with a clear understanding of its strengths and limits, rather than relying on assumption or anecdote.

Panel moderated by Sam Pearce from the Fivecast Australia OSINT team with guest speakers from across the intelligence industry.

9. Public‑private collaboration depends on trust and shared language

These issues were examined in the public‑private collaboration panel moderated by Jake Ramsay (Fivecast), with Meg Tapia (Novexus), Derek Dalton (x-RD), Chris Taylor (ASPI), and Dr Brenton Cooper (Fivecast).

Discussions on collaboration repeatedly returned to the importance of trust, clarity, and shared understanding between government and industry as AI in OSINT becomes increasingly important for successful outcomes.

Effective collaboration requires more than technical capability. It depends on clearly defined roles, transparency about intent, and a common language around risk, proportionality, and responsibility. Where expectations are unclear or assumptions go untested, trust erodes quickly.

When collaboration works, it is because participants understand not only what tools can do, but how and why they are being used. This shared understanding is foundational to sustained public‑private partnership in intelligence settings.

Practical guidance on applying ethical OSINT frameworks in collaborative environments is outlined in Fivecast’s industry brief, Ethics & OSINT: Navigating Publicly Available Information.

Request the Industry Brief – Ethics & OSINT

Networking image with Jake Ramsay, Account Director for Fivecast Australia.

10. Analysts remain central to intelligence outcomes

Across every session, a clear consensus emerged. Technology augments intelligence work, but it does not replace human judgement.

AI can surface signals, reduce friction, and accelerate workflows. It cannot understand context, consequence, or intent in the way analysts do. As AI compresses time to insight, it increases, rather than reduces, the responsibility placed on human decision‑makers.

Analysts operate under conditions of uncertainty. Their role is not to simplify complexity, but to interpret it responsibly. Intelligence outcomes improve when technology supports this role, not when it attempts to bypass it.

Fivecast Team photo at the conclusion of the Community of Interest Event in Canberra, Australia.

Frequently Asked Questions

What does “AI in open-source intelligence” mean in practice?
AI is used to support open-source intelligence workflows such as large‑scale collection, pattern identification, and prioritisation. It augments analysts rather than replacing human judgement.

Why is credibility more important than capability now?
AI and OSINT capabilities have advanced rapidly, but intelligence decisions still require trust, accountability, and defensibility. Credibility comes from explainable outputs, evaluation, and governance.

What is the difference between confident and reliable AI outputs for OSINT investigations?
Confident outputs may sound authoritative, while reliable outputs are supported by evidence, traceability, and visible uncertainty.

Why does governance need to evolve with AI?
As AI systems become more generative and embedded in workflows, static governance frameworks are no longer sufficient for OSINT investigations.

Does ethical AI limit intelligence effectiveness?
No. Ethical safeguards and transparency enable trust and legitimacy, which are essential for sustained intelligence operations.

What role do analysts play in AI‑enabled intelligence?
Analysts remain responsible for interpretation and decision‑making, particularly in the realm of open-source intelligence. AI supports scale and speed, but human judgement remains central.

About Fivecast

Fivecast delivers intelligence solutions built for clarity, powered by AI, and trusted to surface what matters. Engineered to solve complex intelligence challenges, our platform cuts through digital noise to help those protecting nations, borders, businesses, and communities uncover critical insights – before risk becomes reality.

Trusted by agencies and enterprises across national security, law enforcement, defense, corporate security, and financial crime, Fivecast was born from collaboration between government and research institutions. Headquartered in Australia with a global footprint, we support the world’s most critical missions.

Fivecast. Engineered for Intelligence.

Request a Demo