
In this blog, Dr. Sarah James, Data Scientist at Fivecast, discusses the development of artificial intelligence (AI) and how ethical AI practices can help ensure high-quality technology that assists analysts in fields such as open-source intelligence (OSINT).

Responsible and trustworthy AI for OSINT

Artificial intelligence (AI) is fast becoming embedded in our lives and is shaping the future of many industries. While the benefits and promises of AI are attractive, the risks and challenges it poses for society create distrust of the evolving technology. In response to these challenges, many businesses have adopted responsible and trustworthy AI principles to hold their technology to the highest standards.

Within the field of open-source intelligence (OSINT) for defence intelligence, national security, law enforcement, and corporate security, developing responsible and trustworthy AI is crucial given the unwanted and potentially devastating consequences of an incorrect decision based on a poor AI prediction. Such consequences may include, but are not limited to, the following:

  • For use cases in security vetting, incorrect resolution of a person of interest may lead to loss of liberty or opportunity.
  • For the identification of threat actors, limited training data may create an unintentional bias against underrepresented demographics, leading to the misidentification of persons of interest; a simple per-group check of this kind is sketched after this list.
  • Across all use cases, poorly explained predictions may cause users to draw conclusions that are not supported by the data, which could negatively impact an investigation.
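
To make the misidentification risk concrete, error rates can be broken down by demographic group, so that underperformance on an underrepresented group is visible before a model is deployed. The following is a minimal, hypothetical Python sketch (the groups, data, and function are invented for illustration, not drawn from any production system):

    from collections import defaultdict

    def per_group_error_rates(groups, y_true, y_pred):
        """Report the misidentification (error) rate for each demographic group."""
        totals, errors = defaultdict(int), defaultdict(int)
        for group, truth, pred in zip(groups, y_true, y_pred):
            totals[group] += 1
            errors[group] += int(truth != pred)
        return {group: errors[group] / totals[group] for group in totals}

    # Hypothetical example: the underrepresented group "b" has fewer samples
    # and a far higher error rate -- a signal to rebalance the training data.
    rates = per_group_error_rates(
        groups=["a", "a", "a", "a", "b", "b"],
        y_true=[1, 0, 1, 0, 1, 0],
        y_pred=[1, 0, 1, 0, 0, 1],
    )
    print(rates)  # {'a': 0.0, 'b': 1.0}

A large gap between groups, as in this toy output, is exactly the kind of unintentional bias that needs to be corrected before deployment.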

Therefore, designing and developing responsible and trustworthy AI for use in OSINT is essential to enhance the vital skills of analysts and continuously provide accurate insights during investigations.

Principles of responsible and trustworthy AI

The design, development, deployment and operation of responsible and trustworthy AI is generally guided by five fundamental principles:

  • Bias and harm mitigation
  • Fairness
  • Reliability and safety
  • Accountability
  • Transparency and explainability

While the specific details surrounding ethical AI principles vary across fields and industries (see Australia’s AI Ethics Principles, the US DOD’s Principles of AI Ethics, and the UK MOD’s Defence AI Strategy for examples and further information), these core principles represent a broad global consensus on how ethical AI should be delivered.

These principles aim to build trust in AI technology through explainable and accurate predictions, to encourage ongoing support for AI solutions from customers and society, to improve the outcomes and predictions of AI, and to ensure that everyone can benefit from this constantly evolving technology.

Techniques for designing, developing, deploying and operating responsible and trustworthy AI in OSINT

As the availability of open-source data increases across Surface, Deep, and Dark Web platforms, the volume of data has grown beyond what analysts can interrogate manually. OSINT solutions therefore require advanced AI and machine learning capabilities to quickly uncover threats and provide meaningful insights to analysts. However, to ensure the best outcomes from AI and machine learning technologies, we must implement and continuously improve the ethical techniques that underpin our solutions.

Protect people and communities

From the initial stages of design and development through to deployment and operation, AI and machine learning models should continuously benefit individuals and societies across the world. OSINT is crucial for a diverse range of use cases including, but not limited to, counterterrorism, monitoring disinformation, combatting trafficking, and security vetting. Here at Fivecast, we develop AI technology that supports OSINT best practices, not only to protect human and societal well-being today but to enable a safer world into the future.

Hyper-enable analysts with augmented intelligence

While acceptance of the need for AI in our society and businesses is generally increasing, many people distrust the evolving technology because of the potential negative consequences of incorrect decisions. It is therefore crucial to design and develop accurate AI technology that augments and empowers human ability and keeps humans in control.

With the rise in data availability and complexity, intelligence analyst teams – even the most well-resourced ones – struggle to cope with the huge task of open-source data collection and analysis. Effective OSINT investigations increasingly need to rely on AI and machine learning to help analysts collect, monitor, and analyze masses of complex data and quickly find the needle in the haystack. At Fivecast, we work together with our customers to ensure our AI-enabled open-source intelligence solutions enhance rather than replace the vital skills of analysts and help speed up critical investigations. We also focus strongly on making our AI and machine learning technology intuitive and accessible to all users, so it can be leveraged without specialist data science skills.

Read our blog ‘Artificial Intelligence vs. Augmented Intelligence’ to learn more.

Design reliable, safe, and secure AI technology

It is essential that all sensitive data involved with AI and machine learning models is kept secure, used only for its intended purposes, never shared without permission, and safely deleted once no longer required. Furthermore, appropriate measures must be put in place to protect against adversarial attacks, data corruption, misuse of data and technology, and theft.
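
As a small illustration of the deletion requirement, a retention window can be enforced in code. The sketch below is hypothetical (the 90-day period and record format are invented for the example) and does not describe Fivecast's actual data-handling pipeline:

    from datetime import datetime, timedelta, timezone

    RETENTION = timedelta(days=90)  # hypothetical retention period

    def split_expired(records, now=None):
        """Separate records still inside the retention window from those
        due for secure deletion and audit logging."""
        now = now or datetime.now(timezone.utc)
        keep, expired = [], []
        for record in records:
            age = now - record["collected_at"]
            (keep if age < RETENTION else expired).append(record)
        return keep, expired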

With sufficient privacy, security, and data protection protocols in place, AI must also be safe and reliable, to protect people and to encourage trust in the evolving technology. Procedures must be established to ensure AI performs as intended, meets pre-defined accuracy and performance standards, and responds safely to new situations without introducing harm.
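
One common way to make "performs as intended" operational is a release gate that blocks deployment when a model misses its pre-defined standards on a held-out evaluation set. A minimal hypothetical sketch (the thresholds and metric names are invented for illustration):

    # Hypothetical pre-defined standards agreed before deployment.
    MIN_ACCURACY = 0.95
    MIN_PRECISION = 0.90

    def release_gate(metrics):
        """Return True only if every pre-defined standard is met."""
        checks = {
            "accuracy": metrics["accuracy"] >= MIN_ACCURACY,
            "precision": metrics["precision"] >= MIN_PRECISION,
        }
        for name, passed in checks.items():
            print(f"{name}: {'PASS' if passed else 'FAIL'}")
        return all(checks.values())

    if not release_gate({"accuracy": 0.97, "precision": 0.88}):
        raise SystemExit("Standards not met; model is not deployed.")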

All Fivecast AI technology solutions are designed and developed with high attention to detail and are closely monitored to ensure the protection and security of our company and customers. We have dedicated teams that ensure all data and models remain secure and uncorrupted and run efficiently in production. We also monitor and correct degradation in model performance, otherwise known as model drift, that may arise from changes in real-world data. Further information regarding data protection, privacy, and security can be found in The Ethical Use of AI in Open-Source Intelligence Programs.
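
One widely used technique for detecting drift of this kind is to compare the distribution of incoming production data against the training data, for example with the Population Stability Index (PSI). The sketch below uses numpy and simulated data; the 0.2 alert threshold is a common rule of thumb, not a Fivecast-specific setting:

    import numpy as np

    def population_stability_index(reference, production, bins=10):
        """PSI between training (reference) data and production data for one
        numeric feature; larger values indicate more drift. In this simple
        sketch, production values outside the reference range are ignored."""
        edges = np.histogram_bin_edges(reference, bins=bins)
        ref_counts, _ = np.histogram(reference, bins=edges)
        prod_counts, _ = np.histogram(production, bins=edges)
        # A small floor avoids division by zero in empty bins.
        ref_pct = np.clip(ref_counts / ref_counts.sum(), 1e-6, None)
        prod_pct = np.clip(prod_counts / prod_counts.sum(), 1e-6, None)
        return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

    rng = np.random.default_rng(0)
    training = rng.normal(0.0, 1.0, 10_000)
    live = rng.normal(0.5, 1.0, 10_000)  # simulated shift in real-world data
    psi = population_stability_index(training, live)
    if psi > 0.2:  # common rule-of-thumb alert level
        print(f"PSI = {psi:.2f}: significant drift -- investigate or retrain")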

Develop explainable and interpretable models

Adding to the complexities of open-source data and analysis, it is inherently difficult to trust the decision-making processes of AI when the trained models are black boxes or are otherwise difficult to interpret or explain.

Fivecast has a strong focus on delivering explainability and interpretability alongside each AI-driven capability in our solutions. This starts with accompanying every AI or machine learning model with a Model Card¹, which clearly describes:

  • use cases,
  • training and evaluation data sets,
  • detailed examples of output,
  • limitations,
  • metrics used to measure accuracy and performance,
  • performance analysis,
  • ethical considerations, and
  • recommendations.

In doing so, we minimise the chance of models being used in situations for which they are not well suited and help prevent users from drawing conclusions that are not supported by the data.
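
To make the structure concrete, the fields above can be captured in a simple data structure. The sketch below is a simplified, hypothetical Python rendering inspired by Mitchell et al.¹, not Fivecast's actual Model Card format:

    from dataclasses import dataclass, field

    @dataclass
    class ModelCard:
        """Simplified model card covering the fields listed above
        (after Mitchell et al., 2019)."""
        model_name: str
        use_cases: list[str]
        training_data: str
        evaluation_data: str
        example_outputs: list[str]
        limitations: list[str]
        metrics: dict[str, float]  # accuracy and performance measures
        performance_analysis: str
        ethical_considerations: list[str]
        recommendations: list[str] = field(default_factory=list)

    # Hypothetical card for an invented classifier.
    card = ModelCard(
        model_name="threat-content-classifier",
        use_cases=["Flagging extremist content for analyst review"],
        training_data="Labelled public social-media posts (description only)",
        evaluation_data="Held-out sample from the same collection",
        example_outputs=["post_id=123 -> label=flagged, score=0.87"],
        limitations=["English-language text only; unsuited to images"],
        metrics={"accuracy": 0.93, "f1": 0.91},
        performance_analysis="Accuracy varies with language register.",
        ethical_considerations=["Human review required before any action"],
        recommendations=["Use as a triage aid, never as the sole basis for decisions"],
    )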

As developers of AI technology in OSINT, we also have a responsibility to ensure that AI-based decisions are explained in terms understood by all users and stakeholders. Through ongoing research and a commitment to surfacing all available information from our models, we aim for Fivecast customers to be able to trust AI-based OSINT solutions much as we trust people who can explain the reasoning behind their decisions.
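
As a toy illustration of explanation in plain terms, the score of a simple linear model can be decomposed into per-feature contributions, so a user can see why a prediction was made. The feature names and weights below are invented for the example; real OSINT models and their explanation tooling are considerably richer:

    # Toy linear model: the risk score is a weighted sum of features, so
    # each feature's contribution to the final score can be reported directly.
    weights = {"threat_keywords": 2.1, "weapon_mentions": 1.4, "post_volume": 0.2}
    features = {"threat_keywords": 1, "weapon_mentions": 3, "post_volume": 12}

    contributions = {name: weights[name] * features[name] for name in weights}
    score = sum(contributions.values())

    print(f"total score: {score:.1f}")
    for name, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
        print(f"  {name}: {value:+.1f}")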

Ensure accountability and contestability

Though the development of laws and regulations regarding accountability for AI technology is ongoing, there remains an expectation, not only from Fivecast customers and stakeholders but from society at large, that Fivecast is accountable for all outcomes of the AI technology it delivers. With clear accountability comes a fair and accessible human review process through which the use or decisions of Fivecast AI technology can be challenged should anyone be negatively impacted.

AI and machine learning technology is an essential pillar of modern OSINT, allowing analysts to gain valuable insights from masses of data and to quickly filter and uncover threats. Here at Fivecast, we are focused on working with our customers across defence, national security, law enforcement, financial intelligence, and corporate security to deliver responsible and trustworthy AI and machine learning tools that enable a safer world.

References

¹ Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I.D. and Gebru, T. (2019). Model Cards for Model Reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency – FAT* ’19. doi:10.1145/3287560.3287596.