Safeguarding is an essential, and increasingly mandatory, element of protecting students and teachers in academic environments.
“The use of technology has become a significant component of many safeguarding issues. Child sexual exploitation; radicalisation; sexual predation: technology often provides the platform that facilitates harm. An effective approach to online safety empowers a school or college to protect and educate the whole school or college community in their use of technology and establishes mechanisms to identify, intervene in, and escalate any incident where appropriate.

The breadth of issues classified within online safety is considerable, but can be categorised into three areas of risk:

  • content: being exposed to illegal, inappropriate or harmful material; for example pornography, fake news, racist or radical and extremist views;
  • contact: being subjected to harmful online interaction with other users; for example commercial advertising as well as adults posing as children or young adults; and
  • conduct: personal online behaviour that increases the likelihood of, or causes, harm; for example making, sending and receiving explicit images, or online bullying”
    — UK Department for Education

Safeguarding solutions often involve text analysis, keystroke logging and screenshot capture. These processes generate huge quantities of potentially inappropriate visual content that must be manually reviewed. When inappropriate visual content appears on screen without any corresponding metadata, it can be very difficult to identify.

Image Analyzer can significantly reduce the time required to identify the display of inappropriate content within a screenshot, allowing moderators to quickly identify breaches of their usage policy.

Image Analyzer allows software vendors to integrate AI-powered visual threat intelligence into their safeguarding solutions. Delivered as an embedded feature or add-on module, it provides a competitive differentiator while adding incremental revenue from the existing customer base.


  • Advanced AI delivers high detection rates and near-zero false positives
  • Identifies sexually explicit and NSFW images
  • Improves human moderation team productivity
  • Reduces moderation queue by 90% or more
  • Human moderators review images based on risk
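The risk-based review workflow described above can be sketched in a few lines of code. This is a generic illustration, not Image Analyzer's actual API: the `Screenshot` type, the `triage` function and the `review_threshold` parameter are all hypothetical names, and the risk score is assumed to come from some visual classifier scoring images from 0.0 (benign) to 1.0 (high risk).

```python
from dataclasses import dataclass

@dataclass
class Screenshot:
    """Hypothetical record for one captured screen image."""
    image_id: str
    risk_score: float  # assumed 0.0 (benign) .. 1.0 (high risk), from a visual classifier

def triage(screenshots, review_threshold=0.5):
    """Split screenshots into a human-review queue (highest risk first)
    and an auto-cleared list, so moderators only see likely policy breaches."""
    review = sorted(
        (s for s in screenshots if s.risk_score >= review_threshold),
        key=lambda s: s.risk_score,
        reverse=True,
    )
    cleared = [s for s in screenshots if s.risk_score < review_threshold]
    return review, cleared

# Example: only the two high-risk captures reach the moderation queue,
# ordered by risk, while the benign capture is cleared automatically.
captures = [
    Screenshot("shot-001", 0.92),
    Screenshot("shot-002", 0.08),
    Screenshot("shot-003", 0.71),
]
review_queue, auto_cleared = triage(captures)
```

Because most captured screenshots in practice score as benign, filtering on a risk threshold like this is what lets the moderation queue shrink dramatically while keeping high-risk images at the front of moderators' attention.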