Safeguarding is an important and increasingly mandatory element of protecting teachers and students in an academic environment.
As the UK Department for Education puts it:
“The use of technology has become a significant component of many safeguarding issues. Child sexual exploitation; radicalization; sexual predation: technology often provides the platform that facilitates harm. An effective approach to online safety empowers a school or college to protect and educate the whole school or college community in their use of technology and establishes mechanisms to identify, intervene in, and escalate any incident where appropriate.”
“The breadth of issues classified within online safety is considerable, but can be categorized into three areas of risk:

- content: being exposed to illegal, inappropriate or harmful material; for example, pornography, fake news, racist or radical and extremist views;
- contact: being subjected to harmful online interaction with other users; for example, commercial advertising as well as adults posing as children or young adults; and
- conduct: personal online behavior that increases the likelihood of, or causes, harm; for example, making, sending and receiving explicit images, or online bullying.”
Available safeguarding solutions often rely on text analysis, keystroke logging and screenshot capture. These processes generate huge quantities of potentially inappropriate visual content that must be reviewed manually. When inappropriate visual content appears on screen with no corresponding metadata, it can be very hard to identify. Image Analyzer significantly reduces the time required to identify inappropriate content within a screenshot, allowing moderators to quickly spot breaches of their usage policy.
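To illustrate the triage idea, the sketch below splits a batch of captured screenshots into three queues by risk score, so only the nuanced middle band reaches human moderators. The `Screenshot` class, the `triage` function and the thresholds are hypothetical stand-ins, not Image Analyzer's actual API.

```python
# Hypothetical sketch of risk-score triage for captured screenshots.
# Scores and thresholds are illustrative; a real deployment would take
# scores from an image-analysis service.

from dataclasses import dataclass


@dataclass
class Screenshot:
    path: str
    risk_score: float  # 0.0 (benign) .. 1.0 (highly likely inappropriate)


def triage(screenshots, auto_flag_at=0.9, auto_clear_at=0.2):
    """Split screenshots into flagged, cleared, and human-review queues.

    High scores are flagged automatically, low scores are cleared,
    and only the ambiguous middle band goes to human moderators.
    """
    flagged, cleared, review = [], [], []
    for shot in screenshots:
        if shot.risk_score >= auto_flag_at:
            flagged.append(shot)
        elif shot.risk_score <= auto_clear_at:
            cleared.append(shot)
        else:
            review.append(shot)
    return flagged, cleared, review


batch = [
    Screenshot("desk01.png", 0.05),
    Screenshot("desk02.png", 0.55),
    Screenshot("desk03.png", 0.97),
]
flagged, cleared, review = triage(batch)
print(len(flagged), len(cleared), len(review))  # 1 1 1
```

With thresholds like these, the bulk of routine captures is cleared or flagged automatically, which is how a high-volume screenshot stream shrinks to a manageable human-review queue.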
Image Analyzer’s artificial intelligence-based content moderation technology for image, video and streaming media integrates with available safeguarding solutions, giving our customers a competitive differentiator and opportunities for incremental revenue growth.
- Helps organizations comply with online safeguarding regulations
- Protects brand reputation
- Reduces legal risk exposures and liabilities for educators and corporate organizations
- Improves efficiency and productivity of IT teams
- Supports internal audit and computer misuse investigations and can verify employee misconduct
- Protects moderators' mental health by filtering out high-risk visuals, reducing the volume requiring human review to nuanced content
- Advanced AI delivers high detection rates with near-zero false positives
- Identifies sexually explicit and NSFW images
- Reduces moderation queue by 90% or more
- Enables human moderators to review images prioritized by risk
- Integrates easily with available safeguarding solutions
- Highly scalable, handling growing volumes without affecting performance