Spectrum Labs Launches Content Moderation for Generative AI

Spectrum Labs, the leader in Text Analysis AI whose tools scale content moderation for games, apps and online platforms, announced the launch of the world’s first AI content moderation solution that detects and prevents harmful and toxic behavior produced by Generative AI. With the rise of Generative AI tools such as ChatGPT, Dall-E, Bard and Stable Diffusion, automated content creation can now be used to produce racist images and to spread hate speech, radicalization, spam, scams, grooming and harassment quickly, at massive scale and with little time investment by bad actors intent on misusing the new technology.

To begin addressing this issue, Spectrum Labs has developed a first-of-its-kind moderation tool for Generative AI content that helps platforms automatically protect their communities from this highly scalable adversarial content.

“Platforms were already struggling to sift through the mountains of user-generated content produced online each day, identifying and removing hateful, illegal and predatory material, before Generative AI came along. Now, whether you are a spammer, a child groomer, a bully or a recruiter for violent organizations, your job just got a lot easier,” said Justin Davis, CEO of Spectrum Labs. “Fortunately, our existing contextual AI content moderation tools can be adapted to address this new flood of content, because they were built to detect intent, not just a list of keywords or specific phrases, which Generative AI can easily avoid.”


Because Generative AI is designed to create plausible variations of human speech, traditional keyword-based moderation tools cannot tell that a piece of content is hateful if it never uses specific racist words or phrases (for example, a children’s story about why one race is superior to another, written without any racial slurs). Conversely, existing contextual models that detect sexual, threatening or toxic content but cannot recognize positive behaviors, such as encouragement, acknowledgment and rapport, would redact Generative AI responses about sensitive topics even when the content is intended to be helpful, supportive and reassuring (for example, when a user who has suffered sexual abuse seeks help finding psychological support resources).
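To make that contrast concrete, here is a minimal, purely illustrative Python sketch comparing an exact-match keyword blocklist with an intent-level classifier interface. The names BLOCKLIST, keyword_flag and contextual_flag, and the stand-in scoring function, are hypothetical; Spectrum Labs’ actual models and APIs are not public and are not represented here.

```python
# Illustrative sketch only: contrasts an exact-match keyword blocklist with an
# intent-level classifier interface. The classifier is a stand-in; Spectrum
# Labs' real models and APIs are not shown here.
from typing import Callable

BLOCKLIST = {"<slur-1>", "<slur-2>"}  # placeholder blocklisted terms

def keyword_flag(text: str) -> bool:
    """Flags text only when it contains an exact blocklisted word."""
    return any(word in BLOCKLIST for word in text.lower().split())

# An intent classifier takes the whole passage and returns a 0-1 harm score.
IntentClassifier = Callable[[str], float]

def contextual_flag(text: str, classify: IntentClassifier, threshold: float = 0.5) -> bool:
    """Flags text when the model's intent-level score crosses the threshold,
    even if no individual blocklisted keyword appears."""
    return classify(text) >= threshold

if __name__ == "__main__":
    story = "A children's story explaining why one group is superior to another."
    print(keyword_flag(story))                    # False: no blocklisted term present
    print(contextual_flag(story, lambda t: 0.9))  # True with a stand-in model score
```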

Even for image-based generative AIs such as Dall-E, automated detection and redaction of toxic human-generated prompts can prevent the creation of whole libraries of new AI-generated image and video content that is hateful, threatening, radicalizing and more, while preserving the real-time latency that makes the user experience of generative AI seem so magical.
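As a rough sketch of that prompt-gating architecture, the hypothetical code below checks a text prompt before it ever reaches an image-generation model and refuses generation when the prompt is flagged. moderate_prompt and generate_image are placeholder stubs for illustration, not real Spectrum Labs, Dall-E or Stable Diffusion APIs.

```python
# Hypothetical sketch: moderate the text prompt *before* image generation,
# so toxic prompts are redacted without adding noticeable latency.
# moderate_prompt() and generate_image() are stand-in stubs, not real APIs.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModerationResult:
    allowed: bool
    reason: str = ""

def moderate_prompt(prompt: str) -> ModerationResult:
    """Placeholder for a fast, intent-aware prompt check (always allows here)."""
    return ModerationResult(allowed=True)

def generate_image(prompt: str) -> bytes:
    """Placeholder for a call to an image-generation model."""
    return b"<image-bytes>"

def safe_generate(prompt: str) -> Optional[bytes]:
    """Gate generation on the moderation verdict; refuse flagged prompts up front."""
    verdict = moderate_prompt(prompt)
    if not verdict.allowed:
        return None  # redact: never send the prompt to the generator
    return generate_image(prompt)

if __name__ == "__main__":
    print(safe_generate("a watercolor painting of a lighthouse") is not None)  # True
```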

Future uses of multi-layer, real-time AI moderation of Generative AI could include detecting copyright violations, detecting bias within AI-generated content in order to filter out biased and problematic training data sources, and providing better analytics on what kinds of content people want to make and how it is used. For now, the company is focused on quickly providing a basic set of tools to help protect users and platforms from a potential tidal wave of toxic content.

SOURCE: PR Newswire
