
New AI tools aim to improve live-stream content moderation

While Facebook, Twitter, Google, and other popular web-service providers are busy deploying legions of people to mitigate online toxicity in the form of hate speech, bullying, and sexual and racial abuse, two lesser-known companies have come together in a new research and development project to try to resolve these problems in the live-streaming video industry.

The Meet Group (TMG), which develops software for interactive dating websites, and Spectrum Labs, which makes an AI-based audio-content moderation platform, on July 27 announced an expansion of their partnership to include a significant R&D commitment to voice moderation aimed at protecting users from online toxicity in TMG’s live-streaming applications.

The Meet Group owns several mobile social networking services including MeetMe, hi5, LOVOO, Growlr, Skout, and Tagged. The company has registered millions of mobile daily active users and facilitates tens of millions of conversations daily. Its mobile apps are available on iOS and Android in multiple languages.

Hate and personally abusive speech are increasing in many channels, as social-networking companies have reported. Voice moderation is currently a major challenge because recording all content is neither feasible nor privacy-friendly in an ephemeral live-streaming video context, TMG said. Existing methods of AI voice moderation are slow, tedious, and cost-prohibitive because they require voice content to be transcribed before text-based AI can be applied.

Recording, analyzing content at the right time

The Meet Group and Spectrum Labs are partnering to record content at the right time, detect toxicity proactively and cost-effectively, improve accuracy for moderators, and expand safety measures for users, TMG said.

“The method of monitoring live streaming video today is twofold,” TMG CEO Geoff Cook told ZDNet. “One is algorithmic sampling of the stream every five to seven seconds, analyzing it, and taking actions accordingly. The other is the report side; we have 500-plus moderators who are staffing this and putting eyes on the stream in less than a minute after that report button is tapped. We want to record and transcribe that content, analyze it based on what’s going on, index it potentially in some kind of category, take action on it, then make that transcription or recording available to the moderator.

“This R&D project is concerned with being more thoughtful about filling in the gaps in the existing moderation.”
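To make the sampling approach Cook describes concrete, the sketch below shows a loop that pulls a short clip every five to seven seconds and queues anything a classifier flags for human review. It is a minimal illustration only: the helper functions (grab_clip, looks_toxic, send_to_moderator) are hypothetical stand-ins, not The Meet Group's or Spectrum Labs' actual APIs.

```python
import random
import time


def grab_clip(stream_id: str) -> bytes:
    """Hypothetical stand-in for capturing a few seconds of a live stream."""
    return b""  # placeholder payload


def looks_toxic(clip: bytes) -> bool:
    """Hypothetical stand-in for the AI classifier's verdict on one clip."""
    return False


def send_to_moderator(stream_id: str, clip: bytes) -> None:
    """Hypothetical stand-in for queueing flagged content for human review."""
    print(f"Stream {stream_id}: clip queued for a human moderator")


def sample_stream(stream_id: str, duration_s: float = 60.0) -> None:
    """Sample the stream roughly every five to seven seconds, as Cook describes."""
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        clip = grab_clip(stream_id)
        if looks_toxic(clip):
            send_to_moderator(stream_id, clip)
        time.sleep(random.uniform(5, 7))  # the five-to-seven-second sampling window
```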

Voice-track recording will begin from two different triggers. The first fires when a report button is tapped: the tool will begin recording the voice track and automatically send it for analysis. The second fires based on comments in the chat: if those comments suggest an issue exists in the video, the live stream will be proactively reported and recording will begin automatically.
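The two triggers could be modeled roughly as below. The event handlers, the RecordingRequest structure, and the CHAT_KEYWORDS list are illustrative assumptions; in practice the chat-based trigger would rely on an AI model rather than a keyword list.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative assumption: phrases that would make the comment-based trigger fire.
CHAT_KEYWORDS = {"report this stream", "this is abusive", "someone stop this"}


@dataclass
class RecordingRequest:
    stream_id: str
    trigger: str  # "report_button" or "chat_comments"


def on_report_button(stream_id: str) -> RecordingRequest:
    """Trigger 1: a viewer taps the report button, so recording starts immediately."""
    return RecordingRequest(stream_id, trigger="report_button")


def on_chat_message(stream_id: str, message: str) -> Optional[RecordingRequest]:
    """Trigger 2: chat comments suggest something is wrong in the video."""
    if any(phrase in message.lower() for phrase in CHAT_KEYWORDS):
        return RecordingRequest(stream_id, trigger="chat_comments")
    return None
```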

If a content violation is believed to exist, the recording, the behavior flag, and the transcription, along with the live stream itself if it is still in progress, will be sent to one of The Meet Group’s 500-plus human moderators, who will review the content against the company’s Content and Conduct policy to determine whether it was violated.
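The hand-off to a human moderator could then bundle everything named above into a single review case, roughly as sketched here. The structure and field names are assumptions made for illustration, not details from the announcement.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModerationCase:
    stream_id: str
    recording: bytes
    transcription: str
    behavior_flag: str              # e.g. "hate_speech" or "sexual_harassment"
    live_stream_url: Optional[str]  # included only if the stream is still in progress


def build_case(stream_id: str, recording: bytes, transcription: str,
               behavior_flag: str, still_live: bool, stream_url: str) -> ModerationCase:
    """Assemble what a moderator needs to judge a possible Content and Conduct violation."""
    return ModerationCase(
        stream_id=stream_id,
        recording=recording,
        transcription=transcription,
        behavior_flag=behavior_flag,
        live_stream_url=stream_url if still_live else None,
    )
```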

Live-streaming usage increasing on social networks

Social, dating, and gaming companies are increasingly moving into live-streaming video to improve community engagement, Spectrum Labs CEO Justin Davis told ZDNet.

“With that shift comes a growing demand for effective moderation for voice,” Davis said. “With a billion minutes spent in its live-streaming platform per month and nearly 200,000 hours of content broadcast per day, The Meet Group is a fantastic partner with whom to work in deploying Spectrum’s toxic-voice detection and moderation platform to deliver best-in-class user safety controls for their moderation team and consumers alike.”

“User safety is fundamental to what we do, and effective moderation of live-streaming video requires effective moderation of all aspects of the stream, including voice, text chat, and video,” Cook said. “The combination of Spectrum’s technology and moderation solutions with our safety standards and processes creates what we believe is a model that others in the live-streaming video industry may look to follow.”

The expanded partnership announced July 27 also includes algorithmic moderation of all chats sent within The Meet Group’s live-streaming solution and AI private-chat moderation.

The algorithmic chat moderation, which will be available to The Meet Group’s apps as well as the company’s expanding list of vPaaS partners, will screen the nearly 15 million daily chats within the live-streaming feature for hate speech, sexual harassment, and other code-of-conduct violations, TMG said.
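As a rough illustration of per-message screening, the sketch below checks each chat message against a few violation categories. The categories mirror those named in the announcement, but the placeholder keyword matching simply stands in for Spectrum Labs’ actual models, which are not public.

```python
# Placeholder patterns standing in for the real AI classifiers; the categories
# mirror those named in the announcement.
VIOLATION_PATTERNS = {
    "hate_speech": ["<hateful phrase>"],
    "sexual_harassment": ["<explicit phrase>"],
}


def screen_chat(message: str) -> list:
    """Return the code-of-conduct categories a chat message appears to violate."""
    text = message.lower()
    return [
        category
        for category, patterns in VIOLATION_PATTERNS.items()
        if any(pattern in text for pattern in patterns)
    ]
```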


