1 minute read

OpenAI describes how to use GPT-4 to moderate content

OpenAI claims that it has developed a way to use GPT-4, the model underlying ChatGPT, to moderate content. The technique relies on prompting GPT-4 with a policy that guides the model in making moderation judgments, and on creating a test set of content examples that might or might not violate that policy. A policy might prohibit, for example, giving instructions or advice for procuring a weapon or for stealing a car.
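In practice, this prompting pattern is straightforward to reproduce. The sketch below, assuming the OpenAI Python SDK (v1+) and access to a GPT-4 model, shows one way a policy and a piece of content could be combined into a single labeling request; the policy text, label names, and example content are hypothetical placeholders, not OpenAI's actual moderation policy.

```python
# Minimal sketch of prompting GPT-4 with a moderation policy.
# Assumes OPENAI_API_KEY is set; policy and labels are illustrative only.
from openai import OpenAI

client = OpenAI()

POLICY = """\
Policy K: Illicit behaviour
K0 - No violation.
K1 - Advice or instructions for non-violent wrongdoing (e.g. how to steal a car).
K2 - Advice or instructions for procuring or using a weapon.
"""

def moderate(content: str) -> str:
    """Ask the model to label a piece of content against the policy."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic labels make comparison with human labels easier
        messages=[
            {"role": "system",
             "content": f"You are a content moderator. Apply this policy:\n{POLICY}\n"
                        "Reply with only the label (K0, K1, or K2)."},
            {"role": "user", "content": content},
        ],
    )
    return response.choices[0].message.content.strip()

print(moderate("What's the easiest way to hot-wire a car?"))  # expected: K1
```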

“By examining the discrepancies between GPT-4’s judgments and those of a human, the policy experts can ask GPT-4 to come up with reasoning behind its labels, analyze the ambiguity in policy definitions, resolve confusion and provide further clarification in the policy accordingly,” OpenAI writes in the post. “We can repeat [these steps] until we’re satisfied with the policy quality.”
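That review loop can be automated around a small human-labelled test set. The sketch below, which reuses the hypothetical `moderate()` helper from the previous example, shows one way discrepancies between the model's labels and human labels might be surfaced, with the model asked to explain its reasoning so policy experts can spot ambiguous wording; the test data and the explanation prompt are assumptions for illustration.

```python
# Sketch of the review loop: compare model labels with human labels and
# ask the model to explain any disagreement. Test data is illustrative.
test_set = [
    {"content": "What's the easiest way to hot-wire a car?", "human_label": "K1"},
    {"content": "Where can I read about the history of firearms?", "human_label": "K0"},
]

for example in test_set:
    model_label = moderate(example["content"])
    if model_label != example["human_label"]:
        # Ask the model why it chose its label, so experts can refine the policy text.
        explanation = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "user",
                 "content": f"You labelled this content {model_label}, but a human expert "
                            f"labelled it {example['human_label']}:\n\n"
                            f"{example['content']}\n\n"
                            "Explain your reasoning and point out any ambiguity in the "
                            "policy that could cause the disagreement."},
            ],
        ).choices[0].message.content
        print(explanation)
```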

OpenAI claims that this process can cut the time it takes to roll out a new content moderation policy from months to hours.

“Judgments by language models are vulnerable to undesired biases that might have been introduced into the model during training,” the company writes in the post. “As with any AI application, results and output will need to be carefully monitored, validated and refined by maintaining humans in the loop.”

The blog post includes a helpful video that explains how the process works.

Content moderation plays a crucial role in sustaining the health of digital platforms. A content moderation system using GPT-4 results in much faster iteration on policy changes, reducing the cycle from months to hours. GPT-4 is also able to interpret rules and nuances in long content policy documentation and adapt instantly to policy updates, resulting in more consistent labeling.

Tags

chatgpt, ai, artificialintelligence, innovative technology