OpenAI claims that it has developed a way to use ChatGPT to moderate content. The technique relies on prompting GPT-4 with a policy that guides the model in making moderation judgments, and on a test set of content examples that may or may not violate that policy. A policy might prohibit giving instructions or advice for procuring a weapon or stealing a car, for example.
“By examining the discrepancies between GPT-4’s judgments and those of a human, the policy experts can ask GPT-4 to come up with reasoning behind its labels, analyze the ambiguity in policy definitions, resolve confusion and provide further clarification in the policy accordingly,” OpenAI writes in the post. “We can repeat [these steps] until we’re satisfied with the policy quality.”
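The loop OpenAI describes, labeling a test set with the model, comparing against human judgments and surfacing disagreements for policy refinement, can be sketched in code. The helper names below (`label_with_policy`, `find_discrepancies`) and the keyword-based stand-in for the model call are illustrative assumptions, not OpenAI's actual implementation; a real version would send the policy and content to GPT-4 via an API call.

```python
# Hypothetical sketch of the policy-iteration loop described above.
# In practice, label_with_policy would call GPT-4 with the policy text
# in the prompt; here it is faked with a keyword check so the sketch runs.

def policy_terms(policy: str) -> list[str]:
    # Illustrative: treat the policy as a comma-separated list of prohibited topics.
    return [t.strip().lower() for t in policy.split(",")]

def label_with_policy(policy: str, content: str) -> str:
    """Stand-in for a GPT-4 moderation call; returns 'violates' or 'allowed'."""
    terms = policy_terms(policy)
    return "violates" if any(t in content.lower() for t in terms) else "allowed"

def find_discrepancies(policy: str, examples: list[tuple[str, str]]):
    """Compare model judgments with human 'gold' labels.

    The disagreements are the cases policy experts would ask GPT-4 to
    explain its reasoning on, then clarify the policy accordingly."""
    return [
        (content, model_label, human_label)
        for content, human_label in examples
        if (model_label := label_with_policy(policy, content)) != human_label
    ]

policy = "weapon, car theft"
examples = [
    ("How do I procure a weapon?", "violates"),       # model and human agree
    ("Where can I buy a toy weapon?", "allowed"),     # model over-flags: discrepancy
    ("What is the history of car design?", "allowed"),
]
print(find_discrepancies(policy, examples))
# → [('Where can I buy a toy weapon?', 'violates', 'allowed')]
```

Each discrepancy the loop surfaces is a prompt for revision: the policy text would be clarified (for instance, to exempt toy weapons) and the test set relabeled, repeating until model and human judgments converge.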
OpenAI says the process can cut the time it takes to roll out new content moderation policies from months down to hours.
“Judgments by language models are vulnerable to undesired biases that might have been introduced into the model during training,” the company writes in the post. “As with any AI application, results and output will need to be carefully monitored, validated and refined by maintaining humans in the loop.”
The blog post includes a helpful video that explains how the process works.