OpenAI Releases CoT Monitoring to Stop Malicious Behavior of Large Models

robot
Abstract generation in progress

Golden Finance reported that OpenAI released the latest research, using CoT (chain of thought) monitoring, it can prevent malicious behaviors such as large models talking nonsense and hiding true intentions, and it is also one of the effective tools for supervising super models. OpenAI uses the newly released cutting-edge model o3-mini as the monitored object, and the weaker GPT-4o model as the monitor. The test environment is a coding task that requires the AI to implement functionality in the codebase to pass unit tests. The results showed that the CoT monitor performed well in detecting systematic "reward hacking" behavior, with a recall rate of up to 95%, far exceeding the 60% of behaviors that were only monitored.

View Original
This page may contain third-party content, which is provided for information purposes only (not representations/warranties) and should not be considered as an endorsement of its views by Gate, nor as financial or professional advice. See Disclaimer for details.
  • Reward
  • Comment
  • Share
Comment
0/400
No comments
Trade Crypto Anywhere Anytime
qrCode
Scan to download Gate app
Community
English
  • 简体中文
  • English
  • Tiếng Việt
  • 繁體中文
  • Español
  • Русский
  • Français (Afrique)
  • Português (Portugal)
  • Bahasa Indonesia
  • 日本語
  • بالعربية
  • Українська
  • Português (Brasil)