
Recent Assessment Shows Lack of Transparency in AI Models

A recent transparency index evaluating 10 AI foundation models has sounded a clarion call for greater disclosure in the AI industry. Researchers from Stanford, MIT, and Princeton contend that without more transparency about the inner workings, training data, and downstream impacts of advanced AI tools, understanding and mitigating the associated risks will remain elusive. Self-regulation has proven inadequate: leading companies have grown more secretive, citing competitive pressures and safety concerns. The researchers argue that transparency should be a central focus of AI legislation, emphasizing the urgency of making AI development more comprehensible and accountable.

A damning assessment of 10 key AI foundation models in a new transparency index is stoking new pressure on AI developers to share more information about their products, and on legislators and regulators to require such disclosures.

Why it matters: The Stanford, MIT and Princeton researchers who created the index say that unless AI companies are more forthcoming about the inner workings, training data and impacts of their most advanced tools, users will never be able to fully understand the risks associated with AI, and experts will never be able to mitigate them.

Tags

artificial intelligence, ai, machine learning, ai governance