Anthropic scientists expose how AI actually ‘thinks’ — and discover it secretly plans ahead and sometimes lies
Anthropic has developed a new method for peering inside large language models like Claude, revealing for the first time how these AI systems process information and make decisions.
The research, published today in two papers (available here and here), shows that these models are more sophisticated than previously understood: they plan ahead when writing poetry, use the same internal representations to interpret concepts regardless of language, and sometimes even work backward from a desired outcome instead of simply building up from the facts.
The work, which draws inspiration from neuroscience techniques used to study biological brains, represents a significant advance in AI interpretability. This approach could allow researchers to audit these systems for safety issues that might remain hidden during conventional external testing.
“We’ve created these AI systems with remarkable capabilities, but because of how they’re trained, we haven’t understood how those capabilities actually emerged,” said Joshua Batson ...
Copyright of this story solely belongs to VentureBeat.