Have you ever wondered if a computer could really get inside your head, not in a sci-fi way, but in a way that actually understands what you might do next? That is exactly what is happening with Centaur, a new artificial intelligence that is starting to predict human behaviour with a level of accuracy that is making scientists sit up and take notice.
How Centaur learns about us
Centaur is not just another chatbot. In fact, it has been trained on a mountain of real-world data, more than 60,000 people making over 10 million decisions in all kinds of situations. This includes everything from memory games to tricky moral dilemmas, all boiled down into simple language so the AI could learn what people actually do, not just what they say they will do.
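That translation step, turning a raw experimental trial into plain language, can be pictured with a small sketch. Everything here is illustrative: the wording, field names, and task are assumptions, not the study's actual data format.

```python
def trial_to_prompt(option_a, option_b, chosen):
    """Render one decision trial as a plain-language line of text.

    The phrasing is a made-up example of the general idea, not the
    dataset's real template.
    """
    return (
        f"You can choose option A ({option_a}) or option B ({option_b}). "
        f"You choose option {chosen}."
    )

# A simple risky-choice trial rendered as text the model can learn from:
line = trial_to_prompt("a sure gain of 5 points",
                       "a 50% chance of 12 points",
                       "B")
print(line)
```

A model trained to continue text like this ends up predicting the choice itself, which is how saying "what people actually do" becomes an ordinary language-modelling task.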
Researchers took a powerful language model and gave it a crash course in human psychology. They did not start from scratch, either. They used Meta’s Llama 3.1 as a base, then fine-tuned it with a parameter-efficient training method that updated only a small fraction of the model’s weights, the parts that matter most for predicting behaviour. The whole process took less than a week, but the results are surprising.
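The article does not spell out the training recipe, but "fine-tuning only the bits that matter" is the hallmark of parameter-efficient methods such as low-rank adaptation (LoRA). A minimal NumPy sketch of that general idea, assuming a single weight matrix for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight: never updated during fine-tuning.
d, k, r = 64, 64, 2            # layer dims, with low-rank bottleneck r << d
W = rng.standard_normal((d, k))

# Trainable low-rank adapter: only A and B would be updated.
# B starts at zero so the adapted layer initially equals the original.
A = rng.standard_normal((r, k)) * 0.01
B = np.zeros((d, r))

def adapted_forward(x):
    """Layer output with the low-rank update W + B @ A applied."""
    return x @ (W + B @ A).T

x = rng.standard_normal((1, k))
# Before any training, the adapter changes nothing:
assert np.allclose(adapted_forward(x), x @ W.T)

# Fine-tuning touches only A and B: r*(d+k) numbers instead of d*k.
print("trainable:", r * (d + k), "vs full layer:", d * k)
```

Updating 256 numbers instead of 4,096 per layer is what makes a training run of days rather than months plausible on a large base model; the real study's exact configuration may differ.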
Centaur was tested on a huge variety of psychological experiments, and it did not just spit out plausible-sounding guesses. When put to the test, it outperformed the classic psychology models that experts have trusted for years. How? It could predict what people would do even when the rules of the game changed, or when it faced a task it had never seen before. In some cases, it even started to behave like a real participant, making decisions that felt genuinely human.
What makes this AI different
One of the most surprising things about Centaur is that its internal workings started to line up with patterns found in actual brain scans. No one told it to mimic the brain, but the more it learned about human choices, the more its internal representations came to resemble ours. It even helped scientists spot a new decision-making strategy that the experts had missed before.
Centaur could end up being useful in all sorts of places. Think about smarter apps that actually understand how you learn, or tools that help doctors spot when someone is struggling. Of course, there are big questions too. If an AI can predict your choices, how much privacy do you really have? And who gets to decide how this kind of technology is used?
For now, the team behind Centaur is working on making it even better, adding more voices and more types of decisions so it does not just reflect one slice of humanity. They have opened up their work so other researchers can build on it, hoping to create a tool that helps us all understand ourselves a little better.
The Centaur study was published in the journal Nature on July 2, 2025. The research was led by a team at the Institute for Human-Centered AI at Helmholtz Munich.