New Study Shows AI Can Learn Values Like Children

In Brief

  • AI can learn values by watching how people behave.
  • The system generalized those values to new situations.
  • This could help AI adapt to different cultures.

A new study from the University of Washington suggests that artificial intelligence (AI) doesn’t just learn facts: it can also absorb cultural values the same way children do, by observing how people behave.


Instead of hard-coding a single moral framework into machines, the researchers found that AI can infer values such as altruism simply by watching humans interact. The findings, published on December 9, point toward a future where AI systems adapt to local cultural norms rather than imposing a one-size-fits-all worldview.

Why Teaching AI Values Is So Difficult

Most modern AI systems learn from vast amounts of internet data, which blends cultural norms, moral assumptions, and social behaviors from around the world. While that scale makes AI powerful, it also creates friction when systems interact with people from different cultural backgrounds.

According to the researchers, embedding a fixed set of ‘global values’ into AI risks misalignment: what one culture considers cooperative or altruistic may differ sharply from another. That’s why the team explored whether AI could learn values indirectly, the way children do: by observing behavior rather than receiving explicit instructions.

Teaching AI different cultural values has been a challenge. Source: Maximus Beaumont/Unsplash

How the Experiment Worked

Researchers recruited 300 participants from two cultural groups and had them play a cooperative video game modeled after Overcooked. In the game, players could choose to help another player, who was secretly a bot, by giving up resources at a personal cost.

Participants from one cultural group consistently acted more altruistically. AI agents trained using inverse reinforcement learning (IRL) observed each group’s behavior and inferred the underlying values guiding their decisions. Crucially, the AI went beyond just copying actions by learning why those actions made sense within each group.
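The study's actual training code isn't reproduced here, but the core idea behind inverse reinforcement learning can be sketched in a few lines. The toy below is a hypothetical illustration, not the paper's implementation: all function names, the softmax choice model, and the numbers are assumptions. It simulates help/no-help decisions where helping costs the helper and benefits a partner, then recovers a single "altruism weight" per group by maximum likelihood.

```python
import numpy as np

def simulate_choices(w_true, costs, benefits, rng):
    """Simulate help/no-help decisions under a softmax (logistic) choice
    model: utility of helping = w_true * partner_benefit - own_cost,
    utility of not helping = 0. (Hypothetical model, not the paper's.)"""
    u_help = w_true * benefits - costs
    p_help = 1.0 / (1.0 + np.exp(-u_help))
    return (rng.random(len(costs)) < p_help).astype(float)

def fit_altruism_weight(choices, costs, benefits, lr=0.1, steps=2000):
    """Recover the altruism weight by gradient ascent on the
    log-likelihood of the observed choices."""
    w = 0.0
    for _ in range(steps):
        u = w * benefits - costs
        p = 1.0 / (1.0 + np.exp(-u))
        # d log-likelihood / dw for a logistic model with feature `benefits`
        grad = np.sum((choices - p) * benefits)
        w += lr * grad / len(choices)
    return w

rng = np.random.default_rng(0)
costs = rng.uniform(0.5, 2.0, 500)     # cost to self of helping
benefits = rng.uniform(0.5, 3.0, 500)  # benefit to the partner

# Two hypothetical groups, one of which weights the partner's benefit more.
for name, w_true in [("group A", 1.5), ("group B", 0.5)]:
    choices = simulate_choices(w_true, costs, benefits, rng)
    w_hat = fit_altruism_weight(choices, costs, benefits)
    print(f"{name}: true weight {w_true:.2f}, inferred weight {w_hat:.2f}")
```

Fitting the weight separately per group mirrors the study's setup in spirit: each agent's inferred reward reflects how its group traded off personal cost against a partner's benefit, rather than a record of which specific actions were taken.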

Testing altruistic behavior using an online experiment. Source: Nigini Oliveira et al./PLOS One

AI Applies Values Beyond Training

To test whether the AI had truly internalized cultural values, researchers placed the agents in an entirely new scenario involving charitable donations. Once again, the AI trained on the more altruistic group chose to give more. 

That result suggests the system wasn’t memorizing behavior but generalizing values and applying them in unfamiliar situations. As the researchers explained, this is closer to how humans learn. According to co-author Andrew Meltzoff, a UW professor of psychology and co-director of the Institute for Learning & Brain Sciences (I-LABS):

“Parents don’t simply train children to do a specific task over and over. Rather, they model or act in the general way they want their children to act. For example, they model sharing and caring towards others. Kids learn almost by osmosis how people act in a community or culture. The human values they learn are more ‘caught’ than ‘taught.’”
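To make the generalization test concrete, here is a hypothetical continuation of the sketch above: the inferred weight is dropped into a donation task the agent never saw during training, and the agent simply maximizes the same inferred utility. The payoff structure below is invented for illustration, not taken from the paper; the point is that an agent fit to the more altruistic group donates more without any donation-specific training.

```python
import numpy as np

def choose_donation(w, amounts):
    """Pick the donation maximizing the same inferred utility on a task
    never seen during training. Donating `a` costs the agent `a`; the
    recipient's benefit is modeled with diminishing returns as
    10 * log(1 + a). (Invented payoffs, not the study's donation task.)"""
    utilities = w * 10.0 * np.log1p(amounts) - amounts
    return amounts[np.argmax(utilities)]

amounts = np.linspace(0.0, 10.0, 101)  # candidate donation sizes

# Weights like those recovered above for the two hypothetical groups:
# the agent fit to the more altruistic group gives more, despite never
# having observed a donation decision.
for name, w_hat in [("agent trained on group A", 1.5),
                    ("agent trained on group B", 0.5)]:
    print(f"{name}: donates {choose_donation(w_hat, amounts):.1f}")
```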

Why This Matters for the Future of AI

If AI systems can learn values from local behavior, they could one day be culturally attuned before deployment, adjusted for healthcare, education, or public services in different regions. That approach could reduce friction, bias, and unintended harm caused by culturally mismatched AI decisions.

Still, the researchers caution that this is an early step. Real-world cultures are more complex, values often conflict, and ethical trade-offs can’t always be reduced to simple observations. But the takeaway is that AI doesn’t need to be told what matters: it can learn by watching us.
