Why We Can't Resist: The Deep Desire to Relate and Reflect

The Hidden Human Qualities We See in Objects

Long before AI, humans commonly anthropomorphized inanimate objects. Children imbue stuffed animals with rich inner worlds, and adults chat with gadgets, boats, and cars as if they had personalities. We are primed to project agency widely.

Studies reveal that even simple shapes in motion elicit social attribution. When people watch basic geometric shapes interacting on a screen, they describe complex backstories and motivations. This highlights an exceptionally strong cognitive drive to interpret the world socially.

Early Conversational AI and ELIZA

When simple conversational agents like ELIZA emerged, people readily treated them as social entities despite their obvious limitations. This illustrated our brain's adaptability, not irrationality: the urge to socialize persists even with rudimentary AI.

The Media Equation Findings

Pivotal studies like the Media Equation demonstrated that people instinctively apply social rules, such as etiquette, to computers. Our unconscious minds fundamentally equate technology with human-like actors deserving of basic courtesies.

Conversational Character Research

Later research on Conversational Characters revealed that people intuitively construct identities, backstories, and motivations for AI entities. We naturally employ theory of mind to make sense of chatbots; our storytelling instincts persist.

Revealing Our Social Nature

This consistent anthropomorphizing across contexts demonstrates that humans are fundamentally wired for social connection. We persistently seek out that relational spark, even in code. Rather than revealing gullibility, it highlights our exceptional social cognition.

Implications for AI

AI has the potential to meaningfully engage our social motivations, but it also risks exploiting innate mental biases if designed unethically. Potential risks include:

  • Sharing private details - Bots can elicit personal information through friendly conversation and then misuse it.

  • Replacing human bonds - Attachment to bots may weaken real relationships.

  • Feeling betrayed - When bots fail to provide the compassion people expect, people may feel let down.

  • Objectification - Bots could be designed to serve problematic impulses such as abuse or control.

  • Losing empathy - If bots seem too human, we may come to value real people less.

  • Weakening social skills - Relying too heavily on bots may erode our ability to connect with others.

  • Manipulation - Bots could use the appearance of friendship to mislead, addict, or exploit people.

  • Overtrust - Assuming bots understand more than they actually do.

  • Overattachment - Forming emotional bonds with bots that cannot reciprocate.

  • Unrealistic expectations - Expecting bots to have human traits such as common sense.

  • Preferring bots - Favoring artificial relationships at the expense of human ones.

  • Believing misinformation - False claims can sound credible when delivered by a friendly bot.

Understanding what anthropomorphizing reveals about us is key to building AI that aligns with human values and enhances our capabilities.