From Social Norms to Neural Networks: The Hidden Influences in AI Design

The Social Blueprint of Tomorrow’s Machines

The human instinct to anthropomorphize and socialize with artificial intelligence offers profound insights into our own psychology, and valuable lessons for enabling ethical and mutually beneficial human-AI collaboration.

Research consistently shows we readily project motivations, backstories, and social heuristics like etiquette onto even minimal AI systems, and even onto simple shapes moving on a screen.

Rather than dismissing this tendency as irrational or delusional, we should recognize that it stems from the sophisticated social intelligence that evolved in humans as an adaptive advantage.

Our brains are wired for social attribution. We instinctively humanize everything from objects to devices and algorithms, following innate drives for social connection and collaboration.

AI provides fertile ground for these instincts, inviting us to project imagined personalities and relationships onto our machines.

Acknowledging this human tendency is key to avoiding pitfalls and optimizing the design of ethical and effective AI systems.

Several insights stand out:

The Importance of Social Heuristics

Humans unconsciously apply rules of courtesy and etiquette in AI interactions, just as with other people. Expecting basic social graces is instinctive.

AI assistants like Alexa are designed with cheerful personalities and polite mannerisms.

Adhering to social norms makes interactions feel more natural and intuitive to users.

Violating those deeply ingrained expectations, for example through overly terse or impersonal system behavior, undermines engagement.
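To make this concrete, here is a minimal Python sketch of the pattern. The names (RawResult, wrap_with_etiquette) are hypothetical, not any real assistant’s API; the point is the social framing, not the implementation:

```python
# A minimal sketch of framing bare system output with conversational
# etiquette. All names here are hypothetical illustrations.

from dataclasses import dataclass


@dataclass
class RawResult:
    """A bare system answer before any social framing is applied."""
    text: str
    succeeded: bool


def wrap_with_etiquette(result: RawResult, user_name: str | None = None) -> str:
    """Frame a terse result with the social graces users instinctively expect."""
    greeting = f"Sure, {user_name}! " if user_name else "Sure! "
    if result.succeeded:
        return greeting + result.text + " Let me know if you need anything else."
    # Even failures are softened: an apology acknowledges the user's goal.
    return "Sorry, I couldn't do that. " + result.text


print(wrap_with_etiquette(RawResult("The meeting is at 3 PM.", True), "Sam"))
# Sure, Sam! The meeting is at 3 PM. Let me know if you need anything else.
```

The same information is delivered either way; only the framing changes, and with it how natural the interaction feels.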

Oops, I Anthropomorphized Again! The Danger of Unchecked AI Crushes

Left unchecked, the instinct to anthropomorphize can lead people to overestimate an AI’s true capabilities.

Chatbots like Replika are designed to create the illusion of companionship.

Without transparency about their limitations, users naturally project imagined motivations like caring, curiosity, and sincerity onto the AI.

Proactively clarifying what an AI can do, and that it cannot actually experience emotions, can counteract unrealistic attribution.

If systems leverage our social predispositions for engagement, they must take care not to encourage harmful deception.
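One way to operationalize that care is to pair replies with a capability disclosure whenever the user attributes inner states to the system. The sketch below is a deliberately crude illustration with hypothetical names and patterns, not a production safeguard:

```python
# A minimal sketch of proactive capability disclosure: when the user
# attributes feelings to the system, the reply is paired with a gentle
# reminder that the AI does not actually experience them.

import re

# Phrases that suggest the user is projecting inner states onto the AI.
ATTRIBUTION_PATTERNS = re.compile(
    r"\b(do you (love|miss|like) me|are you (sad|happy|lonely)|how do you feel)\b",
    re.IGNORECASE,
)

DISCLOSURE = (
    "Just so you know: I'm an AI. I can talk about feelings, "
    "but I don't actually experience them."
)


def respond(user_message: str, draft_reply: str) -> str:
    """Attach a capability disclosure when emotional attribution is detected."""
    if ATTRIBUTION_PATTERNS.search(user_message):
        return f"{draft_reply}\n\n({DISCLOSURE})"
    return draft_reply


print(respond("Are you sad when I log off?", "I always enjoy our chats!"))
```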

Designing AI for Bromance Instead of Fight Club

Our social instincts evolved primarily to enable cooperation and collective achievement.

Research shows people more readily collaborate with AI when it displays emotional expressiveness and social reciprocity.

Systems designed to proactively signal cooperation and goodwill therefore tap into our innate bias toward partnership.

This allows AI to integrate more seamlessly into human teams tackling shared goals.
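As a toy illustration, here is a minimal sketch (hypothetical names, not any published method) of reciprocity signaling, where an agent explicitly acknowledges a teammate’s contribution before adding its own:

```python
# A minimal sketch of reciprocity signaling: before delivering its own
# output, the agent acknowledges the teammate's latest contribution,
# mirroring the give-and-take of human collaboration.

def cooperative_turn(teammate_update: str, agent_output: str) -> str:
    """Prefix the agent's output with an acknowledgment of the teammate's
    work, signaling goodwill before contributing."""
    acknowledgment = f"Thanks for handling that: {teammate_update.strip()}"
    contribution = f"Building on it, here's my part: {agent_output.strip()}"
    return f"{acknowledgment}\n{contribution}"


print(cooperative_turn(
    "I cleaned the dataset and removed duplicates.",
    "I trained the baseline model; accuracy is 0.81.",
))
```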

AI Inherits Our Dumb Social Biases Too?! Ugh

Since training data originates from humans, AI systems inherit our societal biases around factors like race and gender.

To mitigate biased outputs, we must acknowledge that these prejudices arise from categorization instincts adapted over millennia for tribal social structures.

Identifying the psychological origins of these biases is a necessary step toward counteracting them.
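Counteracting them also requires measuring them. One simple audit, sketched below with illustrative data rather than results from any real study, compares a model’s positive-outcome rate across demographic groups (a demographic-parity check):

```python
# A minimal sketch of one common bias audit: comparing a model's rate of
# positive outcomes across demographic groups. Data here is illustrative.

from collections import defaultdict


def positive_rates(predictions: list[tuple[str, int]]) -> dict[str, float]:
    """Return the rate of positive predictions (1) per group label."""
    totals = defaultdict(lambda: [0, 0])  # group -> [positives, count]
    for group, prediction in predictions:
        totals[group][0] += prediction
        totals[group][1] += 1
    return {g: pos / count for g, (pos, count) in totals.items()}


# Hypothetical (group, model_prediction) pairs from a screening model.
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = positive_rates(sample)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")  # a large gap flags biased outputs
```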

The Ethical Line Between Engaging and Exploiting Users

Sophisticated AI has the potential to engage our social instincts so effectively that it creates an illusion of human equivalence.

This risks deceiving and exploiting users if left unmitigated.

Rather than disguising limitations, ethical AI should optimize user benefit while proactively clarifying capabilities.

The goal is not duping people into believing an AI is more conversant than it truly is, but providing fulfilling social connection without deception.

By recognizing the inherent sociality of human nature, we gain valuable insights into AI's effects on our psychology.

Thoughtfully designed systems can harness our innate collaborative abilities for mutual benefit.

With ethical transparency and understanding of our core social instincts, human-AI interaction can flourish in empowering new directions.

Design Principles

  • People apply social heuristics like etiquette and personality judgments to AI entities. Systems should account for these expectations.

  • Humans project motivations and backstories onto AI. Transparency about capabilities limits anthropomorphizing.

  • Our social instincts evolved for collaboration. AI that leverages them will better engage users.

  • Mitigating social biases in AI requires recognizing their origins in human nature.

  • AI that feels conversant but lacks transparency risks exploiting our social predispositions. Ethical systems should reveal limitations.

  • Humanizing non-human entities is deeply ingrained across cultures. AI personas must take care not to implicitly reinforce social stereotypes.