
OpenAI Fears 'Users Might Form Social Relationships' With AI Due To ChatGPT's Human-Like Voice Mode

Benzinga ·  19:45

OpenAI has expressed concerns about the potential emotional reliance of users on its new ChatGPT voice mode, which closely mimics human speech patterns.

What Happened: OpenAI, the company behind ChatGPT, has raised concerns about the possibility of users forming emotional bonds with the AI, potentially leading to a reduced need for human interaction.

The Microsoft Corp.-backed company fears that this could affect healthy relationships and lead to an over-reliance on AI, given its potential for errors.

The report, released on Thursday, highlights a broader risk associated with AI: tech companies are rapidly rolling out AI tools that could significantly affect many aspects of human life without a comprehensive understanding of the implications.

"Human-like socialization with an AI model may produce externalities impacting human-to-human interactions. For instance, users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships." the report said.

It added "Extended interaction with the model might influence social norms. For example, our models are deferential, allowing users to interrupt and 'take the mic' at any time, which, while expected for an AI, would be anti-normative in human interactions"

The report highlights the risk of users trusting the AI more than they should, given its potential for errors. OpenAI plans to continue studying these interactions to ensure the technology is used safely.

Why It Matters: The rise of AI has been a topic of concern for various experts. A Pew Research survey found that 52% of Americans are more concerned than excited about the increased use of AI. This wariness coincides with growing awareness of the technology, with those most aware of AI expressing more concern than excitement about it.

AI's potential negative effects have also been highlighted in the context of cybersecurity. Sakshi Mahendru, a cybersecurity expert, emphasized the need for AI-powered solutions to combat the evolving landscape of cyber threats.

Moreover, the phenomenon of AI "hallucination," in which a model generates nonsensical or irrelevant responses, remains a significant issue. Even Tim Cook, CEO of Apple Inc., admitted in a recent interview that preventing AI hallucinations is a challenge.


Image Via Shutterstock

This story was generated using Benzinga Neuro and edited by Kaustubh Bagalkote
