
OpenAI Fears 'Users Might Form Social Relationships' With AI Due To ChatGPT's Human-Like Voice Mode

Benzinga ·  08/09 19:45

OpenAI has expressed concerns about the potential emotional reliance of users on its new ChatGPT voice mode, which closely mimics human speech patterns.


What Happened: OpenAI, the company behind ChatGPT, has raised concerns about the possibility of users forming emotional bonds with the AI, potentially leading to a reduced need for human interaction.


The Microsoft Corp.-backed company fears that this could affect healthy relationships and lead to an over-reliance on AI, given its potential for errors.


The report, released on Thursday, highlights a broader risk associated with AI. Tech companies are rapidly rolling out AI tools that could significantly impact various aspects of human life, without a comprehensive understanding of the implications.


"Human-like socialization with an AI model may produce externalities impacting human-to-human interactions. For instance, users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships," the report said.


It added, "Extended interaction with the model might influence social norms. For example, our models are deferential, allowing users to interrupt and 'take the mic' at any time, which, while expected for an AI, would be anti-normative in human interactions."


The report highlights the risk of users trusting the AI more than they should, given its potential for errors. OpenAI plans to continue studying these interactions to ensure the technology is used safely.


Why It Matters: The rise of AI has been a topic of concern for various experts. A Pew Research survey found that 52% of Americans are more concerned than excited about the increased use of AI. This wariness coincides with an uptick in awareness about AI, with individuals who are most aware expressing more concern than excitement about AI.


AI's potential negative effects have also been highlighted in the context of cybersecurity. Sakshi Mahendru, a cybersecurity expert, emphasized the need for AI-powered solutions to combat the evolving landscape of cyber threats.


Moreover, the phenomenon of AI "hallucination," where AI generates nonsensical or irrelevant responses, remains a significant issue. Even Tim Cook, CEO of Apple Inc., admitted in a recent interview that preventing AI hallucinations is a challenge.


  • Mark Cuban Says Kamala Harris' VP Pick Tim Walz 'Can Sit At The Kitchen Table And Make You Feel Like You Have Known Him Forever'

Image Via Shutterstock


This story was generated using Benzinga Neuro and edited by Kaustubh Bagalkote
