In a recent podcast interview with Theo Von, OpenAI CEO Sam Altman expressed concerns about privacy in artificial intelligence (AI). He said that private information shared with the company’s chatbot, ChatGPT, is not legally protected, highlighting significant privacy gaps in AI interactions.
Altman compared AI conversations to those with therapists, lawyers, or doctors, which are legally privileged and remain confidential. Conversations with ChatGPT have no such safeguards, meaning OpenAI could be compelled to disclose personal or sensitive details users share with the chatbot if a lawsuit is filed.
With AI tools increasingly filling a wide range of roles, the absence of confidentiality laws poses a challenge. Altman called this a “big problem” and said it’s “not right” for users to have no protections when using AI tools. As more people turn to AI for tasks such as mental health support, medical advice, or financial guidance, he believes laws should protect the privacy of those conversations, just as they do for professional interactions.
Ethical and Surveillance Concerns
Altman also voiced concerns about personal data being accessed or misused. He said he is cautious about using some AI tools himself and has spoken with lawmakers who agree that laws protecting the privacy of digital conversations are needed.
Further, Altman cautioned that as AI becomes more common, governments might expand surveillance to curb terrorism and other crimes. He acknowledged that while some monitoring is necessary for safety, there is a risk that governments could overreach.
His comments underscore the need for comprehensive AI privacy frameworks and point to broader challenges facing the AI industry.
