Privacy Concerns Resurface with Launch of ChatGPT on iPhone


Will Young

OpenAI’s artificial intelligence chatbot, ChatGPT, has made its way onto the iPhone, raising fresh concerns about user privacy. Critics have cautioned consumers to think twice before sharing personal information with the app, as even anonymized data may still be linked back to individuals.

Muskaan Saxena of Tech Radar warns that getting too personal with the AI bot could jeopardize users’ privacy. Although the application purports to anonymize chats so they cannot be linked to specific users, such measures might not fully safeguard consumer privacy.

Joey Stanford, vice president of privacy and security at Platform.sh, has pointed out that anonymized data could potentially be re-identified by cross-referencing it with other data sources. Similarly, Jen Caltrider, the lead researcher for Mozilla’s Privacy Not Included project, highlighted that while OpenAI states it isn’t collecting location data, its privacy policy implies that it could.
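The kind of cross-referencing Stanford describes can be surprisingly simple. The sketch below is a minimal, hypothetical illustration in Python; the datasets, names, and column labels are all invented for the example. It shows how matching an “anonymized” chat record against a public roster on shared quasi-identifiers (ZIP code, birth year, employer) can tie the chat back to a named person.

```python
# Hypothetical illustration of re-identification by cross-referencing.
# The "anonymized" chat log has no names, but quasi-identifiers remain.
# Matching them against a public roster can narrow a record down to one person.
# All data below is invented for illustration.

anonymized_chats = [
    {"user_id": "a91f", "zip": "02139", "birth_year": 1987, "employer": "Acme Corp",
     "prompt": "Draft a resignation letter for me..."},
]

public_roster = [
    {"name": "Jane Doe", "zip": "02139", "birth_year": 1987, "employer": "Acme Corp"},
    {"name": "John Roe", "zip": "94103", "birth_year": 1990, "employer": "Initech"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "employer")

def reidentify(chats, roster):
    """Match 'anonymous' chat records to named people via shared quasi-identifiers."""
    matches = []
    for chat in chats:
        candidates = [
            person for person in roster
            if all(person[key] == chat[key] for key in QUASI_IDENTIFIERS)
        ]
        if len(candidates) == 1:  # a unique match re-identifies the user
            matches.append((candidates[0]["name"], chat["prompt"]))
    return matches

print(reidentify(anonymized_chats, public_roster))
# [('Jane Doe', 'Draft a resignation letter for me...')]
```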

Despite these concerns, Caltrider acknowledged that OpenAI has been transparent about its intent to use user information to train its AI model. They’re “not hiding anything,” she said.

Using the ChatGPT app also carries risks around the sensitive information users type in, such as names, workplaces, and other personal details. As Caleb Withers, a research assistant at the Center for a New American Security, explained, any personal data shared in this way would not be anonymized. He suggests treating every interaction with the app as a conversation with an OpenAI employee.


James McQuiggan, a security awareness advocate at KnowBe4, echoed these sentiments, noting that even when organizations prioritize data security, vulnerabilities may still exist that malicious actors could exploit. He therefore advised users to be cautious and to weigh whether sharing sensitive information is really necessary.

Mobile phones do offer certain built-in protections that can help secure personal data, but they may not be enough to fully shield users from privacy threats. Mark N. Vena, president and principal analyst at SmartTech Research, stresses the importance of understanding a service’s specific privacy policies and practices to ensure your data is handled appropriately.

Notably, Jen Caltrider identified a significant difference in privacy practices between the web and iOS versions of ChatGPT: users can opt out of their chat data being used to improve ChatGPT on the website, but that setting is absent from the iOS app.

Ultimately, privacy advocates and experts are calling for federal legislation to address privacy concerns related to AI technology. Hodan Omaar, a senior AI policy analyst at the Center for Data Innovation, has voiced her support for a federal data privacy law to ensure clear standards are followed by all companies working with AI.
