Vulnerabilities in ChatGPT Plug-ins Pose Risk of Exposing Sensitive Data

By Admin | 2024-03-16 | Vulnerabilities

Recent discoveries of vulnerabilities in ChatGPT plug-ins, now rectified, have underscored concerns over the potential theft of proprietary information and the heightened threat of account takeovers.

Researchers from Salt Labs identified three critical vulnerabilities in ChatGPT's plug-in functionality. The flaws allowed attackers to gain zero-click, unauthorized access to users' accounts and connected services, including sensitive repositories on platforms such as GitHub.

ChatGPT plug-ins and customized versions allow developers to extend the AI model's capabilities, enabling interactions with external services such as GitHub and Google Drive. However, vulnerabilities in these plug-ins posed significant security risks.

The first vulnerability occurred during the installation of new plug-ins, when ChatGPT redirects users to the plug-in's website to approve a code. Because nothing verified that the approved code belonged to a flow the user had actually started, attackers could trick victims into approving a code for a malicious plug-in, leading to its automatic installation and potential account compromise.
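To make this class of flaw concrete, below is a minimal Python sketch, not the actual ChatGPT code: an install callback that accepts any approval code, contrasted with one that binds a one-time state value to the session that initiated the flow. The plugin.example URL and all function names here are hypothetical.

```python
import hmac
import secrets

# Hypothetical sketch of a CSRF-style weakness in an OAuth-like
# plug-in install flow: the service accepts an approval code without
# verifying that the current user actually initiated the request.

SESSIONS: dict[str, str] = {}  # session_id -> expected state value

def begin_install(session_id: str) -> str:
    """Start a plug-in install: bind a one-time `state` to the session."""
    state = secrets.token_urlsafe(32)
    SESSIONS[session_id] = state
    # The user is redirected to the plug-in site with this state attached.
    return f"https://plugin.example/authorize?state={state}"

def finish_install_vulnerable(session_id: str, code: str) -> str:
    # VULNERABLE: the code is accepted as-is. An attacker can hand the
    # victim a link carrying a code for a malicious plug-in, and it is
    # installed under the victim's account with no further interaction.
    return f"installed plug-in with approval code {code}"

def finish_install_fixed(session_id: str, code: str, state: str) -> str:
    # FIXED: reject the code unless `state` matches the value bound to
    # this session, proving the same user started the flow.
    expected = SESSIONS.pop(session_id, None)
    if expected is None or not hmac.compare_digest(expected, state):
        raise PermissionError("state mismatch: possible forged install link")
    return f"installed plug-in with approval code {code}"
```

The fix mirrors the standard OAuth `state` parameter: a value the attacker cannot predict ties the approval step back to the session that requested it.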

The second vulnerability lay in PluginLab, a framework for developing plug-ins, which did not properly authenticate users. This allowed attackers to impersonate victims and execute account takeovers, as the researchers demonstrated with "AskTheCode", a plug-in that connects ChatGPT with GitHub.
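As an illustration of the underlying pattern, here is a hypothetical sketch of an authorization endpoint that issues codes without tying the request to a logged-in session. The member-ID model and both function names are assumptions for illustration, not PluginLab's actual API.

```python
import secrets

# Sketch of the impersonation flaw: an endpoint mints an authorization
# code for any member ID the caller supplies, without checking that the
# caller is actually logged in as that member.

CODES: dict[str, str] = {}  # code -> member_id it grants access to

def issue_code_vulnerable(member_id: str) -> str:
    # VULNERABLE: nothing ties the request to an authenticated session,
    # so anyone who learns a victim's member ID can mint a code for
    # that member and take over their plug-in account.
    code = secrets.token_urlsafe(16)
    CODES[code] = member_id
    return code

def issue_code_fixed(session_member_id: str, requested_member_id: str) -> str:
    # FIXED: only issue a code for the member the authenticated session
    # actually belongs to; reject requests on behalf of other users.
    if session_member_id != requested_member_id:
        raise PermissionError("cannot request a code for another member")
    code = secrets.token_urlsafe(16)
    CODES[code] = requested_member_id
    return code
```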

Finally, certain plug-ins did not validate the redirect URL in their OAuth flows. Attackers could insert a malicious URL, capture the credentials sent to it, and carry out further account takeovers.
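The standard defense is to redirect only to URLs registered in advance. Below is a minimal sketch, again with hypothetical names and an assumed plugin.example callback, contrasting an open redirect with an exact-match allowlist.

```python
from urllib.parse import urlsplit

# Sketch of the third flaw: an OAuth authorize endpoint that sends the
# authorization code to whatever redirect_uri the request supplies,
# versus one that enforces a registered allowlist.

REGISTERED_REDIRECTS = {"https://plugin.example/oauth/callback"}  # assumed

def redirect_vulnerable(redirect_uri: str, auth_code: str) -> str:
    # VULNERABLE: the code is sent wherever the attacker-controlled
    # parameter points, leaking it to a site the attacker operates.
    return f"302 Location: {redirect_uri}?code={auth_code}"

def redirect_fixed(redirect_uri: str, auth_code: str) -> str:
    # FIXED: exact-match the URI against the set registered when the
    # plug-in was created; anything else is rejected outright.
    if redirect_uri not in REGISTERED_REDIRECTS:
        raise ValueError("unregistered redirect_uri rejected")
    parts = urlsplit(redirect_uri)
    assert parts.scheme == "https"  # never hand codes to plain HTTP
    return f"302 Location: {redirect_uri}?code={auth_code}"
```

Exact-match comparison matters here: prefix or substring checks can be bypassed with crafted URLs, which is why the allowlist lookup rejects anything not registered verbatim.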

These issues have since been addressed, and there is no evidence they were exploited in the wild, but users are urged to update their applications promptly.

Yaniv Balmas, VP of Research at Salt Security, emphasized the need for organizations to review the plug-ins and GPTs they use and conduct security reviews of their code. He also suggested developers become more familiar with the GenAI ecosystem's internals and security measures.

Sarah Jones, a cyber threat intelligence research analyst at Critical Start, echoed concerns over broader security risks associated with GenAI plug-ins, emphasizing the importance of robust security standards and regular audits.

Darren Guccione, CEO and co-founder at Keeper Security, warned of the inherent security risks involved with third-party applications and stressed the need for organizations to prioritize security evaluations and employee training, particularly as AI-enabled applications become more prevalent.

As AI tools continue to handle proprietary data, unauthorized access could have devastating consequences for organizations. Therefore, stringent security controls and data governance policies are essential to mitigate risks associated with AI applications.