Description
Confidential information might be included as part of the prompt that is sent to the model.
Why is confidential data in prompts a concern for foundation models?
If the model is not properly designed to protect confidential data, it might reveal confidential information or intellectual property (IP) in its generated output. Additionally, end users' confidential information might be unintentionally collected and stored.
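The exposure happens at the point where a prompt leaves the organization's boundary, so one common safeguard is to screen prompts before any external call. The following is a minimal illustrative sketch of that idea; the pattern names, regexes, and the `redact_prompt` helper are hypothetical assumptions for demonstration, not part of any vendor's API, and a real deployment would rely on organization-specific classifiers or data loss prevention tooling.

```python
import re

# Hypothetical patterns for confidential content (illustrative assumptions only).
CONFIDENTIAL_PATTERNS = {
    "api_key": re.compile(r"(?i)\b(?:api[_-]?key|secret)\s*[:=]\s*\S+"),
    "internal_host": re.compile(r"\b\w+\.internal\.example\.com\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace matches of known confidential patterns before the prompt
    is sent to an external model. Returns the redacted prompt and the
    list of pattern names that fired."""
    findings = []
    for name, pattern in CONFIDENTIAL_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Please debug this config: api_key = sk-123abc on db1.internal.example.com"
    safe, hits = redact_prompt(raw)
    print(safe)  # confidential values replaced before any external call
    print(hits)  # ['api_key', 'internal_host']
```

A screen like this cannot catch free-form confidential content such as proprietary source code or meeting transcripts, which is why policy restrictions of the kind described below are often used alongside technical controls.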
Disclosure of Confidential Information
According to the source article, Samsung employees disclosed confidential information to OpenAI through their use of ChatGPT. In one instance, an employee pasted confidential source code into ChatGPT to check it for errors. In another, an employee shared code with ChatGPT and "requested code optimization". A third shared a recording of a meeting so that it could be converted into notes for a presentation. Samsung limited internal ChatGPT usage in response to these incidents, but it is unlikely that the company can recall any of the data that was already shared. The article also highlighted that, in response to the risk of leaking confidential and other sensitive information, companies such as Apple, JPMorgan Chase, Deutsche Bank, Verizon, Walmart, Samsung, Amazon, and Accenture placed restrictions on the use of ChatGPT.
Parent topic: AI risk atlas
We provide examples covered by the press to help explain many of the risks of foundation models. Many of these events covered by the press are either still evolving or have been resolved, and referencing them can help the reader understand the potential risks and work towards mitigations. These examples are highlighted for illustrative purposes only.