The Cabinet Division has raised concerns about the potential theft of sensitive information through ChatGPT and other AI applications.
In response, it has issued an advisory urging cautious use of these technologies to protect sensitive data, personal information, and business details from cyber risks.
The advisory highlights that individuals handling highly sensitive information should avoid using chatbots altogether to minimize the risk of data breaches. To further safeguard communications, the Cabinet Division recommends deleting sensitive conversations manually rather than relying on automated processes.
One of the key recommendations in the advisory is to use devices that do not contain private or official data when interacting with chatbots.
Institutions are advised to restrict chatbot access to authorized individuals only, a measure intended to control the use of AI applications and ensure that only personnel with proper clearance can use these tools.
To prevent unauthorized use of AI applications, the Cabinet Division proposes adopting secure communication channels. This includes using encrypted messaging services and secure network connections to protect data integrity.
The advisory also emphasizes the importance of training employees to be cautious when using AI applications, suggesting that educating staff on the risks and best practices associated with AI tools can help mitigate potential cyber threats.
The advisory stresses the need to ensure that employees do not share sensitive data through AI applications. By reinforcing data security protocols, organizations can better protect themselves from cyber risks associated with the use of AI technologies.