ChatGPT is the most important artificial intelligence development of recent years. Last month we discussed its impact on data protection. Today we will delve deeper into the risks it poses to privacy. Let’s go!
Chatbots rely on our personal data: the more data they collect, the more accurately they detect patterns, anticipate behaviour and generate responses. Under that assumption, the risk to privacy increases substantially.
Moreover, the information used to train artificial intelligence products like ChatGPT is taken from the Internet, and it includes personal data that is often acquired without user consent, a practice that breaches privacy regulations.
Measures that ChatGPT must adopt to comply with privacy regulations
- Inform users about data processing: what data is used and for what purpose.
- Inform data subjects of how they can object to the use of their data for training algorithms, and implement mechanisms that allow them to do so.
- Conduct information campaigns in the media.
- Set up mechanisms to prevent users under the age of 13 from accessing the service.
Provisions of the privacy regulation that are not met
Firstly, the duty of information is breached since complete and transparent information is generally not provided to users and interested parties regarding the processing their data undergoes in these systems.
Additionally, these systems do not comply with the principles that govern personal data processing, including the principle of accuracy. Because a large amount of the data fed into them is inaccurate, the result is large-scale misinformation. In this vein, one of the most serious concerns regarding the use of systems like ChatGPT is their tendency to fabricate or embellish information and to amplify bias in the answers given to the user.
Another notable violation is the lack of a legitimate basis for the mass processing of personal data used to train the algorithms that govern chatbot operation.
It is also important to highlight that the principle of confidentiality is contravened and that there is a lack of security measures. This greatly increases the risk of breaches and cyberattacks.
Therefore, we can conclude that although AI holds the potential to transform sectors, solve problems, simplify answers and serve as a great source of information, it also poses serious ethical and social risks.
It is important to note that chatbot systems can reproduce, reinforce and amplify patterns of discrimination and/or inequality. As a result, irresponsible data handling by these systems leads to unreliable results, which can harm the well-being of citizens and undermine legal certainty.
Will a robust regulatory framework be created worldwide that regulates artificial intelligence systems like ChatGPT? Here at Bacaria we will closely monitor the situation and keep you up to date.