Olha Petriv
Artificial intelligence lawyer, “Independent Media”
We are constantly connected to the network — a smartphone in our pocket, a smartwatch on our wrist, a food delivery app just a few taps away. All of this generates an endless stream of information about our preferences, location, and behavior. Many platforms collect vast amounts of data without explaining why, and as a result users “give away” their information without realizing who might end up using it.
Imagine that your credit application is rejected, but you receive no explanation of why. Or an algorithm screens out your resume even though you meet all the requirements for the job. An AI system makes such decisions in seconds, yet the person affected cannot always understand why. This “opacity” breeds mistrust of technology and a sense of injustice.
Cyber threats deserve special attention. Massive databases storing confidential information regularly attract cybercriminals. In the event of a successful attack, both the reputation of the organization and the security of users may be at risk, as their personal data could be exposed.
This highlights the need for clear and transparent rules regarding the collection and use of personal data. As a result, lawmakers around the world are developing norms and standards aimed at balancing the interests of commercial companies with citizens’ rights to privacy and security. In this consultation, we will explore how personal data is regulated in the United States, Canada, and the European Union.
First and foremost, it is important to understand that personal data refers to any information that can be used to identify an individual. This can include the following (a short illustrative example follows the list):
- General data: name, date of birth, address;
- Contact information: phone number, email address;
- Technical data: IP address, cookies;
- Sensitive (special) data: information about health, racial or ethnic origin, biometric data, etc.
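To make these categories concrete, here is a minimal, purely illustrative sketch of how a record combining such fields might be modeled and how sensitive attributes can be flagged for stricter handling. All class and field names are hypothetical and are not taken from any statute or standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PersonalDataRecord:
    """Illustrative model of the categories of personal data listed above."""
    # General data
    full_name: str
    date_of_birth: str                      # e.g. "1990-04-12"
    address: Optional[str] = None
    # Contact information
    phone_number: Optional[str] = None
    email: Optional[str] = None
    # Technical data
    ip_address: Optional[str] = None
    cookie_id: Optional[str] = None
    # Sensitive (special) data, flagged so it can be handled under stricter rules
    health_data: Optional[str] = None
    biometric_data: Optional[bytes] = None

    def sensitive_fields(self) -> list[str]:
        """Return the names of sensitive attributes that are actually populated."""
        candidates = ("health_data", "biometric_data")
        return [name for name in candidates if getattr(self, name) is not None]

# Example usage: the record carries general, contact, and sensitive data.
record = PersonalDataRecord(full_name="Jane Doe", date_of_birth="1990-04-12",
                            email="jane@example.com", health_data="blood type: O+")
print(record.sensitive_fields())  # ['health_data']
```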
With the advent of big data sets and the rapid development of AI, the risks of data breaches, unauthorized access, and manipulation of personal information are growing. Algorithms can build remarkably accurate user profiles from this data, which creates both opportunities and dangers. It is therefore worth examining how these issues are currently regulated.
United States of America
The United States does not have a single federal law that comprehensively covers all aspects of personal data protection. Instead, there are various sector-specific acts, as well as laws at the state level, that significantly influence the requirements for data processing and storage:
Key Federal Laws
The Health Insurance Portability and Accountability Act (HIPAA) regulates the protection of patient medical data, including setting standards for encryption, access, and the transmission of information between hospitals, insurance companies, and other participants in the medical sector. This law also includes the so-called “Privacy Rule” and “Security Rule”: the former requires organizations to limit the use and disclosure of confidential data, while the latter mandates the implementation of technical and administrative security measures to prevent unauthorized access. Additionally, HIPAA includes provisions for penalties (fines and criminal liability) in cases of intentional or gross negligence in complying with the requirements.
For the protection of children’s information, the Children’s Online Privacy Protection Act (COPPA) applies, covering digital platforms (websites, apps, online services) that may collect or process data from children under 13 years old. Under this act, such organizations are required to obtain verified parental consent before collecting or using children’s information. The law also requires a transparent privacy policy that clearly explains what data is collected, how long it is retained, and who it may be shared with. Violations of COPPA can result in fines from the Federal Trade Commission (FTC), with amounts reaching millions of dollars depending on the scale and nature of the violation.
In the financial sector, the Gramm-Leach-Bliley Act (GLBA) is the most important document. It contains several requirements, including the “Financial Privacy Rule” and the “Safeguards Rule”, which oblige banks, credit institutions, and insurance companies to protect the confidentiality and integrity of their clients’ financial data. Specifically, organizations must inform clients about what types of data are collected, for what purpose, and to whom they may be transferred. The law also requires implementing technical and organizational security measures to prevent unauthorized access to sensitive information. Separate mechanisms for control (through various regulatory bodies at both the federal and state levels) and accountability for non-compliance with established rules are also provided.
For the education sector, the Family Educational Rights and Privacy Act (FERPA) protects student information held by educational institutions. The act applies to public and most private schools, colleges, and universities that receive federal funding. It guarantees parents and students (once students turn 18 or enroll in a postsecondary institution) the right to access educational records, correct errors or inaccuracies, and restrict the disclosure of this information under certain conditions. If an educational institution discloses such records without proper consent, it may lose federal funding or face other disciplinary measures.
Overall, these four acts demonstrate the U.S. sectoral approach to regulating personal data protection. Each document focuses on a specific aspect of privacy — medical records, children’s data, financial information, or educational documents — thereby filling the particular niches where data protection is most critical. This regulatory format ensures a deep level of detail for each sector but leads to a lack of uniformity in legislation: depending on the state and sector, different rules and approaches to protecting consumer rights may apply.
Legislation at the State Level
At the state level, the most well-known and stringent personal data protection law is the California Consumer Privacy Act (CCPA), which came into effect on January 1, 2020. It grants California residents the right to know what data companies collect about them, demand the deletion of their personal information, and prohibit its sale. Additionally, the CCPA requires organizations to disclose the third parties with whom they share user data and explain the purpose of such sharing. Subsequently, the California Privacy Rights Act (CPRA) was added, expanding the definition of “sensitive personal data” and strengthening requirements for user notifications and consent. California’s approach has served as a model for other states looking to implement similar data protection regulations.
Currently, several other states — such as Virginia, Colorado, Utah, and Connecticut — have also passed their own laws regulating the collection and use of personal data. While these laws may differ in certain respects due to local economic or legal considerations, they generally follow California’s lead.
At the federal level, the U.S. does not have a unified law that covers all aspects of artificial intelligence (AI) development and application. However, there are government initiatives aimed at regulating this area. In October 2023, President Joe Biden signed an executive order establishing new safety standards for AI and providing for the development of recommendations on ethical norms and accountability principles for AI systems. However, on January 21, 2025, President Donald Trump rescinded this order, citing the need to promote innovation in the AI sector.
European Union: GDPR and Artificial Intelligence Act
In the European Union, data protection is governed by the General Data Protection Regulation (GDPR), which was adopted in 2016 and became applicable on May 25, 2018. GDPR significantly impacts the development and use of AI systems by setting requirements for the processing of personal data. According to GDPR, organizations must ensure the lawfulness, transparency, and fairness of data processing, which is critical when training AI models on large datasets that may contain personal information.
Key aspects of GDPR’s impact on AI:
- Lawfulness of Data Processing: Organizations must have a legal basis for collecting and processing personal data. This can be the consent of the data subject or other grounds provided by GDPR.
- “Privacy by Design” and “Privacy by Default”: AI developers must implement data protection measures during the design phase of the system and ensure that, by default, only data necessary for a specific purpose is processed (a minimal sketch of this principle appears after this list).
- Data Subject Rights: GDPR grants individuals the right to access, correct, delete, and restrict the processing of their data. AI systems must be designed to enable the exercise of these rights.
- Data Protection Impact Assessment: If data processing via AI may result in high risks to individuals’ rights and freedoms, organizations must conduct a Data Protection Impact Assessment (DPIA) to identify and minimize these risks.
- Transparency and Explainability: GDPR requires data processing to be transparent to data subjects. In the context of AI, this means organizations must provide clear information about how their algorithms work and how personal data is used.
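As a loose illustration of how “privacy by default” and data subject rights might translate into application code, the sketch below stores only the fields needed for a declared purpose and supports an erasure request. The function names, field names, and purposes are invented for this example; GDPR prescribes the principles, not any particular implementation.

```python
# Hypothetical in-memory "database" of user records keyed by user ID.
USER_DB: dict[str, dict] = {}

# Only the fields strictly needed for each declared purpose (data minimisation).
ALLOWED_FIELDS_BY_PURPOSE = {
    "newsletter": {"email"},
    "delivery": {"full_name", "address", "phone_number"},
}

def collect_data(user_id: str, purpose: str, submitted: dict) -> None:
    """Store only the fields required for the declared purpose ("privacy by default")."""
    allowed = ALLOWED_FIELDS_BY_PURPOSE.get(purpose, set())
    minimized = {k: v for k, v in submitted.items() if k in allowed}
    USER_DB.setdefault(user_id, {}).update(minimized)

def erase_user(user_id: str) -> bool:
    """Honour a data subject's erasure request (right to erasure)."""
    return USER_DB.pop(user_id, None) is not None

# Example: fields not needed for the stated purpose are simply discarded.
collect_data("u1", "newsletter", {"email": "jane@example.com", "address": "discarded"})
print(USER_DB)            # {'u1': {'email': 'jane@example.com'}}
print(erase_user("u1"))   # True
```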
Artificial Intelligence Act
The Artificial Intelligence Act (AI Act) is a regulation from the European Union that establishes a unified legal and regulatory framework for artificial intelligence within the EU. Its aim is to ensure the safe and ethical use of AI, considering the potential risks to society and human rights. The AI Act was published in the Official Journal of the European Union on July 12, 2024, and came into force on August 1, 2024. Its provisions will be gradually implemented over a period of 6 to 36 months, depending on specific requirements.
The AI Act complements the General Data Protection Regulation (GDPR) without altering it. It sets requirements for developers and users of AI systems regarding safety, transparency, and accountability, ensuring the protection of personal data and fundamental human rights. These include mandatory risk assessments, measures to mitigate potential negative outcomes, and accountability for AI usage.
The AI Act classifies AI systems based on their risk level into four categories:
- Unacceptable Risk: Systems that are prohibited due to the threat they pose to safety or human rights.
- High Risk: Systems that require stringent control and compliance with established standards.
- Limited Risk: Systems with limited transparency requirements.
- Minimal Risk: Systems that are not subject to specific regulation.
This approach allows for adapting regulatory requirements depending on the potential societal impact of AI systems.
Thus, the Artificial Intelligence Act establishes clear rules for the development and use of AI within the EU, balancing innovation with the protection of human rights.
Canada
In Canada, the processing of personal data, including by AI systems, is governed by the Personal Information Protection and Electronic Documents Act (PIPEDA), passed in 2000. This law sets rules for the collection, use, and disclosure of personal information in the commercial sector, ensuring a balance between business needs and individuals’ rights to privacy.
With the development of technology and the growing use of AI, there was a need to update the legislation. In June 2022, the Canadian government introduced Bill C-27, which includes the Artificial Intelligence and Data Act (AIDA). This bill aims to establish unified standards for the development and use of AI systems across the country. It requires transparency, non-discrimination, and security, and prohibits AI practices that could cause significant harm to individuals or their interests.
AIDA also requires organizations to encrypt and de-identify personal data used for training AI in order to protect individuals’ privacy. Moreover, the bill proposes the appointment of an AI and Data Commissioner to oversee compliance with these requirements.
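As a hedged sketch of what de-identifying personal data before AI training could look like in practice (the bill itself does not prescribe a particular technique), the example below drops direct identifiers and replaces the user ID with a keyed pseudonym. The field names, secret key handling, and choice of method are assumptions made purely for illustration.

```python
import hashlib
import hmac

# Secret key for pseudonymisation; in practice it would be stored securely, not in code.
PSEUDONYM_KEY = b"replace-with-a-securely-stored-secret"

# Fields treated as direct identifiers and removed before training.
DIRECT_IDENTIFIERS = {"full_name", "email", "phone_number", "address"}

def de_identify(record: dict) -> dict:
    """Drop direct identifiers and replace the user ID with a keyed pseudonym."""
    pseudonym = hmac.new(PSEUDONYM_KEY, record["user_id"].encode(), hashlib.sha256).hexdigest()
    cleaned = {k: v for k, v in record.items()
               if k not in DIRECT_IDENTIFIERS and k != "user_id"}
    cleaned["pseudonym"] = pseudonym
    return cleaned

# Example: only non-identifying attributes survive for model training.
raw = {"user_id": "u1", "full_name": "Jane Doe", "email": "jane@example.com",
       "age_group": "25-34", "purchase_count": 7}
print(de_identify(raw))  # {'age_group': '25-34', 'purchase_count': 7, 'pseudonym': '...'}
```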
Thus, Canada is actively updating its legislation to ensure the responsible and ethical use of AI while protecting the personal data of its citizens.
As AI becomes an integral part of the economy and daily life, the issue of protecting personal data has become increasingly urgent. In the United States, regulation is still largely handled by sector-specific and state laws. The European Union already has strict regulations through GDPR and is strengthening oversight with the AI Act. Canada, while operating under PIPEDA, is working on the C-27 bill to address the specifics of AI and strengthen citizens’ rights.
These initiatives, on the one hand, curb the uncontrolled spread of data, and on the other, demonstrate the global aspiration to balance technological innovation with the protection of fundamental freedoms. Transparent rules and accountability for AI developers, along with rigorous protection of personal data, will ensure the sustainable development of a digital society where everyone has the right to a dignified and secure digital future.