The Massachusetts attorney general has issued an advisory to developers, suppliers, and users of artificial intelligence, making clear that existing state consumer protection, anti-discrimination, and data privacy laws apply to AI technologies. The advisory responds to the growing use of AI and algorithmic decision-making systems by businesses, noting both the potential benefits and the risks of these technologies. Attorney General Andrea Campbell warned that falsely advertising the usability of AI systems, supplying defective AI systems, or misrepresenting their reliability or safety could all be considered unfair and deceptive practices under state law.

One specific concern highlighted in the advisory is the use of deepfakes, voice cloning, and chatbots for fraudulent purposes. Misrepresenting audio or video content with the intent to deceive others for financial gain or to obtain personal information could violate state law. The advisory also calls on companies to ensure that their AI products are free from bias before they enter the market, to prevent harmful consequences, and stresses that disclosing when consumers are interacting with an algorithm is a key component of consumer protection law.

Elizabeth Mahoney of the Massachusetts High Technology Council underscored the importance of clarifying how state and federal rules apply to AI systems so that consumers and their data are protected. The advisory seeks to establish clear ground rules for developers, suppliers, and users of AI technologies so that problems such as bias and lack of transparency do not put consumers at risk. While acknowledging AI's potential benefits for society, Campbell insists that AI systems be accurate, fair, and effective, and that they comply with anti-discrimination laws prohibiting both discriminatory inputs and discriminatory results.

The advisory also addresses the need to safeguard personal data used by AI systems and to comply with the state's data breach notification requirements. AI developers, suppliers, and users are urged to protect personal data and to follow existing data privacy laws in order to prevent breaches that could harm consumers. Regulators further stress that companies must be transparent about their use of algorithms and disclose when AI technologies are in use, or risk running afoul of consumer protection laws. Campbell's advisory serves as a reminder to the AI industry of its legal obligations and of the potential consequences of non-compliance with existing state law.
