AI Legislation and Governance: What’s on the Horizon
Artificial intelligence isn’t new, and neither are the laws and regulations that govern how the data used to train AI must be treated. As AI gains wider adoption, additional legislation and guidelines are emerging to protect personal data and to ensure that AI models are developed with risk management and bias prevention in mind.
More than 37 countries are currently working on AI-related legal frameworks, covering new use cases for AI and expanding on laws that may already apply (like the GDPR, the General Data Protection Regulation). Complying with ever-evolving regulatory requirements adds another layer of complexity for organizations bringing AI tools to market. Below are a few of the regulations and guidelines affecting organizations that develop and use AI applications, along with considerations for staying compliant as the regulatory landscape changes.
Laws already on the books
Though AI has seen a recent surge in adoption, the reality is that it has been around for a long time. AI consumes data at a tremendous rate, and multiple areas of existing law may apply to that data. In the United States and elsewhere, unfair and deceptive trade practices laws may apply to the use of AI even if those laws don’t reference the technology explicitly. In addition, parts of the GDPR and the CCPA (the California Consumer Privacy Act) may become more relevant to AI, particularly when personal data is used to train or design models: both lay out security and data protection requirements and require transparency around how AI may use personal data.
New non-binding guidelines
In March 2024, the United Nations adopted a resolution promoting “safe, secure and trustworthy” AI that will benefit sustainable development for all. The resolution encourages countries to safeguard human rights, protect personal data, and monitor AI for risks. Though non-binding, it provides a set of principles to guide AI’s development and use going forward.
In October 2023, President Biden issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Like the UN resolution, the Executive Order provides direction and guidance without imposing penalties on organizations that fail to comply. Various federal agencies have also proposed guidelines; for example, the Securities and Exchange Commission (SEC) has proposed rules to address conflicts of interest related to AI in finance. The tenets of these documents set the stage for future legislation and offer visibility into the standards organizations may need to meet down the road.
Prescriptive legislation with stiff penalties
Earlier this year, the European Parliament voted to approve the European Union’s Artificial Intelligence Act (EU AI Act). The world’s first standalone law focused solely on the development and use of AI, it will enter into force 20 days after it is published in the EU Official Journal, with enforcement phased in over a two-year timeline. Publication is expected in early summer.
The law classifies AI systems into four levels of risk: minimal, limited, high, and unacceptable. Unacceptable uses include:
- manipulating human behavior to circumvent free will through subliminal techniques
- biometric categorization
- social scoring for governments
- scraping facial images from CCTV footage to create facial recognition databases
Unacceptable uses are prohibited, with some limited exceptions for law enforcement. High-risk systems, such as technology used for critical infrastructure, finance, healthcare, justice, and democratic processes, must demonstrate that they adhere to strict requirements around safety, transparency, and data governance. The penalties for violating the new law are steep: fines of up to 7% of a company’s annual global revenue (for comparison, the GDPR caps fines at 4% of annual global revenue).
A Brussels effect is likely in the wake of the EU AI Act, as many governments closely follow the European Union’s lead. Other countries are charting their own course: China, for example, has already passed laws regulating deep synthesis and calling for transparency in how data is used.
Back in the U.S., Colorado recently passed an act providing consumer protections for interactions with high-risk AI systems, calling for developers to exercise reasonable care to prevent algorithmic discrimination. Colorado’s law treats AI systems as high-risk when they make consequential decisions in areas such as:
- Education enrollment or opportunity
- Employment or employment opportunity
- Financial or lending services
- Essential government services
- Health care services
- Housing
- Insurance
- Legal services
Many other states have proposed AI legislation as well.
Strategies for compliance with shifting laws
Most of the laws, guidelines, and proposed legislation focus on privacy, data protection, transparency about how data is collected and used, and risk mitigation. To position themselves to comply with laws governing AI, organizations need to develop a culture of compliance: training employees and creating policies and procedures aligned with AI regulations. Some key elements:
- Stay informed about AI regulations: Regularly monitor changes in AI legislation and guidelines at the local, regional, national, and international levels.
- Implement strong data privacy and security measures: Put robust controls in place to protect the personal data used to train and operate AI systems (a minimal sketch follows this list).
- Promote transparency and explainability: Ensure that your AI systems are transparent and explainable, and provide users with clear information about how data is collected, used, and processed.
- Conduct risk assessments: Identify the risks associated with your AI systems, assess their compliance with applicable regulations, and adopt a risk mitigation framework.
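In practice, the data privacy point above often starts with minimizing the personal data that reaches a model in the first place. The Python sketch below shows one simple approach: pseudonymizing known identifier fields and redacting identifiers embedded in free text before records are used for training. The field names, regular expression, and salt are hypothetical placeholders, not a complete PII-handling solution; a production pipeline would add proper PII detection, key management, access controls, and audit logging.

```python
# Minimal sketch of pre-training data minimization, assuming a simple
# list-of-dicts dataset with hypothetical field names ("email", "full_name",
# "phone", "notes"). Illustrative only, not a complete PII solution.
import hashlib
import re
from typing import Iterable

# Fields assumed to hold direct identifiers that should never reach training.
DIRECT_IDENTIFIERS = {"email", "full_name", "phone"}

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def pseudonymize(value: str, salt: str = "rotate-me") -> str:
    """Replace an identifier with a salted one-way hash, so records stay
    linkable for de-duplication without exposing the raw value."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]


def scrub_record(record: dict) -> dict:
    """Drop or pseudonymize personal data before a record is used for training."""
    cleaned = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            cleaned[key] = pseudonymize(str(value))
        elif isinstance(value, str):
            # Redact identifiers embedded in free text (here: just emails).
            cleaned[key] = EMAIL_PATTERN.sub("[REDACTED_EMAIL]", value)
        else:
            cleaned[key] = value
    return cleaned


def prepare_training_data(records: Iterable[dict]) -> list[dict]:
    return [scrub_record(r) for r in records]


if __name__ == "__main__":
    sample = [{"email": "jane@example.com",
               "notes": "Contact jane@example.com about the claim."}]
    print(prepare_training_data(sample))
```

Pseudonymizing rather than deleting identifiers is one design choice among several; full deletion, tokenization, or differential privacy may be more appropriate depending on the legal basis for processing and the transparency commitments made to users.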