AI Regulations You Should Know About — A Quick Guide

Existing laws already apply to AI — civil rights, HIPAA, GLBA, and new state rules. Here is what matters for your work.

AI Guru Team

Here is something that surprises many people: you do not need new laws to hold organizations accountable for AI decisions. Existing laws already apply. If a decision made by a human would violate anti-discrimination law, the same decision made by an AI violates the same law. The tool changed; the legal obligation did not.

Laws That Already Cover AI

Civil Rights Act (Title VII) and ADA

If an AI hiring tool disproportionately screens out candidates based on race, gender, religion, national origin, or disability, that is employment discrimination — regardless of whether a human or an algorithm made the decision. The EEOC has made this explicit: employers are responsible for the outcomes of AI tools they use in employment decisions.
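"Disproportionately screens out" has a common rule-of-thumb measure: the EEOC's four-fifths guideline, under which a selection rate for one group below 80% of the highest group's rate is generally regarded as evidence of adverse impact. A minimal sketch of that calculation, using hypothetical hiring-funnel numbers (real bias audits are considerably more involved):

```python
# Sketch of the EEOC "four-fifths" rule of thumb for adverse impact.
# The applicant counts below are hypothetical, for illustration only.

def impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of group A's selection rate to group B's (the reference group)."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_a / rate_b

# Suppose an AI screener advanced 30 of 100 applicants from group A
# but 50 of 100 from group B.
ratio = impact_ratio(30, 100, 50, 100)
print(f"Impact ratio: {ratio:.2f}")   # 0.60 -- below the 0.8 threshold
print("Flag for review" if ratio < 0.8 else "Within four-fifths guideline")
```

A ratio of 0.60 is well under the 0.8 threshold, which is exactly the kind of outcome a bias audit or an EEOC investigation would flag, regardless of whether a recruiter or an algorithm did the screening.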

HIPAA

Health information entered into an AI tool is still protected health information. If your organization is a covered entity, HIPAA's privacy and security requirements apply to AI systems that process patient data. Using a consumer AI chatbot to summarize patient cases could constitute a HIPAA violation if the tool lacks appropriate safeguards.

GLBA (Gramm-Leach-Bliley Act)

Financial institutions using AI for credit decisions, fraud detection, or customer profiling must comply with GLBA's data protection requirements. The AI system's data handling must meet the same standards as any other system processing customer financial information.

FERPA

Educational institutions using AI tools that access student records must ensure compliance with the Family Educational Rights and Privacy Act. AI-powered learning platforms, grading tools, and student analytics must protect student data accordingly.

CCPA and State Privacy Laws

The California Consumer Privacy Act and similar state laws give consumers rights over their personal data — including data used by AI systems. If your AI tool profiles consumers or makes automated decisions about them, these laws may require disclosure, consent, or the right to opt out.

'The AI Did It' Is Not a Defense

This is worth emphasizing: no court or regulatory body has accepted 'the AI did it' as a defense. When an organization deploys an AI tool that produces a discriminatory, privacy-violating, or otherwise harmful outcome, the organization bears responsibility — not the technology vendor, not the algorithm, and certainly not the AI itself.

Regulatory Enforcement Is Increasing

Multiple federal agencies are actively enforcing existing laws against AI-related harms:

  • EEOC has issued formal guidance on AI in employment and is investigating complaints about algorithmic discrimination.
  • FTC has taken enforcement action against companies making deceptive AI claims and companies whose AI tools caused consumer harm.
  • CFPB has warned that AI-driven credit decisions must comply with fair lending laws and that 'the algorithm decided' is not an acceptable adverse action explanation.
  • OCR (HHS) is investigating AI-related HIPAA complaints and has issued guidance on AI use in healthcare.

Emerging State and Local Laws

While federal AI-specific legislation continues to develop, state and local governments are moving faster:

  • New York City requires bias audits for AI tools used in hiring decisions (Local Law 144).
  • Illinois requires consent before using AI to analyze video interviews (the Artificial Intelligence Video Interview Act).
  • Colorado has enacted broad AI governance requirements for high-risk AI systems (the Colorado AI Act).
  • Several states have proposed or enacted AI transparency requirements.

Your Responsibility

You do not need to be a lawyer. But you should know which rules apply to the data you handle and the decisions you influence. If you use AI tools that process personal information, make employment decisions, handle health data, or affect consumers, existing regulations already set the boundaries.

When in doubt, ask your compliance team or legal department. The question 'Does our AI use comply with existing regulations?' is always worth asking — and the answer is increasingly consequential.

Tags:
AI Literacy, Regulation, Compliance, Governance, AI Law, level:beginner
