California Softens AI Privacy Rules, Big Tech Wins a Round
Regulator Retreats Under Pressure
California’s privacy watchdog, the California Privacy Protection Agency (CPPA), has significantly scaled back proposed rules governing how companies use artificial intelligence and automated decision-making technologies. Initially designed to give consumers more transparency and control over how their data is used—such as requiring businesses to disclose use of AI in hiring or targeted advertising—the draft rules were expected to be a landmark move in regulating AI at the state level. However, under public pressure from major tech companies, including Google and Meta, the agency has weakened the proposal. The revised draft now contains softer obligations and exempts many common uses of AI systems from deeper scrutiny, drawing criticism from privacy advocates concerned about unchecked corporate surveillance.
Industry Relief, Watchdog Worries
Tech giants and business groups have voiced relief at the scaled-back rules, claiming that the original version would have stifled innovation and imposed confusing, costly compliance requirements. But privacy experts and consumer rights organizations say the updated draft lets firms off the hook, especially as AI-driven tools increasingly shape decisions about employment, housing, and finances. Critics argue that California, once a pioneer in digital privacy with landmark legislation like the California Consumer Privacy Act (CCPA), is now backpedaling at a critical moment. The diluted rules have ignited worries that regulatory inertia will allow big tech to advance increasingly invasive AI tools without adequate oversight, or public understanding.