A bipartisan duo of House lawmakers is pushing to amend Section 230, targeting tech platforms that fail to remove intimate AI deepfakes. Representatives Jake Auchincloss (D-MA) and Ashley Hinson (R-IA) have introduced the Intimate Privacy Protection Act to combat cyberstalking, intimate privacy violations, and digital forgeries.
Key Highlights:
- The bill requires platforms to have a "reasonable process" to address cyberstalking and digital forgeries, including AI deepfakes.
- Platforms must act responsibly by preventing privacy violations, providing clear reporting methods, and removing harmful content within 24 hours.
- The move aims to hold tech companies accountable and prevent them from using Section 230 as a shield against these harms.
Quotes from Lawmakers:
- Rep. Jake Auchincloss: "Congress must prevent these corporations from evading responsibility over the sickening spread of malicious deepfakes and digital forgeries on their platforms."
- Rep. Ashley Hinson: "Big Tech companies shouldn’t be able to hide behind Section 230 if they aren’t protecting users from deepfakes and other intimate privacy violations."
This initiative reflects a growing legislative focus on AI policy: the Senate recently passed the DEFIANCE Act, which allows victims of nonconsensual AI-generated intimate images to pursue civil remedies. Several states have also enacted laws targeting intimate AI deepfakes, particularly those involving minors.
Tech companies like Microsoft have voiced support for regulating AI-generated deepfakes to prevent fraud and abuse.
What's Next?
- The bill includes a duty-of-care mechanism similar to the one in the Kids Online Safety Act, signaling a broader legislative shift toward imposing affirmative obligations on platforms.
- Whether this bipartisan effort succeeds in narrowing Section 230 protections remains to be seen; previous attempts to amend the law have stalled in Congress.
Stay tuned for updates on this crucial development in tech policy.