by Ann Cuisia, December 3, 2025


I attended today’s House hearing on artificial intelligence as an invited observer and was surprised to learn that there are now 21 AI-related bills filed in Congress. On paper, this rush looks responsible. Lawmakers want to anticipate risks and protect the public. But after hours of reviewing every bill line by line, a different picture emerged.

AI is new.

But the dangers hiding in these bills are old.

Concentrated power. Silent surveillance. Licensing disguised as governance. Vague definitions that expand regulatory reach. And bureaucracies that grow faster than accountability structures.

Below are the seven silent dangers, now tied to the specific bills that contain them.

1. Overbroad definitions that give government a blank check

Some bills define AI so vaguely that simple software, dashboards, and scripts could fall under their control. Several of the filed drafts cast an extremely wide net.

Even low-risk tools could be forced into regulatory or registration regimes.

This is how overreach begins. Quietly, through definitions.

When wording is vague, regulators gain jurisdiction over technologies they were never meant to control. Students, startups, journalists, and small developers could be forced to “register” or get clearance for work that is nowhere near high-risk.

Regulation must protect citizens, not give agencies silent permission to expand their power.

2. Mass surveillance hidden between the lines

Several bills are completely silent on biometric surveillance, facial recognition, behavioral tracking, and population scoring, including: