Meta's AI tools may be catching more noise than danger, according to US child abuse investigators. Agents with the Justice Department–backed Internet Crimes Against Children (ICAC) taskforce say Meta's automated tools are flooding them with "junk" reports about supposed child exploitation on Facebook, Instagram, and WhatsApp—tips they often can't use to build cases. One New Mexico agent testified last week in the state's trial against Meta that the number of cyber tips his department receives from Meta doubled from 2024 to 2025, but many lacked crucial images, videos, or text, or didn't describe a crime at all.
The surge appears tied to a November 2024 law that expanded what platforms must report, and officers suspect Meta's AI is over-flagging to avoid penalties, the Guardian reports. "We are drowning in tips," one investigator testified, adding that the deluge is straining staff and slowing real cases.
ICAC taskforce special agent Benjamin Zwiebel testified that he suspects AI is involved, as he sees "common mistakes" that a human wouldn't make; he gave the example of a tip that flagged adolescent girls having a conversation about the cutest celebrities. Meta counters that it has long aided law enforcement and that the DOJ and National Center for Missing & Exploited Children have praised its cooperation and reporting system.