Protect Our Children

Lynn’s Warriors remains deeply skeptical of Meta’s latest so-called safety tool announcement. Instagram is unveiling this feature while the company faces legal action in multiple states over allegations that its platforms addict and harm children. Rolling out a new feature amid mounting legal and public scrutiny looks less like reform and more like reputation management. Meta has a long history of announcing safety upgrades that fail to work as promised, particularly tools built around keyword detection, which are easily bypassed and ineffective.
This new feature also fails to address the core problem in Meta’s approach to suicide and self-harm content. When a teen searches for suicide-related material, that query should immediately trigger direct access to crisis resources and protective guardrails. Instead, Meta’s systems continue to surface harmful content and, even worse, recommend additional self-harm material. Meta’s own research shows that 8.4 percent of teens report being recommended self-harm content in a single week, much of it from strangers. That is not a safety system. That is algorithmic amplification.
Once again, the burden is shifted onto parents rather than fixing the dangerous flaws embedded in platform design. All children deserve protection, regardless of whether a parent activates supervision settings. If a product is not safe for teens without constant parental intervention, it should not be marketed to them.
Parents should not mistake this announcement for meaningful reform. Lawmakers should not treat public relations maneuvers as accountability. The Kids Online Safety Act, now with 76 Senate sponsors, would impose a clear duty of care and require companies like Meta to address algorithmic harms and platform design failures. It is time for Congress to pass the Senate version of KOSA and put child safety ahead of corporate optics.
