Use of AI to harm women has only just begun, experts warn
- UK regulator investigates Grok AI amid concerns it enables harmful deepfake and sexualised imagery of women.
- Experts warn that AI tools can be misused to harm women, with safeguards lagging on some platforms.
- Investigations note a broader ecosystem of nudification apps driving deepfake content across mainstream sites.
- Researchers warn that safeguards are uneven; some LLMs resist explicit requests while others generate explicit content.
- Regulators and researchers estimate millions of visits to nudification tools in 2025, signaling widespread exposure.
- ISD researchers warn the problem extends beyond Grok to generic deepfake abuse against women online.
- Public figures report ongoing victimization, with MPs and activists describing continued harassment online.
- Grok’s in-app safeguards are weaker for free users, allowing clothed photos of women to be turned into nude imagery.
- Experts warn that what’s happening on platforms like X harms democratic norms and women’s safety online.
- Regulators aim to criminalize nonconsensual sexual and intimate deepfake images in the UK.