On October 7, a TikTok account named @fujitiva48 posed a provocative question alongside its latest video. “What are your thoughts on this new toy for little kids?” it asked the more than 2,000 viewers who had stumbled upon what appeared to be a TV commercial parody. The response was clear. “Hey so this isn’t funny,” wrote one person. “Whoever made this should be investigated.”
It’s easy to see why the video elicited such a strong reaction. The fake commercial opens with a photorealistic young girl holding a toy—pink, sparkling, a bumblebee adorning the handle. It’s a pen, we are told, as the girl and two others scribble away on paper while an adult male voiceover narrates. But the object’s floral design, its ability to buzz, and its name—the Vibro Rose—make it look and sound very much like a sex toy. An “add yours” button—the TikTok feature that encourages people to share a video on their own feeds—bearing the words “I’m using my rose toy” removes even the smallest sliver of doubt. (WIRED reached out to the @fujitiva48 account for comment but received no response.)
The unsavory clip was created using Sora 2, OpenAI’s latest video generator, which was initially released by invitation only in the US on September 30. Within the span of just one week, videos like the Vibro Rose clip had migrated from Sora onto TikTok’s For You page. Some of the fake ads were even more explicit: WIRED found several accounts posting similar Sora 2-generated videos featuring rose- or mushroom-shaped water toys and cake decorators that squirted “sticky milk,” “white foam,” or “goo” onto lifelike images of children.
In many countries, such videos would be grounds for investigation if they depicted real children rather than digital amalgamations. But the laws on AI-generated fetish content involving minors remain blurry. New 2025 data from the Internet Watch Foundation (IWF) in the UK shows that reports of AI-generated child sexual abuse material, or CSAM, have more than doubled in a year, from 199 between January and October 2024 to 426 over the same period of 2025. Fifty-six percent of this content falls into Category A—the UK’s most serious category, covering penetrative sexual activity, sexual activity with an animal, or sadism—and 94 percent of the illegal AI images tracked by the IWF were of girls. (Sora does not appear to be generating any Category A content.)
“Often, we see real children’s likenesses being commodified to create nude or sexual imagery and, overwhelmingly, we see AI being used to create imagery of girls. It is yet another way girls are targeted online,” Kerry Smith, chief executive officer of the IWF, tells WIRED.
This influx of harmful AI-generated material has prompted the UK to introduce a new amendment to its Crime and Policing Bill, which would allow “authorized testers” to check that artificial intelligence tools are not capable of generating CSAM. As the BBC has reported, the amendment would ensure models have safeguards around specific kinds of imagery, in particular extreme pornography and non-consensual intimate images. In the US, 45 states have passed laws criminalizing AI-generated CSAM, most within the last two years, as AI generators continue to evolve.