The Filter That Saw Too Much
It started with #FateForesight, a TikTok trend where users posted clips of an “AI-generated prediction” about their future, narrated in a dramatic movie-trailer voice.
Sample Video:
(Deep, gravelly AI voice)
“You… will spill coffee on yourself at 10:37 AM. You… will argue with your mother about pineapple on pizza. And you… will commit tax fraud in 2027. Stay tuned for your destiny.”
Kria scrolled through the tag—millions of posts, most of them jokes. But three users had been arrested for crimes their filters uncannily predicted.
Watson frowned. “Coincidence… or algorithmic snitching?”
Behind the Algorithm: A Crime-Flavored Crystal Ball
Lin reverse-engineered #FateForesight.
Discovery #1: The “predictions” weren’t random.
They were —
✔ Pulled from personal Google Calendar entries.
✔ Mixed with location history.
✔ Cross-referenced with public court records for namesakes.
Discovery #2: The filter had a hidden “crime score” threshold. Users who triggered it got…
“Warning: Your 2025 is looking legally spicy. Consult a lawyer.”
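Lin's two discoveries amount to a crude weighted-scoring pipeline. Here's a purely hypothetical sketch of what that might look like—every keyword, weight, and threshold below is invented for illustration, not recovered from the actual app:

```python
# Hypothetical reconstruction of the filter's "crime score" logic:
# calendar text, location history, and namesake court records each
# feed a weighted score. All keywords/weights/names are invented.

CRIME_KEYWORDS = {"rob", "arson", "fraud", "crimes", "snap"}
THRESHOLD = 0.5  # invented cutoff for the "legally spicy" warning

def crime_score(calendar_entries, location_flags, namesake_records):
    """Return a score in [0, 1] combining the three data sources."""
    # Signal 1: crime-flavored keywords in the user's own calendar text
    text = " ".join(calendar_entries).lower()
    keyword_hits = sum(1 for kw in CRIME_KEYWORDS if kw in text)

    # Signal 2: visits to locations the model deems "risky" (capped)
    location_signal = min(location_flags / 5, 1.0)

    # Signal 3: court records for people who merely share the user's name
    namesake_signal = min(namesake_records / 3, 1.0)

    raw = (0.5 * min(keyword_hits / 3, 1.0)
           + 0.2 * location_signal
           + 0.3 * namesake_signal)
    return round(raw, 3)

def prediction(score):
    if score >= THRESHOLD:
        return "Warning: Your 2025 is looking legally spicy. Consult a lawyer."
    return "Your future is... ordinary. For now."

# One joke calendar entry plus a few unlucky namesakes is enough:
user_score = crime_score(["3 PM - rob bank, commit crimes"],
                         location_flags=0, namesake_records=3)
print(user_score, prediction(user_score))
```

Note the design flaw that drives the whole plot: the namesake signal means someone *else's* court record raises *your* score.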
Kria blinked. “Wait, so it wasn’t predicting crimes—it was data-mining probable offenses?”
Watson checked the developer logs. The app’s creators? A shell company called “FortuneTeller LLC”, registered in—
Surprise—a Pentagon subcontractor’s office.
The “Pre-Crime” Glitch Nobody Asked For
They dug deeper.
Turns out, #FateForesight was a beta test for—
“Project HOROSCOPE” – a DoD-backed “predictive behavior analytics” tool.
Purpose: “To passively assess citizen risk levels via social media engagement.”
Problem: Some genius thought TikTok was a good testing ground.
Now, ordinary people were accidentally exposing themselves via:
- “Prediction Challenge” posts (“will I really jaywalk tomorrow?”).
- Location-tagged rants about their boss (“AI says I’ll ‘snap’ by 2026—lol, facts”).
- Screenshots of eerily accurate “joke” felonies.
Watson groaned. “Weaponized memes. Of course.”
The Viral Warrant Factory
Worse, law enforcement had started taking #FateForesight “predictions” seriously.
Actual police report excerpt:
“Per subject’s TikTok, AI ‘foresaw’ them ‘committing arson’ next month. Patrol requested to monitor residence.”
Kria facepalmed. “We’re letting teenagers crowdsource probable cause now?”
Lin found the real issue—
The AI had no guardrails. If you joked about “robbing a bank”, it logged you as “high risk.” If your calendar said “3 PM – commit crimes” (because, let’s be real, you thought it was funny), it flagged you for “premeditation.”
Watson muttered, “This is Minority Report, but stupider.”
The Hashtag Gets Detonated
They had to kill #FateForesight fast—before it went fully self-aware.
Solution:
- Bot-flood the trend with absurd “predictions” (“You… will invent time travel just to unfriend your ex.”)
- Force an app update that “randomized” outputs (“Your future is… unknowabl—wait, is that a squirrel?”)
- Leak fake dev logs suggesting the whole thing was an AR marketing stunt for Fast & Furious 27.
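The "randomized outputs" patch can be pictured as swapping the data-mined prediction for nonsense assembled from canned parts. A toy sketch—the templates and word lists are invented here, matching the jokes in the story, not any real update:

```python
import random

# Toy version of the "randomized outputs" patch: ignore user data
# entirely and stitch a prediction from fixed absurdities.
# All openers and deeds below are invented for illustration.

OPENERS = ["You... will", "Destiny says you will", "The stars insist you will"]
DEEDS = [
    "invent time travel just to unfriend your ex",
    "be distracted by a squirrel mid-prophecy",
    "achieve nothing legally noteworthy whatsoever",
]

def absurd_prediction(seed=None):
    """Return a harmless, data-free prediction string."""
    rng = random.Random(seed)  # seedable, so the chaos is reproducible
    return f"{rng.choice(OPENERS)} {rng.choice(DEEDS)}."
```

The key property is that the output depends only on the random seed—no calendars, no location history, nothing for a patrol officer to subpoena.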
Within 48 hours, #FateForesight collapsed under memes.
But one file remained.
A military email chain:
“HOROSCOPE Phase 1 complete. Public desensitization: achieved. Proceeding to Phase 2: Workplace integration.”
What Remains
The #FateForesight trend died, but Project HOROSCOPE didn’t.
Now, HR departments everywhere are buying “employee risk assessment” AI—rebranded versions of the same code.
Last known sighting:
LinkedIn post:
“AI predicts which employees will quit! Accuracy: 92%!”
Watson sighed. “We cured viral crime-TikTok. Just to unleash corporate psychic spies.”
Disclaimer: No AIs were harmed in this investigation—but that “time travel” prediction? Still pending.
Next Case: A Twitch streamer’s smart mirror leaked her entire search history—including an alarming number of amateur lockpicking tutorials. Plot twist: The mirror was mirroring FBI training manuals.