A Documentary Receipt of Meta Platforms, Inc.
Companion to: The Parable of the Innkeeper Who Watched
What follows is documented fact. It is drawn from Reuters investigative reporting, internal Meta documents leaked to and reviewed by major news organizations, SEC filings, sworn complaints by state Attorneys General, bipartisan U.S. Senate correspondence, the National Center for Missing and Exploited Children, the Wall Street Journal, court filings, Amnesty International, Human Rights Watch, Harvard Business School, and Meta’s own annual reports and internal communications. None of it is contested in its factual basis. Most of it was known inside the company before it was known outside.
What follows is not allegation. It is the innkeeper’s own ledger.
— — —
Contents
Prefatory Note: The Technology Baseline — The detection system that makes every choice below inexcusable
I. How the Inn Actually Works — Surveillance capitalism, anti-formation, and the living model
II. The Foundation — Halloween 2003, Harvard, and the wound that built the inn
III. The Innkeeper Knew He Was Housing Thieves — $16 billion in annual fraud revenue
IV. The Innkeeper Built the Wall and Then Took It Down — Election safety systems built and dismantled
V. The Soldiers Without Uniforms — Russian election interference across three democracies
VI. Enabling Genocide — Myanmar, Ethiopia, and the amplification of hatred unto death
VII. Your Children Hunted — 500,000 children per day — the full documented record
VIII. Making Your Daughter Hate Herself — The algorithm designed to diminish
IX. You Becoming Your Politics — Cambridge Analytica, 87 million profiles, and anti-formation
X. Enemies Buying Your Government — Foreign state actors and the open advertising window
XI. Profiting From Every Scam — Financial fraud at industrial scale
XII. Hate. Deliberately Unleashed. — The January 7, 2025 rollback and its consequences
XIII. The Poison on the Open Shelves — Drug trafficking and the Senate indictment
XIV. The Speeches in the Public Square — What Meta says while the scrolls sit in the drawer
XV. The Weight of the Ledger — Market cap, net worth, and the arithmetic of moral choice
Prefatory Note: The Technology Baseline
The detection system that makes every choice below inexcusable
Before examining any specific harm, one fact must be established — because it transforms every subsequent entry in this document from a story about negligence into a story about choice.
Meta’s Rights Manager system, built beginning in 2014 and expanded continuously since, detects copyrighted music in uploaded videos in seconds — automatically, before the video finishes uploading, without human review. It identifies audio clips as short as three seconds. It notifies rights holders. It provides uploader contact information for potential legal prosecution. It runs on millions of videos daily without interruption.
The same underlying AI architecture — when deployed — removes 159 million scam advertisements in a single year, 92% of them before a single user reports them. It detects twice as much adult sexual solicitation content as human review teams, with a 60% reduction in error rate. It can identify children at risk in real time.
This technology was not unavailable when the harms below occurred. It was not experimental. It was not cost-prohibitive for a company with $77 billion in cash on hand.
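Detection of this kind is, in general terms, fingerprint matching: reduce audio to a compact feature sequence, index short windows of it, and check every upload against the index. The toy sketch below illustrates only the general technique; every function name, number, and data value in it is an illustrative assumption, not Meta's implementation.

```python
# Toy sketch of fingerprint-style audio matching, the general technique behind
# content-identification systems. Illustrative only -- not Meta's algorithm.

def fingerprints(features, window=3):
    """Index every `window`-length run of coarse audio features by its hash."""
    return {hash(tuple(features[i:i + window])): i
            for i in range(len(features) - window + 1)}

def find_match(reference, upload, window=3):
    """Return (ref_offset, upload_offset) of the first shared window, or None."""
    index = fingerprints(reference, window)
    for j in range(len(upload) - window + 1):
        h = hash(tuple(upload[j:j + window]))
        if h in index:
            return index[h], j
    return None

# A copyrighted track reduced to one coarse feature per second, and an upload
# that embeds a three-"second" excerpt of it.
track  = [4, 8, 15, 16, 23, 42, 7, 9]
upload = [1, 1, 16, 23, 42, 5, 5]

print(find_match(track, upload))  # → (3, 2): excerpt found at second 3 of the track
```

Because the index lookup is a constant-time hash check per window, this scales to millions of uploads without human review, which is the point: the marginal cost of scanning one more video is negligible.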
3 seconds minimum audio clip detected by Rights Manager
Operational since 2014. Runs automatically without human review.
159 million scam ads removed in 2025
92% removed before any user reported them.
2x more effective than human review at detecting child exploitation content
Meta’s own AI capability, documented internally.
The technology is not the limitation. The will is. Every section that follows must be read with that fact as its foundation.
SOURCE: Meta Rights Manager documentation; Reuters leaked internal documents; Meta 2025 transparency report
— — —
I. How the Inn Actually Works
Surveillance capitalism, anti-formation, and the living model
The Economic System
In 2014, Harvard Business School professor Shoshana Zuboff coined the term surveillance capitalism to describe a new economic order in which human experience is claimed as free raw material for hidden commercial practices of extraction, prediction, and sales. She subsequently identified a more advanced phase — instrumentarianism — in which the goal is no longer simply to predict human behavior but to shape it at scale.
Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data. Although some of these data are applied to product or service improvement, the rest are declared as a proprietary behavioral surplus, fed into advanced manufacturing processes known as machine intelligence, and fabricated into prediction products that anticipate what you will do now, soon, and later.
SOURCE: Shoshana Zuboff, The Age of Surveillance Capitalism (2019), Harvard Business School
What Meta Collects
Meta does not only watch what you do on Facebook. Meta watches what you do everywhere. Every click, pause, deleted draft, and video watched for three seconds before skipping is recorded. The Meta Pixel — a tracking code embedded in approximately 30% of the world’s most-visited websites — reports every visit back to Meta whether or not the user is logged in.
Meta also captures precise GPS location, home network identity (and likely who else lives there), installed apps, internet speed, device settings, Bluetooth signals, and cell tower connections. In 2024, Meta informed users in the EU and UK that all public posts, images, captions, stories, and comments would be used to train its Large Language Models.
~30% of the world’s most-visited websites carry the Meta Pixel
Tracking occurs whether or not the user is logged into Facebook
5,000 data points per American voter assembled by Cambridge Analytica from stolen Meta data
Now considered a primitive early snapshot of Meta’s actual native capability
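The Pixel’s mechanism is worth seeing concretely. A page that embeds the tracker causes the visitor’s browser to request a tiny image from the tracker’s server, and that single request carries enough to tie the visit to a cross-site profile. The sketch below is a generic illustration of pixel-style tracking; the field and cookie names are assumptions for illustration, not Meta’s actual schema.

```python
# Generic sketch of what a tracking-pixel endpoint can log from one image
# request. Field names are illustrative assumptions, not Meta's schema.

def log_pixel_hit(headers: dict) -> dict:
    """Record what a single pixel request reveals about the visitor."""
    return {
        "page_visited": headers.get("Referer"),   # the embedding third-party site
        "visitor_id": headers.get("Cookie"),      # cross-site identifier
        "ip": headers.get("X-Forwarded-For"),
        "user_agent": headers.get("User-Agent"),
    }

# The browser fires this request automatically whenever a page embedding the
# pixel loads -- no login to the tracker's own site is required.
hit = log_pixel_hit({
    "Referer": "https://example-news-site.com/article",
    "Cookie": "tracker_uid=abc123",
    "User-Agent": "Mozilla/5.0",
})
print(hit["page_visited"])  # → https://example-news-site.com/article
```

This is why being logged out is no protection: the identifying cookie and the page address travel with the image request itself.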
What Meta Builds — The Living Model
Cambridge Analytica’s 5,000-data-point profile was a static snapshot assembled by a third party from stolen data. What Meta itself now holds is categorically different: a living psychological model, continuously updated by AI, operating across tens of thousands of attributes, recalibrated every time you interact with any corner of the digital world Meta can see.
The model does not ask who you are. It asks what you will do next — and what stimulus will produce the behavior an advertiser, political operative, or foreign government wants. Meta’s current Advantage+ system uses machine learning to continuously update behavioral predictions for each user, replacing static attribute lists with dynamic predictive models.
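The difference between a static attribute list and a dynamic predictive model can be made concrete with a minimal sketch: an online classifier that adjusts its engagement prediction after every observed interaction. This is a toy illustration of the general technique, not Meta’s system; the features, learning rate, and class name are all assumptions.

```python
# Minimal sketch of a "living" behavioral model: online logistic regression
# whose click-probability estimate is revised after every interaction,
# rather than read from a static attribute list. Illustrative only.
import math

class OnlinePredictor:
    def __init__(self, n_features, lr=0.5):
        self.w = [0.0] * n_features  # starts knowing nothing about the user
        self.lr = lr

    def predict(self, x):
        z = sum(wi * xi for wi, xi in zip(self.w, x))
        return 1 / (1 + math.exp(-z))  # probability the user engages

    def update(self, x, engaged):
        """One stochastic-gradient step after each observed interaction."""
        err = self.predict(x) - (1.0 if engaged else 0.0)
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]

# Hypothetical features: [saw_outrage_content, saw_neutral_content]
model = OnlinePredictor(n_features=2)
for _ in range(50):
    model.update([1, 0], engaged=True)    # user reliably reacts to outrage
    model.update([0, 1], engaged=False)   # and scrolls past neutral posts

print(model.predict([1, 0]) > 0.9)  # the model has learned what moves this user
```

After fifty observed interactions the model assigns outrage content a high engagement probability for this user and neutral content a low one, with no human ever writing down either fact.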
What the Model Does to You — Anti-Formation
The model shapes what you see every time you open the app. The criterion is not truth, accuracy, or your wellbeing. It is engagement — what makes you react, click, share, and stay. And the content that most reliably drives engagement is fear, outrage, and tribal certainty.
This is the mechanism researchers describe as algorithmic radicalization: the systematic movement of users toward more extreme versions of their existing views through engagement-maximizing content delivery. It is not an unintended side effect. It is the predictable consequence of optimizing for time-on-platform.
Theologically, this is anti-formation — the precise inversion of spiritual formation. Where spiritual formation works slowly, with consent, in relationship, toward a person becoming more fully themselves, algorithmic anti-formation works continuously, without consent, in isolation, toward a person becoming more reactive, more tribal, more afraid, and less capable of recognizing their neighbor.
You did not choose to become more politically extreme. The model found the doors those changes could walk through — and held them open. Your neighbor had the same thing happen to them, on the same platform, being moved in the opposite direction. One day you looked across the table and could not recognize each other. That is not a metaphor. It is the documented mechanism.
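The selection pressure described above can be shown in a toy simulation: if a ranker scores items only by predicted engagement, and outrage reliably out-engages everything else, outrage fills the top of every feed without anyone intending that outcome. All numbers below are invented for illustration.

```python
# Toy simulation of engagement-ranked delivery. The engagement rates are
# invented for illustration; the ranking rule is the point.

# Each item: (label, average engagement it produces per impression)
inventory = [
    ("local news",        0.02),
    ("family update",     0.03),
    ("tribal outrage",    0.11),
    ("fear-driven rumor", 0.09),
    ("hobby post",        0.04),
]

def rank_feed(items):
    """Rank purely by predicted engagement -- no truth or wellbeing term."""
    return sorted(items, key=lambda item: item[1], reverse=True)

feed = rank_feed(inventory)
print([label for label, _ in feed[:2]])  # → ['tribal outrage', 'fear-driven rumor']
```

Note that nothing in `rank_feed` mentions outrage. The objective function does the selecting on its own.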
SOURCE: Shoshana Zuboff, The Age of Surveillance Capitalism (2019); Meta Advantage+ documentation; peer-reviewed research on algorithmic radicalization, 2016–2024
— — —
II. The Foundation
Halloween 2003, Harvard, and the wound that built the inn
The Night It Began
On October 31, 2003, Mark Zuckerberg had just been rejected by a young woman. He documented it in his blog with the words: “Jessica A— is a bitch. I need to think of something to take my mind off her.” He began drinking. Within hours, he had hacked into Harvard’s online student directories, downloaded ID photos of female undergraduates without their knowledge or consent, and launched a website called Facemash.
Facemash placed two women’s faces side by side and invited users to vote which was more attractive. Critically, he did not target the woman who had hurt him. He turned the weapon on all of them. The homepage read:
“Were we let in for our looks? No. Will we be judged on them? Yes.”
In one entry of his live blog, he wrote:
“The Kirkland facebook is open on my computer desktop and some of these people have pretty horrendous facebook pics. I almost want to put some of these faces next to pictures of farm animals and have people vote on which is more attractive.”
450 visitors to Facemash before shutdown
22,000 votes cast rating female classmates’ appearance
Multiple campus groups expressed outrage
Including Fuerza Latina and the Association of Black Harvard Women
The School’s Judgment
Harvard punished Zuckerberg for breaching security, violating copyrights, and violating individual privacy. He came close to expulsion. In his apology, he wrote:
“I understood that some parts were still a little sketchy… The primary concern is hurting people’s feelings.”
Note what is absent: any acknowledgment that the women whose images were taken had a right to consent. The architecture — using people’s faces without permission as raw material for the judgment of others — was not reconsidered.
What He Learned and Did Not Unlearn
In January 2004, Zuckerberg began building TheFacebook, directly inspired by a Harvard Crimson editorial about Facemash. He told the Crimson he could build it better and faster than the university — and do it within a week.
When asked about Facemash during a 2018 Congressional hearing, Zuckerberg testified: “The claim that Facemash was somehow connected to the development of Facebook — it isn’t, it wasn’t. It actually has nothing to do with Facebook.”
The receipts suggest otherwise. The algorithm that ranked women’s faces became the algorithm that ranked all human content for engagement. The insight that people will come in great numbers to evaluate others — that judgment itself is the product — did not disappear. It scaled to three billion users.
The Irony the Receipts Contain
After the administrative hearing that followed Facemash, Zuckerberg’s friends threw him a going-away party, believing he would be expelled. At that party, he met the woman he would eventually marry and has described as the most important person in his life. The wound of one rejection, displaced onto hundreds of innocent women, led him through chaos to the person he calls his anchor. The inn was built on a wound that healed. The wounds it inflicted on millions of others did not.
The Seed and the Tree
Twenty years after Facemash, Meta’s own Instagram algorithms were found to be actively guiding adults toward sellers of child sexual abuse material. The National Center for Missing and Exploited Children reported that an estimated 100,000 children are sexually harassed on Meta’s platforms every single day.
The foundation was laid on the night of Halloween, 2003, in a dormitory room, by a young man who stole women’s faces and invited the world to rank them. Everything else is what grew from that ground.
SOURCE: Zuckerberg blog posts, October 2003; Harvard Crimson, 2003–2004; U.S. Congressional testimony, 2018; Wall Street Journal investigation, 2021
— — —
III. The Innkeeper Knew He Was Housing Thieves
$16 billion in annual fraud revenue
Meta’s fraud problem is not a bug. It is a documented, calculated, budgeted business decision — made with full knowledge of the harm caused and full awareness of the revenue at stake.
$16 billion estimated annual Meta revenue from scam/fraud/banned product ads
Approximately 10% of total annual revenue — leaked internal documents, reported by Reuters
95% certainty of fraud Meta’s stated threshold for removing a fraudulent advertiser
Below this threshold, Meta charged higher rates to ‘deter’ offenders while continuing to profit
$135 million Meta’s budget for addressing the $16B fraud problem
About 0.8% of the problem — roughly 0.8 cents per dollar of fraud revenue
15 billion high-risk scam ads served by Meta daily
Meta’s own internal figures
1 in 3 U.S. scams involves Meta platforms
Meta’s own safety staff internal estimate
$214 million seized by FBI from one Chinese-based Meta fraud scheme
Federal prosecutors, Illinois; victims directed via Facebook and Instagram ads to fake WhatsApp investment groups
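The budget arithmetic in the figures above, taking the Reuters-reported numbers as stated, is worth checking directly:

```python
# Check of the budget figures above, using the Reuters-reported numbers.
fraud_revenue = 16_000_000_000    # estimated annual revenue from scam/fraud ads
remediation_budget = 135_000_000  # Meta's budget for addressing the problem

share = remediation_budget / fraud_revenue
print(f"{share:.2%} of fraud revenue")        # → 0.84% of fraud revenue
print(f"{share * 100:.1f} cents per dollar")  # → 0.8 cents per dollar
```

For every dollar the fraud allegedly brought in, less than a penny was budgeted to stop it.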
The Rerouting of Fraud
When Meta blocked scam ads in one jurisdiction, its own internal documents show the revenue was not lost — it was redistributed and rerouted to remaining target countries. The documents acknowledged: “This would go for harm as well.” Meta was not passively tolerating fraud. It was administratively managing and rerouting it.
The China Operation
Meta internally designated China as its top ‘scam exporting nation,’ accounting for 25% of all Meta scam and banned-product ads globally. Meta’s own platforms are blocked in China — meaning Chinese advertisers were paying Meta to defraud users in other countries. In late 2024, Meta reinstated 4,000 Chinese advertising agencies previously suspended for policy violations, unlocking $240 million in annualized revenue. Approximately half of that revenue was tied to ads violating Meta’s own safety policies. Meta simultaneously disbanded its China-focused anti-scam team.
An external audit commissioned by Meta itself concluded: “Meta’s own behavior and policies were promoting systemic corruption in China’s advertising ecosystem.” Meta largely ignored the findings.
SOURCE: Reuters investigative reporting (leaked Meta internal documents); Federal prosecutors, N.D. Illinois; Meta internal documents on Chinese advertiser reinstatement, 2024