YOU WILL BE FOUND
Israel’s lobby has used Silicon Valley’s HR system to blacklist, defund, and fire anyone who disagrees, and they’ve openly boasted about it for years.
Email subscribers and readers: If you’re reading this via email, please click the title banner to read the full version online. This version may be truncated in your email. Clicking the Truth Decay banner will display the full article and always give you the most up-to-date version, as I sometimes make edits after publication.
Here’s something that doesn’t happen often in political persecution: the persecutors tell you in advance exactly what they’re going to do to you.
They don’t whisper it. They don’t bury it in leaked memos or deny it under oath. They say it at conferences, in policy papers, in ministerial briefings, and occasionally in the kind of triumphant press releases that people who are very confident they won’t face consequences tend to issue. They tell journalists. They tell each other. They post it on websites. The plan — to systematically identify, financially destroy, and professionally eliminate people who criticise Israel or support Palestinian rights — has been articulated with a clarity that most authoritarian projects only achieve retrospectively, when historians are reconstructing the architecture from the rubble.
And yet here we are.
The infrastructure is built. It’s running. It processes applications, flags social media accounts, issues risk scores, terminates payment accounts, and quietly poisons the professional futures of academics, journalists, aid workers, students, and anyone else who happened to say the wrong thing at the wrong time — usually something that would have been considered mainstream humanitarian opinion before October 2023 made “civilians shouldn’t be bombed” a career-ending position in certain industries.
This is not a conspiracy theory. The people running this machinery have given it names, published its objectives, and in some cases accepted awards for it. What they haven’t done — what no one has done — is explain exactly how the commercial infrastructure of modern hiring and payment processing makes it not just possible but easy. No hacking required. No back-channel required. Just a database, some business development meetings, and the quiet cooperation of an HR software industry that has built a persecution machine by accident, for profit, and with no meaningful regulatory oversight.
Let’s walk through it.
THE STATED AGENDA: They Said the Quiet Part Into A Microphone
Before we get to the technology, we need to establish something that tends to get lost in arguments about whether lobby group pressure campaigns constitute coordinated persecution: the people running this operation have said, directly and on the record, that employment destruction is the objective.
Canary Mission is the starting point, because it is the most mechanically transparent. It is a website — anonymously funded, with documented links to the Israeli Ministry of Strategic Affairs — that publishes detailed profiles of students, academics, journalists, and activists who have criticised Israel or supported Palestinian rights. The site includes photographs, university affiliations, employer names, social media handles, and curated excerpts of statements and posts. Its stated mission, published on the site itself, is to ensure that targets “will not be able to find employment in their field.” That is not a paraphrase. That is the goal, stated as the goal, by the organisation pursuing it.
The site has profiled thousands of individuals. A significant portion are college students who signed open letters or attended campus demonstrations. A meaningful number of those profiles now appear prominently in Google search results for those individuals’ names — which means they appear in the first screen of results that any recruiter, HR professional, or hiring manager will see when they do the routine due diligence that now precedes virtually every job offer.
Sima Vaknin-Gil, former Director-General of Israel’s Ministry of Strategic Affairs, gave the strategic framework a name and a doctrine. In a 2017 speech that received remarkably little coverage outside pro-Palestinian media, she described a global, technology-assisted infrastructure — she used the word “system” — designed to monitor critics of Israel worldwide and use civilian economic systems as instruments of pressure. She framed it explicitly in the language of warfare: identifying adversaries, mapping their financial relationships, and targeting their economic stability. The Ministry of Strategic Affairs under her tenure coordinated with private organisations internationally to deliver what she described as “civilian” counter-measures — which, when translated out of policy language, means: we find people who criticise Israel, we find where they work or want to work, and we make their professional lives difficult.
StopAntisemitism.org is the retail-facing version of this infrastructure. It publishes “call-outs” — posts featuring a photograph of a named individual alongside their employer’s name and contact information, framing the post as alerting the employer to the presence of an antisemite in their workforce. The evidence threshold for this categorisation is, to put it politely, flexible. Multiple people have lost jobs within 48 hours of being featured. The organisation tracks these outcomes and, periodically, celebrates them.
The Anti-Defamation League operates at a more institutionally credible level, which makes it more dangerous. The ADL has a documented history — through its own archived materials and through investigative reporting by journalists, including those at The Intercept — of providing “background information” to employers about activists, characterising political expression as evidence of extremism, and functioning as what one former employee described as a soft-veto mechanism in corporate hiring environments. An ADL characterisation of an individual as antisemitic — regardless of whether the underlying speech or activity bears any reasonable relationship to antisemitism — has demonstrably ended careers in media, academia, and the non-profit sector.
The Israel on Campus Coalition specifically targets university environments, coordinating with Hillel chapters and providing intelligence about campus activists to “stakeholders” — a category that includes donors who fund universities and, by extension, employ their graduates.
This is the stated agenda. These are not fringe operations. Several receive funding from recognised US charitable foundations. Some are registered non-profits. The Ministry of Strategic Affairs is a government department. The infrastructure is not hidden.
Now let’s talk about how it connects to the software running your HR department.
HOW AUTOMATED HIRING ACTUALLY WORKS: A Brief Tour of the Machine
If you have applied for a job at any large organisation in the last decade, the first human being to see your application almost certainly wasn’t the first entity to evaluate it. Before any person read your CV, an algorithm did. And the algorithm was configured by someone who may have had instructions you never saw, operating on criteria you were never told about, producing a score you were never shown, attached to reasons you have no legal right to access in any meaningful form.
This is not speculation. This is the normal functioning of modern recruitment technology, used by over 98% of Fortune 500 companies, the vast majority of large UK employers, most Australian corporate and government sector employers, and every significant financial institution in North America and Europe.
The core technology is the Applicant Tracking System (ATS). The dominant players are Workday, Greenhouse, Lever, iCIMS, and SAP SuccessFactors. These systems do several things that matter for this story:
• They parse CVs and LinkedIn profiles automatically, extracting named entities: employers, institutions, locations, and, increasingly, associations.
• They score candidates against keyword profiles configured by the client employer.
• They log candidate social media handles for downstream screening.
• They can be configured with custom exclusion flags by HR administrators: flags that produce no visible rejection notice, no explanation, and no candidate-facing record.
• Critically, the exclusion logic is invisible to applicants and is not subject to disclosure requirements in most jurisdictions. A candidate rejected by an automated filter has no right to know why in the United Kingdom, Australia, Canada, or the United States. The EU’s GDPR creates a theoretical right to explanation for automated decisions, but its practical enforcement in hiring contexts has been negligible.
To the candidate, it looks like: no response.
To the employer: they acted on an independent, risk-based assessment of applicant suitability.
To the data feed that triggered the flag: successful outcome.
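To make the mechanics concrete, here is a minimal sketch of the pipeline just described: parse, score, and silently exclude. Everything below is illustrative; the function names, fields, and flag format are hypothetical, not drawn from any actual ATS vendor.

```python
# Hypothetical sketch of an ATS screening pass: parse, keyword-score,
# then apply client-configured exclusion flags. No vendor's real schema.

def parse_candidate(raw):
    """Extract the entities an ATS typically pulls from a CV or profile."""
    return {
        "name": raw.get("name", ""),
        "employers": raw.get("employers", []),
        "social_handles": raw.get("social_handles", []),
        "cv_text": raw.get("cv_text", ""),
    }

def score_keywords(cv_text, keyword_weights):
    """Score against a keyword profile configured by the client employer."""
    text = cv_text.lower()
    return sum(weight for kw, weight in keyword_weights.items() if kw in text)

def screen(candidate, exclusion_flags):
    """Apply custom exclusion flags. The reason is recorded internally;
    nothing is ever surfaced to the candidate."""
    for handle in candidate["social_handles"]:
        if handle in exclusion_flags:
            return False, "excluded: handle %s matched flag list" % handle
    return True, None

candidate = parse_candidate({
    "name": "A. Example",
    "social_handles": ["@a_example"],
    "cv_text": "Ten years of data engineering experience.",
})
score = score_keywords(candidate["cv_text"], {"data engineering": 5})
passed, internal_reason = screen(candidate, exclusion_flags={"@a_example"})
print(score, passed)  # 5 False
```

The candidate in this sketch scores well on keywords and is still dropped; from their side, the application simply disappears, which is exactly the asymmetry described above.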
SOCIAL MEDIA SCREENING: The Surveillance Product That HR Bought Willingly
This is where political persecution becomes technically feasible at industrial scale, and where the connection to the lobby infrastructure becomes mechanically explicit.
A market of commercial social media screening tools has emerged over the last decade, selling to HR departments as a “risk management” product. The major players include Fama Technologies, Social Intelligence, Ferretly, and Aware. Their product is simple: you give them a candidate’s name and social media handles, they scan everything public across platforms, identify posts, associations, and engagement histories that match configurable “risk categories,” and return a report telling the employer whether this person is safe to hire.
The risk categories are configurable by the client. Standard categories include: violence, hate speech, drug references, sexual content, and — here is the relevant entry — “political extremism.”
Who defines what falls within “political extremism” is the client. There is no regulatory definition. There is no appeals process. There is no requirement to disclose to the candidate that the screening occurred, what data was captured, or what determination was made. In most jurisdictions, if the screening vendor is a subcontractor to the employer rather than the employer themselves, even the GDPR’s data subject access rights become difficult to enforce practically.
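A hedged sketch of what a configurable risk-category scan looks like in practice. The category names and trigger terms below are invented for illustration; the point is that “political extremism” is just another client-editable dictionary entry.

```python
# Illustrative risk-category scanner. Categories and terms are entirely
# client-defined here, mirroring the configurability described above.

RISK_CATEGORIES = {
    "violence": ["threat", "attack"],
    # A client-configured entry; no regulator defines its contents.
    "political_extremism": ["boycott", "divestment"],
}

def scan_posts(posts, categories=RISK_CATEGORIES):
    """Return a hit for each (post, category) match across public posts."""
    hits = []
    for post in posts:
        text = post.lower()
        for category, terms in categories.items():
            if any(term in text for term in terms):
                hits.append({"category": category, "excerpt": post[:60]})
    return hits

hits = scan_posts([
    "Great weekend hiking with friends",
    "Proud to support the campus boycott campaign",
])
print([h["category"] for h in hits])  # ['political_extremism']
```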
Now add the following capability, which is standard in the background check data industry: third-party data feed integration. Background check and screening platforms routinely ingest supplementary data from external risk intelligence providers. This is commercially normal. Banks use it for AML compliance. Corporates use it for vendor due diligence. Insurance firms use it for underwriting.
The transformation of a Canary Mission database entry into an HR screening input is not a cyberattack. It is a business development conversation. All that is required is for a lobby-connected organisation to approach a screening vendor with a database categorised as “activist risk intelligence” — a category that already exists in the commercial intelligence market — and negotiate a data licensing deal. The screening vendor’s clients then receive, as a standard product feature, a flag indicating that a candidate has been identified by a named risk intelligence source. The employer sees: candidate identified in adverse intelligence database. They do not need to know what the database is, who funds it, or what the evidentiary standard was for the entry.
This is not hypothetical infrastructure. The commercial relationships between political opposition research and HR screening platforms are documented in the United States. Fama Technologies has publicly described its capability to incorporate “third-party watchlist data.” HireRight, one of the largest background check firms globally, with operations across North America, the UK, and Australia, explicitly markets “adverse media” scanning and “global watchlist” checking as standard product tiers.
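Technically, the licensing arrangement described above reduces to a join: the vendor ingests an external list and matches candidate names against it, passing only an opaque source label through to the employer. The feed name and entries below are invented for illustration.

```python
# Hypothetical third-party feed ingestion: the screening vendor licenses
# an external "risk intelligence" database and joins on candidate name.
# The employer never sees the feed's contents, funding, or standards.

LICENSED_FEEDS = {
    "ExternalRiskFeedA": {"jane doe", "john roe"},  # opaque licensed data
}

def watchlist_check(candidate_name, feeds=LICENSED_FEEDS):
    """Return the source labels of any feeds that list this name."""
    name = candidate_name.strip().lower()
    return [label for label, names in feeds.items() if name in names]

matches = watchlist_check("Jane Doe")
if matches:
    # All the employer's report says:
    print("candidate identified in adverse intelligence database:", matches)
```

Note the design choice this inherits from commercial risk feeds: matching is on name, so the employer's report carries neither the evidence behind the entry nor any disambiguation between people who share a name.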
THE ADVERSE MEDIA PIPELINE: How A Tweet Becomes A Permanent Employment Obstacle
The adverse media component of background screening deserves specific attention because it is the most mechanically direct connection between lobby group content production and employment outcomes.
Adverse media scanning is an automated search product: background check firms run a candidate’s name through media monitoring systems, flagging any news coverage associated with configurable risk categories. The categories are variations on: crime, fraud, regulatory action, controversy, and extremism. The threshold for a “hit” in the “controversy” category is: the candidate’s name appears in a news article that also contains language associated with controversy.
A StopAntisemitism.org post is sufficient to generate an adverse media hit. The post doesn’t need to be accurate. It doesn’t need to have survived a complaint. It doesn’t need to reflect anything other than the organisation’s decision to feature the person. Once it exists and is indexed, it is permanently available to adverse media scanners. A candidate featured in 2021 for signing an open letter will still generate an adverse media hit in 2026, in 2030, indefinitely — unless they can successfully navigate a defamation process against a well-funded anonymous organisation in a US jurisdiction where political speech protections make defamation claims extremely difficult.
The employer receives a clean, professional report: “Adverse media identified: candidate associated with controversy regarding [topic area].” The topic area might read: “political controversy / Middle East conflict.” The employer does not know whether the underlying content is accurate. Many employers, particularly in financial services, defence contracting, and government-adjacent sectors, operate a zero-adverse-media policy for senior hires: any hit, regardless of context, is a disqualifier. This is documented HR practice. It is the path of least institutional risk for them.
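The matching logic behind an adverse media hit is, at its core, co-occurrence: the candidate’s name and controversy-associated language in the same indexed article, with no accuracy check and no expiry. A minimal sketch, with invented terms and an invented archive:

```python
# Sketch of adverse-media co-occurrence matching, plus the documented
# zero-adverse-media policy some sectors apply. Terms are illustrative.

CONTROVERSY_TERMS = {"controversy", "antisemit", "extremis"}

def adverse_media_hits(name, indexed_articles):
    """A hit is the name plus controversy language in one indexed article."""
    name = name.lower()
    hits = []
    for article in indexed_articles:
        text = article["text"].lower()
        if name in text and any(t in text for t in CONTROVERSY_TERMS):
            hits.append({"year": article["year"],
                         "topic": "political controversy"})
    return hits

def zero_adverse_media_policy(hits):
    """Any hit, regardless of context or age, disqualifies."""
    return "disqualified" if hits else "clear"

# The index has no expiry: a 2021 post fires identically in any later year.
archive = [{"year": 2021,
            "text": "Jane Doe named in campus controversy over open letter"}]
print(zero_adverse_media_policy(adverse_media_hits("Jane Doe", archive)))
# disqualified
```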
NORTH AMERICA: The Laboratory Where The Template Was Built
The United States is where this infrastructure was designed, funded, and stress-tested, and where the most documented cases of employment impact exist.
The Steven Salaita case is the foundational precedent. In 2014, Salaita — a tenured professor who had accepted a position at the University of Illinois — was fired before he began work, after his critical tweets about Israel’s military operation in Gaza were flagged to university donors. The chancellor, Phyllis Wise, declined to forward his appointment to the board of trustees, effectively rescinding the hire. The decision came after pressure from donors whose financial leverage over the institution was direct and explicit. Salaita subsequently sued, settled, and has been functionally unable to find a tenured academic position in the United States since. The mechanism: donor pressure → institutional decision → professional destruction, with no algorithmic layer required, because in 2014 the algorithmic layer didn’t yet exist at scale.
By 2023 and 2024, it did.
Following the October 7 Hamas attack and Israel’s subsequent military campaign in Gaza, the employment consequences for people expressing pro-Palestinian views in North America escalated from a pattern into a documented mass phenomenon.
Harvard, Columbia, NYU, and Penn all faced donor pressure regarding students and faculty who signed open letters or led campus organisations. The Harvard Corporation faced calls from major donors — some of whom sent letters that were subsequently reported by the New York Times and Washington Post — demanding that student signatories of a statement holding “the Israeli regime entirely responsible” be identified and their names shared with employers. Several law firms and financial institutions subsequently announced they would not hire Harvard students who had signed the letter. Davis Polk & Wardwell and Winston & Strawn rescinded offers or made public statements that pro-Palestinian signatories would not be considered for positions.
This was not backroom pressure. The managing partner of Winston & Strawn sent a firm-wide communication. It was reported. The named students had no recourse. There is no US federal employment protection for political belief.
The doxxing trucks that circulated around Harvard and Columbia campuses in October and November 2023, displaying the photographs and names of pro-Palestinian students above text calling them antisemites, were funded by Accuracy in Media, a conservative advocacy organisation. The explicit stated purpose was to make the students unemployable. Several of the featured students subsequently reported losing job offers.
Mahmoud Khalil — a Columbia University graduate student and prominent campus organiser — was detained by ICE in March 2025 in what the Trump administration explicitly described as action against his political expression. The Secretary of State, Marco Rubio, personally signed documents asserting that Khalil’s presence in the United States was contrary to US foreign policy interests on the basis of his political activities. The invocation of a foreign policy basis for immigration enforcement action against a legal permanent resident for domestic political speech represents a weaponisation of immigration machinery that mirrors, at the government level, exactly what this article is documenting at the corporate level.
Payment infrastructure has been the parallel track in North America. PayPal closed accounts associated with Palestinian relief organisations and independent journalism covering Gaza. GoFundMe removed campaigns. Venmo froze accounts. In several documented cases, journalists and researchers covering Gaza received payment account terminations with no stated reason and no effective appeal mechanism. The AML compliance infrastructure — built under the Bank Secrecy Act and post-9/11 counter-terrorism financing frameworks — creates a legal environment in which payment processors can terminate accounts associated with “reputational risk” with essentially no obligation to explain, justify, or restore.
Canada has followed the US pattern closely. The University of Toronto faced internal pressure over a faculty position offered to a legal scholar who had published on Palestinian rights. Operatives of the Centre for Israel and Jewish Affairs (CIJA) have been documented coordinating with major Canadian employers and HR departments.
EUROPE: Where The Language of Holocaust Memory Is Being Used to Silence Palestinian Solidarity
Europe presents a different and in some ways more chilling version of this story, because in several jurisdictions the suppression of pro-Palestinian speech has been achieved not through lobby pressure alone but through its direct encoding into law and institutional policy — with the consequence that automated employment screening doesn’t need to be hacked; it simply needs to apply the official definition.
Germany is the most extreme case, and it deserves to be understood clearly.
Following October 7, Germany entered what multiple civil liberties organisations have described as a political emergency for freedom of expression. The German government’s adoption of the International Holocaust Remembrance Alliance (IHRA) working definition of antisemitism — a contested definition that several of its own drafters have subsequently disowned — combined with German political culture’s particular sensitivity around Holocaust memory, produced a regulatory and institutional environment in which:
• Public cultural institutions cancelled events, rescinded fellowships, and withdrew invitations from Palestinian and Arab artists, writers, and intellectuals at a rate that multiple human rights organisations including Amnesty International and Human Rights Watch described as discriminatory
• Adania Shibli, a Palestinian novelist awarded the 2023 LiBeraturpreis, had her award ceremony at the Frankfurt Book Fair postponed indefinitely in October 2023, a cancellation in all but name, because her novel dealt with an Israeli massacre of Palestinians. The organisers cited the “political situation.” She had done nothing except write a book.
• Documenta 15, the major contemporary art exhibition in Kassel, became a flashpoint in 2022 when a mural by an Indonesian collective was deemed antisemitic — an episode that then generated a broader chilling effect on Palestinian artistic expression across German institutions
• Academics at German universities who signed open letters, including 600 scholars who signed a letter in May 2024 calling for a ceasefire, faced internal university reviews and donor pressure campaigns
• The concept of Berufsverbot — the professional prohibition — is not merely historical in Germany. It has contemporary form: multiple artists and academics have been told, in writing, by publicly funded German cultural institutions that their contracts or invitations are contingent on signing statements affirming Israel’s right to exist and distancing themselves from BDS
The HR dimension in Germany is this: when a country’s major cultural funding bodies, public broadcasters, and university systems have adopted an institutional position that equates certain political speech with antisemitism, the “adverse political belief” flag in commercial screening systems doesn’t need to be configured by a lobby group. It is effectively pre-configured by official institutional policy.
France has followed a similar trajectory. The IHRA definition has been adopted at the governmental level. BDS France has been the subject of legal proceedings — the French Senate voted to condemn BDS activism, and French courts initially (before being overturned by the Court of Cassation) held that BDS advocacy could constitute discrimination. The practical effect: individuals publicly associated with BDS activism in France face a documented risk of legal proceedings, which then generates criminal record or civil litigation history that feeds directly into background screening.
The Netherlands saw the Rotterdam municipal government withdraw funding from an arts organisation that hosted a Palestinian cultural event. Belgium’s federal government faced internal coalition pressure over statements made by ministers regarding Gaza. Denmark and Sweden — countries with significant Muslim minority populations and active Palestinian solidarity movements — have seen increased use of counter-terrorism adjacent legal frameworks to monitor protest activity.
The European Parliament adopted a resolution in 2023 that, while non-binding, called on member states to take action against “antisemitism” using the IHRA definition. The practical downstream effect of IHRA adoption across European institutions is the creation of a common definitional framework within which automated screening systems can operate. If your social media history contains advocacy that falls within the IHRA definition as applied by a given institution, and that institution uses a commercial screening vendor, the flag is legitimate, documented, and legally defensible. The lobby group doesn’t need to be in the room. They wrote the definition.
The UK sits between the US and Germany on this spectrum. The IHRA definition has been adopted by the UK government, most major political parties, and hundreds of universities. The Equality Act 2010 theoretically protects “philosophical belief” as a protected characteristic — a protection that has been argued (in the employment tribunal case of David Collier v Unite the Union, among others) to potentially cover pro-Palestinian political belief. However, the practical enforcement gap is vast: you cannot access this protection unless you know you were discriminated against, can prove it was on the basis of political belief, can identify the decision-maker, and can afford to bring an employment tribunal claim. Automated screening systems produce none of the evidence you would need for the first three of these requirements.
THE LEGAL ARCHITECTURE OF PERSECUTION: Right-to-Work Frameworks as Attack Surface
Understanding how this machinery interfaces with legally mandated employment screening systems is essential, because the mandated systems create verification chokepoints that are particularly vulnerable to political contamination.
Australia:
Australia’s right-to-work verification operates primarily through VEVO (Visa Entitlement Verification Online), operated by the Department of Home Affairs. The VEVO check itself is narrow. The surrounding infrastructure is not.
Working With Children Checks and NDIS Worker Screening involve not just criminal record disclosure but “relevant information” held by police services — a category that in practice encompasses intelligence holdings, association records, and in some states, records of cautions or interactions that did not result in charges. Protest attendance, if it resulted in any kind of police contact, may be disclosed under “relevant information” provisions. The threshold for disclosure is not conviction. It is police discretion.
ASIO security assessments — required for employment in defined national security adjacent roles — can incorporate protest activity and association with organisations of interest. The definitions of what triggers concern are partially classified. The practical effect is that Australians who have been active in pro-Palestinian organisations, particularly those that have been the subject of ASIO monitoring (and there is documented evidence that pro-Palestinian groups have been monitored), may face adverse outcomes in security clearance processes without knowing why, without being able to see the underlying intelligence, and without any effective appeal mechanism.
The character test under s501 of the Migration Act — applied to non-citizens — is highly discretionary and has been applied to cancel visas on political grounds. The 2022 cancellation of Novak Djokovic’s visa on public-interest grounds, under a parallel ministerial discretion in the same Act, demonstrated the breadth of that discretion; the same architecture could be applied to a visa holder whose political activities were deemed contrary to Australian values.
Fair Work Act protections are materially weaker than UK Equality Act equivalents. Australia has no equivalent to the protected characteristic of “philosophical belief.” Political belief protections in employment vary by state and territory and are inconsistently enforced. The Fair Work Commission has jurisdiction over adverse action claims, but the evidentiary burden on the claimant is significant.
What makes Australia specifically vulnerable is the opacity of the data-sharing relationships between Home Affairs, state police services, and the private sector vetting firms that conduct BS7858-equivalent screening for security, financial services, and infrastructure employers. The line between government-held political intelligence and private sector screening is poorly defined in Australian law and poorly understood by most Australians.
The United Kingdom:
The UK’s Disclosure and Barring Service is formally limited: criminal records for specified roles. The broader vetting ecosystem extends considerably further.
BS7858 is the British Standard for pre-employment screening in security, financial services, and infrastructure roles. It is a privately developed standard adopted by industry, not a statutory framework — which means its content is determined by industry bodies rather than by Parliament, and its “lifestyle” assessment and “character reference” components can accommodate political screening without any specific legislative authorisation.
Security Clearance (SC) and Developed Vetting (DV) processes — required for employment with UK government, defence contractors, and intelligence community adjacent roles — explicitly assess associations, travel, “potential for compromise,” and “loyalty.” These categories have historically been applied to capture political activism. The Personnel Security Standards against which these assessments are made have been updated since 2021 to include social media activity. What this means in practice: a DV assessor reviewing an applicant’s digital footprint who finds substantial pro-Palestinian posting, association with BDS organisations, or publication of material critical of UK government arms policy toward Israel has both the mandate and the discretion to characterise this as a loyalty or compromise risk.
The Prevent duty, established under the Counter-Terrorism and Security Act 2015, places positive obligations on employers in education, health, local government, and specified other sectors to refer individuals showing signs of “radicalisation” to the government’s counter-extremism programme. The statutory guidance and its practical implementation have been extensively documented — by the Open Rights Group, Cage, and academics including Arun Kundnani — as capturing political activism, particularly Muslim political activism around Palestine and foreign policy, within its operational scope. Universities have referred students to Prevent for pro-Palestinian activism. Teachers have been referred for expressing opinions about Gaza in class. The referral is logged. The log is held by the Home Office. The log is, in principle, disclosable as “relevant information” in enhanced criminal record checks.
The Online Safety Act 2023 creates new categories of regulated content and imposes due diligence obligations on platforms. Although the contested “legal but harmful” duties for adult users were dropped during the Act’s passage, its content moderation framework still creates an environment in which employers doing reputational due diligence can point to platform-level moderation decisions as evidence of an adverse finding — even where the underlying content was not unlawful.
The hostile environment legacy is structurally critical. Under Theresa May’s Home Office, routine commercial relationships — bank accounts, landlord tenancies, driving licences — were weaponised as immigration enforcement tools. The infrastructure for using employers as political compliance checkpoints was built, tested, and normalised. It remains in place. The immigration enforcement role of employers was expanded, not contracted, under the post-Brexit right-to-work framework.
THE INFOSEC AND INDEPENDENT JOURNALISM SPECIFIC THREAT: The People Documenting This Get Hit With It
There is a particular cruelty in the fact that the researchers and journalists most equipped to document this infrastructure are among its primary targets.
Citizen Lab at the University of Toronto has produced some of the most rigorous technical documentation of Israeli cyberweapon deployment against journalists, activists, and political figures. Its researchers — John Scott-Railton, Bill Marczak, and others — have been targeted by the tools they document. This is now a matter of legal record: NSO Group’s Pegasus spyware was used against journalists covering NSO Group. Mexican journalist Carmen Aristegui, whose newsroom was investigating the Mexican government’s use of Pegasus, was targeted with Pegasus. Saudi journalist Jamal Khashoggi’s associates were targeted with Pegasus before his murder. The feedback loop between subject and surveillance is not incidental. It is operational doctrine.
The professional implications extend beyond device compromise. Researchers working on Israeli surveillance technology, military supply chains, or dual-use cyber exports face:
• Device and communications compromise, which degrades source security and forces operational changes that reduce investigative capacity
• Coordinated smear campaigns in technology media, seeded through lobby-connected PR operations, that generate adverse media hits in background check systems
• Funding pressure on host institutions, particularly universities, from donors who conflate institutional affiliation with institutional endorsement
• Adverse media trails — once generated, permanent — that surface in any future employment screening
Lorenzo Franceschi-Bicchierai, Joseph Cox (now at 404 Media), and the broader Pegasus Project consortium faced coordinated reputational attacks concurrent with their reporting. These were not random. They were timed to reporting cycles and sourced to PR operations with documented connections to Israeli intelligence-adjacent commercial interests.
Forbidden Stories — the Paris-based consortium that coordinated the Pegasus Project reporting — found its staff and contributing journalists the subject of what multiple members subsequently described as systematic monitoring. Several contributing journalists in non-Western countries faced legal proceedings in their home jurisdictions concurrent with publication.
The Security Lab at Amnesty International — which provided the technical forensics for the Pegasus Project — faced institutional pressure through Amnesty’s donor relationships following publication. This is not documented through public disclosure; it was described by current and former Amnesty staff in background conversations that multiple journalists have referenced.
THE PAYMENT PROCESSING CHOKEPOINT: Where Independent Journalism Goes to Die
For independent journalism specifically, the most existential vulnerability is not employment screening — it is payment infrastructure. And this is where the suppression mechanism has been deployed most overtly and with the least accountability.
The business model of independent journalism — Substack, Patreon, direct subscription, donation — runs entirely through commercial payment processors. PayPal, Stripe, GoFundMe, Ko-fi, Square. These companies operate AML (Anti-Money Laundering) compliance functions under statutory obligations in every jurisdiction where they operate. Those compliance functions use automated transaction monitoring — keyword scanning, association analysis, behavioural pattern detection — to flag accounts for review. The review process is internal. The decision to terminate an account is final in practice, if not always in law. The appeals process, where one exists, is a customer service function, not a regulatory one.
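To make the mechanics concrete, here is a deliberately simplified sketch of the kind of keyword-and-pattern monitor described above. It is not the code any real processor runs — the keywords, weights, and threshold are all hypothetical — but it illustrates the structural problem: a filter configured to catch terrorism financing and a filter that catches Gaza journalism can be the same filter.

```python
# Illustrative sketch only. The keyword list, weights, and growth threshold are
# hypothetical; real AML transaction-monitoring systems are proprietary and far
# more complex, but the basic shape — keyword hits plus behavioural anomaly
# detection feeding a risk score — is as described above.

FLAG_KEYWORDS = {"gaza", "relief fund", "ceasefire"}  # hypothetical config

def score_account(payment_memos, monthly_growth_pct):
    """Return a naive risk score from memo keywords and payment-volume growth."""
    keyword_hits = sum(
        1 for memo in payment_memos
        if any(kw in memo.lower() for kw in FLAG_KEYWORDS)
    )
    score = keyword_hits * 10
    if monthly_growth_pct > 50:  # sudden revenue growth reads as "suspicious"
        score += 25
    return score

# A Gaza-journalism newsletter and an actual illicit fundraiser can produce
# identical scores: the monitor sees keywords and growth, not intent or legality.
memos = ["Subscription: Gaza dispatches", "Support ceasefire reporting"]
print(score_account(memos, 120))
```

The point of the sketch is that nothing in the scoring function encodes a political judgement. The politics enter entirely through the keyword configuration, which is exactly the layer that no regulator inspects and no customer ever sees.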
PayPal has closed accounts associated with Palestinian relief organisations and independent journalism about Gaza without providing specific reasons, citing only “our Acceptable Use Policy.” GoFundMe removed fundraising campaigns for Gaza relief that had raised substantial amounts, holding funds during investigation periods that lasted weeks. Venmo froze accounts. In documented cases, the connection between account termination and political content was direct and proximate — accounts active for years without issue, terminated within days of posting content that could be categorised as controversial under the loosest possible “Middle East conflict” label.
Stripe — which is the payment infrastructure underlying Substack — terminated merchant accounts associated with creators whose content touched on Gaza, in some cases without notice and in some cases with funds held for 90-day review periods that are commercially catastrophic for small independent operations.
This matters for Substack specifically. Substack’s editorial policy has been more protective of free expression than most platforms. But Substack’s payment infrastructure is Stripe. The editorial protection and the financial vulnerability are structurally independent, which means a publication can be fully compliant with Substack’s policies and simultaneously have its revenue stream terminated by a payment processor acting on AML keyword monitoring that was configured to flag “terrorism financing” and is catching “Gaza journalism” in the same net.
The legal framework for this is the Bank Secrecy Act in the US, the Proceeds of Crime Act 2002 in the UK, and the Anti-Money Laundering and Counter-Terrorism Financing Act 2006 in Australia. All of these create a legal obligation for payment processors to maintain AML compliance. All of them allow — and in some readings require — preemptive account termination where risk is identified. None of them require the processor to tell the customer why, to maintain an appeals process, or to make the risk determination methodology available to regulators or the public. The AML framework, built to combat actual financial crime, has become a commercially and legally protected mechanism for financial censorship.
THE ECOSYSTEM MODEL: No Single Crime, Just Coordinated Data Poisoning
This is the analysis that should stop you in your tracks, because it’s the part that makes this not just disturbing but genuinely difficult to challenge.
No single actor in this chain needs to do anything illegal. The system produces political persecution as an emergent property of commercially normal operations by commercially normal entities, each acting on commercially normal incentives, each operating within their legal mandate. Let’s run the sequence:
Step one: Canary Mission publishes a profile of a student who attended a campus protest and signed an open letter. The profile is public, indexed by Google, and contains accurate factual information about where the student goes to university and their political expression.
Step two: StopAntisemitism.org posts about a journalist who published a thread citing civilian casualty figures from Gaza. The post is public, indexed by Google, and characterises the journalist as promoting antisemitic narratives.
Step three: An adverse media scanning product used by a background check firm indexes both pieces of content as part of its routine media monitoring operation. The student and the journalist now have adverse media hits in the candidate profiles generated whenever their names are run through the background check system.
Step four: A social media screening vendor used by financial services firms runs the student’s and journalist’s social media handles. The posts that triggered Canary Mission and StopAntisemitism.org appearances are flagged under “political controversy” risk category. The vendor’s report notes: potential reputational risk / political controversy.
Step five: The student applies for a position at a financial services firm. The ATS processes their application. The background check returns an adverse media hit. The social media screen returns a political risk flag. The candidate is automatically deprioritised or excluded. No human being makes a discriminatory decision. The algorithm processed available data and produced a risk score.
Step six: The journalist’s Stripe account is flagged by automated transaction monitoring following an increase in payments associated with content about Gaza. Stripe’s compliance team reviews and terminates the account citing “unacceptable risk.” The journalist’s Substack revenue stops. Their ability to fund the journalism stops.
Step seven: The university employing an academic who signed a ceasefire letter receives a call from a major donor expressing “concern.” The university’s development office flags the concern to the provost’s office. The provost’s office flags it to the academic’s dean. No instruction is given. None needs to be. The academic’s next grant application is reviewed with particular care. Their contract renewal is discussed in terms of “institutional fit.”
Each person in this chain — the background check vendor, the ATS product manager, the social media screening algorithm, the Stripe compliance officer, the development office staffer, the dean — made a commercially or institutionally normal decision. None of them coordinated. None of them needed to. The coordination happened upstream, in the data environment, where the content that triggers all of these downstream decisions was produced by organisations whose explicit stated goal is employment destruction.
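The seven-step sequence above can be sketched as code, precisely because its defining feature is that no step requires coordination. In this hypothetical sketch, each function models an independent vendor making a commercially normal decision; the names, weights, and threshold are invented for illustration, not drawn from any real product.

```python
# Hedged sketch of the "no coordination needed" pipeline described above.
# Each function stands in for an independent commercial actor; all signal
# weights and the decision threshold are hypothetical.

def adverse_media_check(name, indexed_pages):
    # A background-check vendor indexing public web content — including
    # blacklist sites — as part of routine adverse media monitoring.
    return any(name in page for page in indexed_pages)

def social_screen(posts, risk_terms):
    # A social media screening vendor applying a "political controversy"
    # risk category via keyword matching.
    return any(term in post.lower() for post in posts for term in risk_terms)

def ats_decision(adverse_hit, political_flag):
    # The ATS combines vendor signals into a score; no human reviews the
    # underlying content before the candidate is deprioritised.
    risk = (40 if adverse_hit else 0) + (35 if political_flag else 0)
    return "deprioritise" if risk >= 60 else "proceed"

# Two flags, each defensible in isolation, combine into exclusion.
hit = adverse_media_check("J. Doe", ["...profile of J. Doe, student activist..."])
flag = social_screen(["Civilians in Gaza deserve protection"], ["gaza"])
print(ats_decision(hit, flag))
```

Note where the discrimination lives: not in any single function, but in the composition — and in the upstream decision about which pages get indexed and which terms count as “risk.”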
The legal concept you need here is disparate impact — known in UK and EU law as indirect discrimination: discrimination that occurs not through intent but through the systematic application of neutral-seeming processes that produce discriminatory outcomes for a protected group. It is a recognised legal theory in UK, Australian, US, and EU employment law. It has never been successfully applied to a case of this type. The primary reason is the evidentiary problem: you cannot bring a disparate impact claim without access to the data that produced the outcome, and automated screening systems are not required to produce that data.
WHAT INDEPENDENT INFOSEC JOURNALISM SHOULD BE COVERING (AND ISN’T)
To be clear about the investigative gaps that exist — the ones this piece is explicitly flagging for colleagues with the technical skills to pursue them:
Which background check firms use Canary Mission, StopAntisemitism.org, or similar databases as third-party data inputs, and under what commercial categorisation are those databases licensed? This is a procurement and contract question that should be accessible through corporate discovery, regulatory disclosure, or well-targeted FOI.
What are the specific “adverse media” and “political risk” keyword configurations used by major ATS platforms for financial services, defence contracting, and government adjacent employers? These configurations are determined at the client level but built using vendor-provided templates. Several former HR technology employees have described these templates in background; none has gone on record.
Looking at this in my own country: how does the Australian Security Intelligence Organisation’s (ASIO) “relevant information” category interact with state police protest monitoring databases, and what is the data-sharing arrangement between those databases and the private sector vetting firms that conduct security screening? This sits at the intersection of privacy law and national security exemption — a gap that has not been systematically litigated.
Has the ADL provided formal risk intelligence, in any form, to any commercial screening platform, background check vendor, or ATS provider? The ADL’s corporate communications advisory function is documented. The extent to which that function has produced data that enters the screening product chain is not.
What is the contractual relationship between Stripe’s AML compliance function and editorial content — specifically, is content categorisation a factor in transaction risk scoring, and if so who configures the content categories?
Under GDPR and the Australian Privacy Act, does a rejected candidate have the right to access information confirming that a third-party database entry contributed to their rejection, and what has been the practical enforcement record of data protection authorities in cases where this right has been claimed in hiring contexts? The answer is: in theory yes, in practice almost never. That gap is a regulatory failure that should be documentable.
THE CONCLUSION: The Persecution Machine Doesn’t Need A Conspiracy. It Needs A Business Development Meeting.
Here is what we are actually describing:
A network of lobby-aligned organisations, several with direct connections to Israeli government ministries, has built a public adverse information infrastructure — blacklist databases, employer-notification campaigns, adverse media trail production — whose explicit operational goal is the professional destruction of people who express political views about Palestine, Lebanon and wherever’s next. That infrastructure interfaces, through commercially normal data licensing and product integration, with the automated hiring technology used by the overwhelming majority of large employers in the English-speaking world and Europe. The legal frameworks mandating employment vetting in Australia and the UK — right-to-work verification, security screening, Prevent referral obligations — provide additional chokepoints where politically contaminated data can enter formally mandated processes with no transparency or appeal. Independent journalists and infosec researchers face a compounded threat: their payment infrastructure is as vulnerable as their employment prospects, and their professional practice makes them high-profile targets for the same surveillance technology they are attempting to document.
The people who built this did not do it secretly. They announced it. They funded it through charitable foundations. They accepted government contracts and ministerial briefings. They published the goal — “will not be able to find employment” — on a website with their logo.
The only thing that happened in secret was the connection to the HR software.
That connection doesn’t require a hack. It doesn’t require a conspiracy. It requires a database categorised as “risk intelligence,” a sales team with a credible pitch deck, and an HR software industry that built a suppression machine by accident, sold it as a compliance product, and has no particular incentive to ask what’s in the data feeds.
The question worth asking — the one that should animate the follow-up investigations this piece is flagging — is not whether this is happening. It is happening. The question is whether the regulatory frameworks of liberal democracies are capable of seeing it.
Based on current evidence: not yet. But they weren’t designed for this. Neither was the internet. Neither was recruitment software. Neither were payment processors.
None of them were designed to be used as weapons. And here we are.
If this piece made you think, or made you angry, or made you want to forward it to someone who needs to read it — do all three.
Truth Decay is independent. No ads. No sponsors. No editorial calls from people whose lawyers are already drafting a letter. If you want this kind of journalism to keep existing, the subscription button above is the most direct way to make that happen. $5/month. Less than a coffee that won’t get you fired for ordering it.
REFERENCES:
1. Canary Mission. “About.” Canary Mission. Accessed April 2026. https://canarymission.org/about
2. Vaknin-Gil, Sima. Address to the Institute for National Security Studies Annual Conference. Tel Aviv, 2017. Cited in: Entous, Adam. “The Enemy Within.” The New Yorker, January 27, 2025. https://www.newyorker.com/magazine/2025/01/27/the-enemy-within
3. Lis, Jonathan. “Israeli Ministry Secretly Behind Ad Campaign to ‘Export the Fight Against BDS’ to US Jewish Groups.” Haaretz, September 26, 2017. https://www.haaretz.com/israel-news/2017-09-26/ty-article/israeli-ministry-behind-pro-israel-propaganda-campaign-in-u-s/0000017f-db8c-d3a5-af7f-ffef5dfe0000
4. StopAntisemitism.org. Official website. https://stopantisemitism.org
5. Blumenthal, Max, and Asa Winstanley. “How Israel’s Government Paid to Send Thousands of Canary Mission Blacklist Victims.” The Electronic Intifada, October 3, 2018. https://electronicintifada.net/content/how-israels-government-paid-send-thousands-canary-mission-blacklist-victims/25757
6. Greenwald, Glenn. “The ADL’s Long History of Defaming Critics of Israel as Antisemites.” The Intercept, November 6, 2016. https://theintercept.com/2016/11/06/the-adl-and-civil-liberties/
7. Fama Technologies. “Third-Party Watchlist Screening.” Product documentation, 2024. https://fama.io/solutions/background-screening/
8. HireRight. “Adverse Media Screening.” Product overview, 2024. https://www.hireright.com/solutions/adverse-media
9. Social Intelligence Corp. “Social Media Screening.” https://www.socialintel.com
10. Ferretly. “Automated Social Media Screening.” https://ferretly.com
11. Gold, Matea, and Susan Svrluga. “University of Illinois Fires Professor Over Anti-Israel Tweets.” The Washington Post, September 12, 2014. https://www.washingtonpost.com/national/university-of-illinois-fires-professor-over-anti-israel-tweets/2014/09/12/
12. Saul, Stephanie. “Pro-Palestinian Students Face University Discipline and Suspension of Clubs.” The New York Times, November 10, 2023. https://www.nytimes.com/2023/11/10/us/pro-palestinian-protests-college-discipline.html
13. Winston & Strawn. Firm-wide communication regarding campus statement signatories. Reported by: Weissmann, Jordan. “Law Firm Rescinds Offer to Georgetown Student Over Pro-Palestinian Post.” Slate, October 16, 2023. https://slate.com/news-and-politics/2023/10/law-firm-winston-strawn-rescinds-offer-georgetown-student-pro-palestinian.html
14. Semple, Kirk. “Truck Displaying Photos and Names of Pro-Palestinian Students Drives Around Harvard.” The Guardian, October 25, 2023. https://www.theguardian.com/us-news/2023/oct/25/doxxing-trucks-pro-palestinian-students-harvard-columbia
15. Byman, Daniel, and Riley McCabe. “The Mahmoud Khalil Case and the Limits of First Amendment Protections for Foreign Nationals.” Lawfare, March 2025. https://www.lawfaremedia.org
16. PayPal. “Acceptable Use Policy.” https://www.paypal.com/us/legalhub/acceptableuse-full
17. Cook, Joanna. “PayPal Account Closures Hit Palestinian Charities.” Middle East Eye, November 2023. https://www.middleeasteye.net/news/paypal-account-closures-hit-palestinian-charities
18. Adania Shibli Award Ceremony Cancellation. Frankfurt Book Fair. Reported by: Flood, Alison. “Frankfurt Book Fair postpones award ceremony for Palestinian author amid Gaza conflict.” The Guardian, October 18, 2023. https://www.theguardian.com/books/2023/oct/18/frankfurt-book-fair-postpones-award-ceremony-for-palestinian-author-adania-shibli
19. Documenta 15 Controversy. Reported by: Noack, Rick. “Germany’s Art World Has a Long History of Tension Over Israel. Documenta Made It Explosive.” The Washington Post, July 27, 2022. https://www.washingtonpost.com/world/2022/07/27/documenta-germany-antisemitism-israel-indonesia/
20. Massad, Joseph. “Germany’s Antisemitism Law Is Being Used to Silence Pro-Palestinian Speech.” Al Jazeera, April 2024. https://www.aljazeera.com/opinions/2024/4/15/germanys-antisemitism-law-is-being-used-to-silence-pro-palestinian-speech
21. IHRA Working Definition of Antisemitism. International Holocaust Remembrance Alliance. https://www.holocaustremembrance.com/resources/working-definitions-charters/working-definition-antisemitism
22. Stern, Kenneth. “I Drafted the Definition of Antisemitism. Rightwing Jews Are Weaponising It.” The Guardian, December 13, 2019. https://www.theguardian.com/commentisfree/2019/dec/13/antisemitism-executive-order-trump-chilling-effect-free-speech
23. Scott-Railton, John, et al. “Pegasus: The New Global Weapon for Silencing Journalists.” Citizen Lab, University of Toronto, 2021. https://citizenlab.ca/2021/07/forensic-methodology-report-how-to-catch-nso-groups-pegasus/
24. Marczak, Bill, et al. “Hide and Seek: Tracking NSO Group’s Pegasus Spyware to Operations in 45 Countries.” Citizen Lab, September 18, 2018. https://citizenlab.ca/2018/09/hide-and-seek-tracking-nso-groups-pegasus-spyware-to-operations-in-45-countries/
25. Amnesty International Security Lab. “Forensic Methodology Report: How to Catch NSO Group’s Pegasus.” July 18, 2021. https://www.amnesty.org/en/latest/research/2021/07/forensic-methodology-report-how-to-catch-nso-groups-pegasus/
26. Kundnani, Arun. The Muslims are Coming: Islamophobia, Extremism, and the Domestic War on Terror. London: Verso, 2014.
27. Open Rights Group. “Prevent and the Surveillance of Muslim Communities.” 2022. https://www.openrightsgroup.org/publications/prevent-surveillance/
28. UK Home Office. Personnel Security Standards. Cabinet Office, 2021. https://www.gov.uk/government/publications/personnel-security-standards
29. Australian Home Affairs. VEVO: Visa Entitlement Verification Online. https://immi.homeaffairs.gov.au/visas/already-have-a-visa/check-visa-details-and-conditions/check-conditions-online
30. Australian Security Intelligence Organisation. ASIO Annual Report 2022–23. https://www.asio.gov.au/publications/annual-reports.html
31. Financial Crimes Enforcement Network (FinCEN). “Bank Secrecy Act.” US Department of the Treasury. https://www.fincen.gov/resources/statutes-regulations/bank-secrecy-act
32. UK Home Office. Counter-Terrorism and Security Act 2015: Prevent Duty Guidance. https://www.gov.uk/government/publications/prevent-duty-guidance
33. BDS France court rulings. Reported by: Willsher, Kim. “France’s highest court overturns ban on pro-Palestinian boycott campaign.” The Guardian, June 11, 2020. https://www.theguardian.com/world/2020/jun/11/france-highest-court-overturns-ban-pro-palestinian-boycott-campaign-bds
34. Centre for Israel and Jewish Affairs (CIJA). Official website. https://www.cija.ca
35. Stripe. “Prohibited Businesses.” Stripe Restricted Business list. https://stripe.com/restricted-businesses
36. Australian Anti-Money Laundering and Counter-Terrorism Financing Act 2006. https://www.austrac.gov.au/business/core-compliance/aml-ctf-programs
TD-2026-04-AlgoBlacklist-001
An investigative analysis of how pro-Israel lobby groups and their associated databases interface with automated hiring technology, social media screening platforms, payment processor AML frameworks, and legally mandated right-to-work vetting systems in Australia, the United Kingdom, North America, and Europe — to produce systematic employment and financial exclusion of critics of Israeli government policy without any single illegal act being committed.




