The Score You Cannot See
A new lawsuit exposes the AI system quietly deciding whether your job application ever reaches a human.
There is a number attached to your name. You did not consent to its creation. You cannot request a copy. You cannot correct it if it is wrong. And it may be following you from company to company, quietly deciding whether a human recruiter ever reads your application at all.
This is not a conspiracy theory. This is the architecture of the contemporary labor market.
The lawsuit filed in January 2026 against Eightfold AI — Kistler et al. v. Eightfold AI Inc. — has made that architecture newly visible. The plaintiffs allege that the company functions as a Consumer Reporting Agency under the Fair Credit Reporting Act, that its 0-to-5 Match Scores constitute “reports” that should be governed by the same transparency rules as a credit score, and that candidates have been systematically filtered out of employment consideration by a black box they were never told existed. Whether the courts agree is a question that will take years to answer. What is not in question is the thing that prompted the lawsuit: Eightfold AI has built a system that assigns a mathematical reputation to job seekers, draws that reputation from over 1.6 billion career profiles, and provides it to employers before any human has looked a candidate in the eye.
I want to be precise about what that means. Because the danger of writing about algorithmic hiring is that it invites a certain kind of hand-wringing — vague discomfort at the involvement of machines, reflexive suspicion of anything technical. That is not the argument here. The argument is narrower and more verifiable: a specific company built a specific system that produces a specific score, and the people that score affects have no legal right to see it, dispute it, or know it exists.
What the Platform Actually Does
Eightfold AI calls itself a “system of intelligence,” not a hiring tool. It is a talent intelligence platform that ingests data from existing HR systems like Workday, Oracle, and SAP, layers it on top of a proprietary Global Talent Network of over 1.6 billion profiles and 1.5 billion career trajectories, and produces ranked candidates with scores from 0 to 5 in increments of 0.5. The platform uses deep learning and recurrent neural networks to model career sequences as a series of events — your past titles, how long you have exercised each skill, the companies you have worked for, how long you stayed. All of it becomes an input to a model whose output is a prediction: this is the candidate’s likely next title, and here is how closely it matches the role the employer needs to fill.
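To make the data model concrete, here is a minimal sketch of what a career-as-event-sequence representation might look like. It is purely illustrative: the field names and structure are my assumptions, not Eightfold’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class CareerEvent:
    """One step in a career history. Field names are hypothetical."""
    title: str
    company: str
    months: int            # tenure duration in this role
    skills: list[str]      # skills exercised in this role

# A career becomes an ordered sequence of events, the kind of input
# a sequence model such as an RNN consumes to predict a likely next title.
career = [
    CareerEvent("Registered Nurse", "General Hospital", 48,
                ["patient care", "EHR documentation"]),
    CareerEvent("Clinical Research Coordinator", "BioPharma Co", 30,
                ["protocol compliance", "data collection"]),
]
```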
The semantic matching at the center of this process is genuinely sophisticated. Traditional applicant tracking systems operated on keyword logic — if a resume did not contain the phrase “project management,” it would not surface for a project management role. Eightfold uses deep semantic embeddings that understand contextual equivalence, mapping candidates and job descriptions into a high-dimensional vector space and measuring the distance between them. A candidate who wrote “led cross-functional initiatives” and a candidate who wrote “project management” are, in this architecture, potentially equivalent.
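As a concrete illustration of what “distance in vector space” means, here is a toy version of the matching computation. The four-dimensional vectors are invented; a production model would produce embeddings with hundreds of dimensions from a trained language model.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Standard similarity measure between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings for two phrasings of the same underlying skill.
resume_phrase = np.array([0.8, 0.1, 0.6, 0.2])  # "led cross-functional initiatives"
job_phrase    = np.array([0.7, 0.2, 0.5, 0.3])  # "project management"

print(cosine_similarity(resume_phrase, job_phrase))  # near 1.0: treated as equivalent
```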
The platform’s marketing correctly identifies this as a genuine improvement over the keyword matching that made so many capable people invisible to automated filters. Consider a nurse applying to a clinical research role at a biotech company — a traditional ATS might miss her entirely for lacking the phrase “clinical trials.” Eightfold’s model, trained on the career trajectories of people who made that exact transition, would recognize the fit. That is a real capability, and it deserves acknowledgment.
But the same mechanism that finds hidden gems also generates something more troubling: a form of algorithmic determinism based not on who you are but on who you statistically resemble. The “Company Similarity” variable clusters employers in vector space — candidates from companies that “look and feel” like the target employer are scored higher than those who come from organizations outside the cluster. The “Hireability Inference” draws on patterns of historical hiring outcomes, which means if a profile type has been repeatedly rejected across the network, the model incorporates those rejections into its understanding of what a successful candidate looks like.
You are being evaluated not against the job description, but against the aggregate behavior of a billion digital twins.
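The “Company Similarity” mechanic can be sketched the same way. Everything below is an assumption about how such a variable could work in principle, not a description of Eightfold’s implementation:

```python
import numpy as np

# Hypothetical employer embeddings. In a real system these would be
# learned from hiring and career-transition data across the network.
companies = {
    "TargetCo":  np.array([0.9, 0.1, 0.4]),
    "PeerCo":    np.array([0.85, 0.15, 0.45]),  # "looks and feels" like TargetCo
    "OutsideCo": np.array([0.1, 0.9, 0.2]),     # outside the cluster
}

def company_similarity(candidate_employer: str, target: str = "TargetCo") -> float:
    a, b = companies[candidate_employer], companies[target]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(company_similarity("PeerCo"))     # close to 1.0: score lifted
print(company_similarity("OutsideCo")) # much lower: score depressed
```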
The Data That Builds the Score
The lawsuit’s most striking allegations concern not the scoring but the sourcing. The plaintiffs allege that Eightfold scrapes personal data from social media profiles, location data, and internet activity without candidate knowledge or consent. Eightfold disputes the “lurking” characterization, but the platform’s own marketing explicitly references using “billions of data points” from public sources including career sites and social media to enrich profiles. The enrichment process introduces its own risks. Analysts have flagged that people with the same name — or junior and senior versions of the same name — can be confused by the technology, leading to the aggregation of what researchers call “ghost data”: information about someone else, attached to your profile, quietly depressing your score.
This is what makes the comparison to a credit report so apt. A credit report is also compiled from data you did not directly submit. It also produces a number that determines whether institutions offer you opportunity or withhold it. And crucially: the Fair Credit Reporting Act exists precisely because Congress recognized, decades ago, that people have a right to see and dispute information that governs their economic lives. The plaintiffs in Kistler are arguing that the logic of that recognition applies here — that a score derived from billions of data points and used to determine employment eligibility is, in its functional architecture, a consumer report.
Whether the legal theory holds is genuinely uncertain. The comparison requires the court to accept that Eightfold is a third-party reporting agency rather than a software vendor whose output is interpreted by employers. Eightfold will argue that the Match Score is a tool, not a report — that employers retain final judgment and the platform is merely helping them sort. This is a meaningful distinction. But what is already clear, from the audit data Eightfold itself has released, is that the scores are not neutral.
What the Bias Audit Shows — and Doesn’t
The audit released in March 2025 excluded over 60 million applications where race or gender was unknown. Sixty million applications — roughly one in four of those reviewed — before any analysis of fairness had been run.
To understand why that matters, consider what the audit actually measured. Eightfold applied the “Four-Fifths Rule,” a standard that asks whether any group scores at a rate below 80% of the highest-scoring reference group. The platform received a passing rating. The groups that were measured all cleared the threshold.
But the results, read carefully, tell a more complicated story. Hispanic or Latino candidates scored at a rate of 0.916 relative to White candidates — above the 0.8 legal floor, but lower. Female candidates scored at 0.960 relative to male candidates. These are not disqualifying gaps under current law. They are also not evidence of fairness. They are evidence of a floor being cleared.
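For readers who want the mechanics: the Four-Fifths Rule reduces to a simple ratio check. The absolute selection rates below are invented to reproduce the 0.916 impact ratio the audit reported; only the ratio itself comes from the audit.

```python
def impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """Each group's rate relative to the highest-rate group.
    A ratio below 0.8 fails the Four-Fifths Rule."""
    top = max(selection_rates.values())
    return {group: round(rate / top, 3) for group, rate in selection_rates.items()}

# Hypothetical rates chosen so the ratio matches the audit's reported figure.
print(impact_ratios({"White": 0.500, "Hispanic or Latino": 0.458}))
# {'White': 1.0, 'Hispanic or Latino': 0.916}  -- above the 0.8 floor, not parity
```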
The 60-million-application exclusion is not a methodological footnote. It is the majority of the candidates for whom the system’s impact is most opaque, and for whom any fairness finding is, by definition, incomplete. The audit cannot tell us whether a Latina software engineer has the same probability of being seen by a human recruiter as a white male engineer with a comparable background — because more than 60 million people who might help answer that question were not included in the analysis.
There is a difference between passing an audit and being fair. Passing an audit means staying above the regulatory floor. The audit Eightfold published tells us where that floor is. It does not tell us what is happening above it.
The Cross-Company Problem
When you are rejected by one Eightfold-powered employer, that rejection may follow you to the next.
The Global Talent Network is explicitly described as self-learning — the models are “continuously updated” based on historical hiring outcomes. If a candidate profile type is associated with repeated rejection across the network of Eightfold-powered enterprises, those rejections become training data. The model recalibrates. The candidate’s “hireability” index shifts.
This creates a feedback loop that operates invisibly across company lines. You apply to Microsoft. You are scored a 2.5 and not advanced. You apply to PayPal, which also uses Eightfold. The model has learned, from the pattern of rejections associated with your profile, something about your likely fit. Your score at PayPal reflects not only your background but your history of rejection at similar organizations. You are not told any of this. The companies using Eightfold may not even know it is happening.
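A stripped-down sketch of that loop, with an invented update rule (nothing here reflects Eightfold’s actual training procedure), looks like this:

```python
# Shared "hireability" prior for a profile type, visible to every
# employer on a hypothetical network.
hireability = {"profile_type_A": 0.70}

LEARNING_RATE = 0.05  # invented; real systems retrain on batches of outcomes

def record_outcome(profile_type: str, hired: bool) -> None:
    """A rejection anywhere in the network nudges the shared prior down."""
    target = 1.0 if hired else 0.0
    hireability[profile_type] += LEARNING_RATE * (target - hireability[profile_type])

record_outcome("profile_type_A", hired=False)   # rejected at one employer...
print(round(hireability["profile_type_A"], 3))  # 0.665: the next employer sees less
```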
The platform’s Talent Tracking tools, designed for internal mobility, create additional surface area for this contagion. A negative signal from a contract role, a rejection for an internal promotion, a performance concern logged in a system that feeds Eightfold — all of it can flow into a unified view that shapes how you are evaluated the next time you apply anywhere in the network.
The Reckoning
There is a version of this story that ends with the lawsuit settling, the platform paying a fine, some transparency requirement being imposed, and the fundamental architecture continuing unchanged. That is the most likely version. The hiring industry has absorbed similar legal pressure before — the Workday litigation, the EEOC guidance, the expanding liability for AI vendors — and adapted without fundamentally reconsidering what it has built.
The question that version cannot answer is: what do we owe people whose employment prospects are governed by a score they cannot see, derived from data they did not submit, generated by a model that learns from their failures? The Fair Credit Reporting Act was created because that question, applied to financial data, had an obvious answer: they have a right to see it, dispute it, and know it exists. Eightfold AI’s legal team will argue that a Match Score is different from a credit score, that a talent intelligence platform is different from a consumer reporting agency, that the employer retains final judgment and the algorithm is merely a tool.
These arguments may succeed. They will not resolve the underlying moral situation: that somewhere between the moment you submit an application and the moment a recruiter opens your file, a number has been assigned to your name. The number was derived from the careers of a billion other people you have never met, from companies you may never have worked for, from rejections you were never told happened. You did not consent to its existence. You cannot request a copy.
The people who built it will tell you it is making the hiring process fairer.
That is what they will say. The data shows something more complicated. The lawsuits are beginning to agree.
What You Can Do While the Law Catches Up
The litigation will take years to resolve. In the meantime, the platform operates. Knowing how it works is the only practical defense available.
Recency is weighted heavily. The system checks whether skills have been used in your most recent role. Skills listed in a standalone “Skills” section at the bottom of a resume receive substantially lower weight than skills embedded in the description of a current position. If the role you are applying for requires Python, and Python does not appear in your most recent job description, the recency variable works against you regardless of your actual proficiency.
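A toy model of recency weighting shows why placement matters more than proficiency. The 0.5 decay factor is my assumption; the actual weighting is proprietary.

```python
def recency_weighted_skill(roles: list[set[str]], skill: str) -> float:
    """Weight a skill by the most recent role it appears in.
    Roles are ordered most recent first; the 0.5 decay is an assumption."""
    for position, role_skills in enumerate(roles):
        if skill in role_skills:
            return 0.5 ** position  # halves with each older role
    return 0.0

roles = [
    {"SQL", "stakeholder management"},  # current role: no Python mentioned
    {"Python", "ETL pipelines"},        # previous role
]
print(recency_weighted_skill(roles, "Python"))  # 0.5: penalized despite proficiency
```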
The trajectory model predicts your next title. A resume should present a career sequence that logically leads toward the role being sought. Career changers are not necessarily penalized, but they must do more work in recent descriptions to redirect the model’s prediction. An engineer applying for a product management role needs the product-adjacent aspects of her recent work foregrounded — explicitly, in the job descriptions, not summarized in a personal statement.
Company similarity rewards alignment. Research the LinkedIn profiles of successful recent hires at the target company. Note the specific language they use to describe their experience — then align your framing with those descriptions. This is not misrepresentation. It is ensuring the model reads your background correctly rather than routing it to the wrong cluster.
Profile consistency across platforms matters. Platforms analyze mismatches between resumes and LinkedIn profiles as potential “integrity risks.” Small inconsistencies in dates, titles, or descriptions can trigger score suppression before a recruiter sees anything. Ensure everything is synchronized.
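A consistency check of this kind is trivial to implement, which is exactly why small mismatches are dangerous. A sketch, with hypothetical field names:

```python
def consistency_flags(resume: dict[str, str], linkedin: dict[str, str]) -> list[str]:
    """Cross-check the fields a screening system can compare automatically."""
    return [
        f"mismatch on {field!r}: {resume.get(field)!r} vs {linkedin.get(field)!r}"
        for field in ("title", "company", "start_date", "end_date")
        if resume.get(field) != linkedin.get(field)
    ]

resume   = {"title": "Senior Engineer",   "company": "Acme", "start_date": "2021-03"}
linkedin = {"title": "Software Engineer", "company": "Acme", "start_date": "2021-01"}
print(consistency_flags(resume, linkedin))
# Two small mismatches -- the kind that can suppress a score before
# a recruiter sees anything.
```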
None of this is a guarantee. Some of it is uncomfortable. It requires understanding yourself not as a person but as a data point in a probability distribution — and adjusting how you present that data point to maximize the chance of a human ever seeing it.
That is the labor market HR technology built. Candidates are navigating it largely alone.
If you have applied to a company using Eightfold AI, I’d be curious what you noticed. The comments are open.
Tags: algorithmic hiring, AI employment discrimination, Eightfold AI, FCRA labor rights, future of work