Gen Z and the Bias Question: Rethinking Hiring for AI Identity Work
When the CEO of an identity-technology company says he favors hiring Gen Z because they are ‘less biased,’ the comment lands like a stone in a still pond: concentric circles of conversation radiate outward, each wave carrying hopes, fears and hard questions about hiring, algorithmic fairness and company culture. The remark from Incode CEO Ricardo Amper is provocative not because it is new (the tech industry has long sought signals of ‘fresh perspective’) but because it tangles together ideas about age, neutrality and how we build systems that touch identity.
Context: identity work, bias, and a short phrase with oversized consequences
Identity systems—face and document verification, biometric onboarding, fraud detection, anti-spoofing—sit at a fraught intersection of technology, law and human dignity. Decisions about how to collect, label and shape data in these systems ripple into who can access services, who is wrongly blocked, and who sees their face misrecognized. In that space, the composition of teams matters. It’s tempting, even seductive, to look for a single heuristic—age, background, education—that promises lower bias. Saying that younger hires are ‘less biased’ is shorthand for a belief that certain affordances come with youth: less entrenched patterns of thinking, greater facility with new tech, and fresh cultural touchstones.
Parsing ‘less biased’: possible meanings and misdirections
There are multiple ways to interpret the claim. One is literal: young people have had different socialization and exposure to diverse content online, which might affect how they label facial attributes or judge ambiguous edge cases. Another is psychological: without long careers in legacy systems, younger hires may not yet carry the habituated heuristics that lead to the same oversights being repeated. A third is practical: Gen Z brings fluency with modern tooling, rapid iteration practices and a willingness to question established defaults, all factors that can reduce certain procedural biases.
Yet conflating age with moral or cognitive purity is dangerous. Bias is systemic; it emerges from datasets, organizational incentives, product requirements and regulatory contexts, not just individual minds. A hire of any age can carry unconscious patterns that replicate societal inequities. To treat Gen Z as a silver bullet risks substituting one simplification for another.
Why the idea resonates
- Agility and learning curves: Newer entrants to the workforce often adapt quickly to emerging tools, enabling faster iteration cycles that surface failures earlier.
- Different cultural frames: Growing up in a digital-first world can shape perspectives on privacy, identity and representation in ways that contrast with older cohorts.
- Willingness to question: Younger staff may feel less bound by precedent and more inclined to call out problematic design choices.
The critiques and the real risks
The backlash that follows such statements is not just sensitivity; it highlights real organizational and ethical risks.
- Tokenism and age monocultures: Over-indexing on youth can create homogeneity of another kind. A team all from the same life stage misses the cognitive diversity that cross-generational collaboration brings—experience with edge cases, operational knowledge, and institutional memory matter.
- Deflecting responsibility: Saying younger people are ‘less biased’ can be used to avoid systematic remediation, shifting blame from design choices and data practices onto personnel selection.
- Equity and legality: Hiring strategies that favor candidates on the basis of a protected characteristic such as age can create legal exposure and undermine commitments to fair hiring.
- Retention, growth, and culture: A workforce treated primarily as a corrective tool rather than a valued cohort will not thrive. The result is poor retention, morale problems, and superficial gains in product perspectives without durable institutional change.
What meaningful action looks like
If the goal is to reduce bias in AI systems that manage identity, the community must move from slogans to structured practice. Several practical strands deserve attention.
1) Hiring as one lever among many
Hiring can diversify perspectives, but it must operate alongside deliberate data practices, model-testing regimes and governance. Recruitment should pursue cognitive diversity—different training, life experience and problem-solving styles—not just demographic markers. Transparent job descriptions, structured interviews and work-sample assessments align hires to the problems teams actually solve.
2) Measurement and accountability
Bias reduction requires measurement. That means designing evaluation suites that test performance across axes of identity, developing continuous monitoring for production drift, and tying product metrics to organizational incentives. Teams must be rewarded for reducing false positives and false negatives across subpopulations, not merely for improving aggregate accuracy.
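As a concrete illustration, here is a minimal sketch of the kind of per-subgroup evaluation this implies: it computes false-positive and false-negative rates for each demographic slice and flags slices that fall too far behind the best-performing one. The column names, decision encoding and disparity tolerance are hypothetical choices for illustration, not a prescribed standard.

```python
# Minimal per-subgroup evaluation sketch. Assumes a labeled validation set with
# hypothetical fields: "group" (demographic slice), "label" (ground truth, 1 = genuine),
# and "pred" (model decision, 1 = accepted). Names and thresholds are illustrative.
from collections import defaultdict

def subgroup_error_rates(records):
    """Compute false-positive and false-negative rates per subgroup."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for r in records:
        g = r["group"]
        if r["label"] == 1:
            counts[g]["pos"] += 1
            if r["pred"] == 0:
                counts[g]["fn"] += 1  # genuine user wrongly rejected
        else:
            counts[g]["neg"] += 1
            if r["pred"] == 1:
                counts[g]["fp"] += 1  # impostor wrongly accepted
    return {
        g: {
            "fpr": c["fp"] / c["neg"] if c["neg"] else None,
            "fnr": c["fn"] / c["pos"] if c["pos"] else None,
        }
        for g, c in counts.items()
    }

def flag_disparities(rates, tolerance=0.02):
    """Return subgroups whose FNR exceeds the best subgroup by more than `tolerance`."""
    fnrs = {g: r["fnr"] for g, r in rates.items() if r["fnr"] is not None}
    if not fnrs:
        return []
    best = min(fnrs.values())
    return [g for g, v in fnrs.items() if v - best > tolerance]
```

A report built this way makes "reward teams for closing subgroup gaps" operational: the flagged list, not aggregate accuracy, becomes the number people are accountable for.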
3) Cross-generational teams and mentorship
Rather than choose one age cohort as a corrective force, stitch together teams that mix the rapid experimentation of early-career hires with the domain knowledge of more experienced practitioners. Mentorship should flow both ways: veterans share institutional context and compliance know-how, while newer hires share the latest toolchains and user expectations.
4) Design processes that surface values early
Bias is baked in during problem framing. Inclusive design rituals—stakeholder interviews, adversarial testing, and community review—should occur before data collection and model design. This reframes hiring as one input among many that shape outcomes.
5) Invest in psychological safety and upward mobility
Creating environments where anyone can call out bias without fear of retribution is essential. That includes avenues for anonymous reporting, regular ethical audits, and clear career paths so those who join as corrective hires can grow into influence rather than leave after burnout.
Culture as the long game
Company culture is the lens through which hiring decisions manifest in code, datasets and feature roadmaps. A culture that prizes curiosity, humility, and transparent failure will leverage the strengths of younger hires without weaponizing them as scapegoats. Culture design is iterative: it requires rituals, norms and the willingness to be judged by outcomes.
Technology levers that amplify team impact
Tools and practices can help teams of any composition build fairer systems:
- Data provenance and labeling standards: Clear protocols for how training labels are collected and audited reduce downstream bias.
- Closed-loop feedback: Mechanisms for real-world errors to be rapidly incorporated back into models help correct blind spots.
- Explainability and interpretability: Transparent model outputs make it easier to understand why a system fails for certain groups.
- Simulated adversarial testing: Stress-testing systems with controlled synthetic data exposes brittleness before deployment; a brief sketch follows this list.
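To make the last lever concrete, here is a small stress-test harness. It assumes a hypothetical verify(image) -> bool interface and a dictionary of team-supplied perturbation functions (blur, compression, low light and so on); none of these names come from a specific product, and the harness is a sketch rather than a complete methodology.

```python
# Illustrative stress-test harness, not a production tool. `verify` is a hypothetical
# callable returning True when a sample is accepted; `perturbations` maps a name to a
# function that returns a degraded copy of the sample.
def stress_test(verify, samples, perturbations, trials=3):
    """Report how often a previously accepted sample is rejected after perturbation."""
    flips = {name: 0 for name in perturbations}
    total = 0
    for image in samples:
        if not verify(image):        # only test samples the model already accepts
            continue
        total += 1
        for name, perturb in perturbations.items():
            for _ in range(trials):
                if not verify(perturb(image)):
                    flips[name] += 1
                    break            # count at most one flip per sample and perturbation
    return {name: (count / total if total else None) for name, count in flips.items()}

# Usage sketch: perturbations = {"blur": add_blur, "dark": reduce_brightness},
# where add_blur and reduce_brightness are hypothetical helpers defined by the team.
```

Run per subgroup, the same harness shows whether degradation hits some populations harder than others, which is where brittleness turns into unfairness.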
Policy and industry norms
Beyond individual companies, industry norms and regulatory guardrails shape incentives. Standards for evaluation, shared benchmarks for performance across demographic groups, and clearer rules about recourse for misrecognition create an environment where hiring practices matter, but do not carry the burden of a system’s fairness alone.
A constructive path forward
Ricardo Amper’s comment does us a favor: it forces a conversation. But the right response is not to weaponize age as a proxy for ethics or cognition; it is to enlarge our toolkit. Hire across experience levels, design assessment processes that prioritize fairness, invest in culture and measurement, and ensure legal and ethical compliance. Use youthful perspectives as a catalyst—not as the only strategy.
Closing: building identity systems that reflect humanity
People will read the quote and take away different lessons: hire youth, protect tenure, rethink who gets a voice. The more constructive takeaway for the AI community is to treat hiring statements as an invitation to interrogate the whole editing room where identity systems are produced. Bias does not belong to any single cohort; it is material that teams shape through decisions. If we want identity systems that honor diversity and dignity, we must construct teams and processes that sustain that intention through recruitment, training, governance and technology.
There is a rare optimism in believing that new perspectives can move us closer to fairness. Channel that optimism into durable practice: build measurement, reward humility, mix experience with fresh eyes, and hold systems accountable. That is how we create AI that serves, rather than presumes to define, identity.

