AI Anxiety: A Gen Z Founder Says Nobody Is Moving Fast Enough — Mental Health, Climate, and the Cost of Delay
I built my first AI prototype on the floor of a co-working space with a half-dead laptop, a pot of instant coffee, and a belief that technology could nudge the planet in a better direction. I was 23. I had been raised in a world shaped by screens, hustle-culture learn-to-code videos, and an unspoken social contract: if you move faster than the chaos, you get to influence it.
Years later, that bargain feels frayed. We are living inside a very different exchange: accelerated capabilities meet creaking institutions, and the velocity of innovation collides with the slowness of the human systems that must live with it. For many of us — especially my generation — that collision produces an anxiety unlike any we have felt before.
What I mean by ‘AI anxiety’
When I say ‘AI anxiety’ I don’t mean a catch-all for techno-pessimism. I mean the daily low-grade panic that arrives when a new model is released and everything you thought was stable about identity, trust, and a livable future suddenly feels negotiable. It shows up as students who cannot tell if their essay was written by a friend or a bot; as researchers who worry their work will be misused; as activists watching climate models become black boxes of promise and pollution; as founders whose teams burn out trying to ship features that both scale and care for the people who use them.
This anxiety is not an abstraction. It is measurable in DMs where teenagers ask if they should delete their social accounts because deepfakes are everywhere. It is measurable in hiring queues where engineers turn down roles citing ‘moral burnout.’ It is measurable in the quiet decision of climate programs to slow, not because the science is shaky, but because the tools they must rely on are designed without human limits in mind.
The dangerous myth: generational shortcuts
There is a seductive myth making the rounds in boardrooms and newsrooms: Gen Z has short-circuited decades of learning. We are digital natives who somehow know the rules of the algorithmic world by osmosis; we can navigate misinformation, maintain attention while multitasking, and hack our mental health with apps. This idea feeds lazy narratives about ‘shortcuts’ — that our generation is uniquely able to cope with, or benefit from, any new technology simply because we grew up alongside it.
That narrative is harmful in two ways. First, it absolves institutions of responsibility. If ‘they’ can handle it, the thinking goes, why should investors, regulators, and platforms move faster to mitigate harms? Second, it collapses nuance about vulnerability. Growing up online doesn’t inoculate you against sophisticated manipulation, nor does it replace the need for robust social supports. Being comfortable with interfaces is not the same as being equipped for existential risk.
We are not short-circuiters. We are inheritors of systemic decisions made without adequate foresight. The real shortcut sought by many is to treat human systems — education, healthcare, climate policy — as if they can be retrofitted around the newest model. That illusion is both arrogant and dangerous.
Why speed without care amplifies harm
Every technology that accelerates without parallel social infrastructure increases fragility. Consider three interlocking vectors where this plays out:
- Mental health and attention: Models optimized for engagement feed fast-moving narratives, virality of harm, and emotional contagion. The more tailored the content, the more likely someone in a vulnerable state will be funneled deeper into harmful loops. Rapid feature cycles amplify risk because there is no pause for rigorous human-centered testing.
- Trust and information integrity: Deepfakes and synthetic text make authenticity a scarce resource. Institutions that once anchored trust — universities, mainstream media, public health bodies — are slower to adapt their verification practices than the tools that erode those anchors. The result is a trust gap that generates anxiety and strategic paralysis.
- Climate and resource costs: Powerful models have a footprint. Training at scale consumes energy, water, and hardware, each with its own lifecycle cost. At the same time, AI is framed as a silver bullet for climate modeling, adaptation, and measurement. That framing often ignores trade-offs: compute-intensive models might improve short-term predictions, but they also add to emissions and distract from deployable, lower-tech solutions that would reduce carbon now.
Concrete patterns I’m seeing in the field
From the vantage point of running a startup and working across climate and mental health deployments, I keep seeing a few recurring patterns that convince me we are not moving fast — and smart — enough.
- Product-first, people-later rollouts: Features ship because they are technically feasible and promise growth metrics. The human friction they create is treated as a product bug to fix later, rather than a design constraint that should preclude launch.
- Promises outpacing accountability: Companies promise transparency, fairness, and climate benefits in market language, but measurement systems lag. ‘Explainability’ becomes a PR checkbox instead of a live operational requirement.
- Funding asymmetries: There is abundant capital for shiny, fast-scaling models and comparatively little for long-term human-centered infrastructure — crisis hotlines, community moderators, training for public servants, climate resilience in frontline communities.
- Skills gaps: The people designing systems are rarely the ones forced to live with their consequences. Those closest to the harm — mental health clinicians in resource-poor regions, climate practitioners in vulnerable geographies — are under-resourced and rarely given a seat at the design table.
A new set of urgencies
Urgency does not mean reckless deployment. It means moving with deliberate velocity: fast enough to prevent avoidable harm, and rigorous enough that speed doesn’t create new crises. Here are five urgencies we must accept if we want to head off catastrophe and lift the burden of anxiety.
- Build mental-health-first product metrics: If a feature increases engagement at the cost of increased anxiety or hopelessness, it should be treated as failing a safety check. We need operational metrics — not PR statements — that measure emotional harms, and product gating that blocks rollout until those harms are reduced (a minimal sketch of such a gate follows this list).
- Adopt carbon-aware AI practices: Track and report lifecycle emissions for model development and inference. Prioritize energy-efficient architectures, model distillation, on-device inference where possible, and deployment strategies that balance predictive power against environmental cost.
- Make synthetic media identifiable: Authentication, watermarking, and provenance metadata for synthetic media should be normalized. People deserve to know what is machine-made and why. This reduces anxiety by restoring basic signals of authenticity to public discourse.
- Fund frontline infrastructure: Redirect some capital from novelty-driven model growth to community-level capacity: mental health services, digital literacy programs, public interest tech, and climate adaptation projects that deploy proven low-tech solutions alongside AI augmentation.
- Design defaults for people, not engagement: Default settings should minimize harm. That can look like slower, opt-in recommendation updates for younger users, more prominent friction on content amplification, and privacy-preserving defaults that prevent covert data extraction.
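What would that kind of product gating look like in practice? Here is a minimal sketch in Python, assuming a team already runs controlled experiments that log self-reported distress and crisis escalations alongside engagement, and measures the energy used per release. Every name, threshold, and figure here is hypothetical, invented for illustration rather than drawn from any existing library or standard.

```python
from dataclasses import dataclass

# All names and thresholds below are illustrative placeholders.
# The point is structural: harm metrics gate a launch the same way
# crash rates or latency budgets already do.

@dataclass
class LaunchCandidate:
    name: str
    engagement_lift: float       # relative change vs. control, e.g. +0.08
    distress_delta: float        # change in self-reported distress vs. control
    escalation_delta: float      # change in escalations to human support
    training_kwh: float          # measured energy for training/fine-tuning
    grid_kgco2_per_kwh: float    # carbon intensity of the grid used

MAX_DISTRESS_DELTA = 0.0         # no measurable increase allowed
MAX_ESCALATION_DELTA = 0.0
CARBON_BUDGET_KG = 5_000.0       # per-release budget, set by policy

def release_gate(c: LaunchCandidate) -> tuple[bool, list[str]]:
    """Return (ship?, blockers). Engagement lift never overrides a harm signal."""
    blockers = []
    if c.distress_delta > MAX_DISTRESS_DELTA:
        blockers.append(f"distress up {c.distress_delta:+.1%} vs. control")
    if c.escalation_delta > MAX_ESCALATION_DELTA:
        blockers.append(f"crisis escalations up {c.escalation_delta:+.1%}")
    emissions_kg = c.training_kwh * c.grid_kgco2_per_kwh
    if emissions_kg > CARBON_BUDGET_KG:
        blockers.append(f"emissions {emissions_kg:,.0f} kg CO2e over budget")
    return (not blockers, blockers)
```

The design choice that matters is the return value: the gate reports blockers rather than a blended score, so a large engagement lift can never be averaged away against a measured increase in harm.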
What faster, smarter movement looks like
Imagine a different speed: not the breakneck pace of ‘ship now, fix later’, but a synchronized sprint where technology, policy, and care systems advance together. That means:
- When a new model is released, it comes with a public impact summary: energy consumption, likely misinformation vectors, and an accessible plan for mitigation (a sketch of what such a summary could contain follows this list).
- Funding rounds include commitments to build safety infrastructure, not just product-market fit. Investors ask about mental-health metrics alongside traction.
- Climate deployments run on compute budgets tied to impact: models that demonstrably reduce emissions get prioritized; models that raise emissions without clear net benefit get shelved or rearchitected.
- Education systems teach not just how to use tools, but how to live with them: critical thinking, verification skills, and emotional resilience become core literacies.
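The first item above is the easiest to make concrete. Here is a sketch of what a machine-readable impact summary could contain, again in Python; the fields and figures are placeholders I am assuming for illustration, and a real disclosure would follow whatever reporting standard regulators or industry bodies eventually settle on.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ImpactSummary:
    model_name: str
    training_energy_kwh: float
    estimated_co2e_kg: float                 # energy times an assumed grid intensity
    water_use_litres: float | None           # None where the provider won't disclose
    known_misuse_vectors: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

summary = ImpactSummary(
    model_name="example-model-v1",           # hypothetical model
    training_energy_kwh=1_200_000.0,
    estimated_co2e_kg=1_200_000.0 * 0.4,     # 0.4 kg CO2e/kWh is an assumed figure
    water_use_litres=None,
    known_misuse_vectors=["synthetic voice impersonation", "automated astroturfing"],
    mitigations=["provenance metadata on outputs", "rate limits on bulk generation"],
)

# Emit the summary as JSON so it can be published alongside the release notes.
print(json.dumps(asdict(summary), indent=2))
```

Publishing this as structured data rather than prose is the point: journalists and auditors can diff summaries across releases and notice when a provider quietly stops disclosing a field.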
A note to the AI news community
You hold a unique lever. You translate complexity into public sense-making, frame what counts as urgent, and set the agenda for what gets corrected. Reporting that focuses only on fireworks — benchmark numbers, scary demos, or patent races — misses the lived consequences. Coverage should center the human stories without sacrificing technical nuance: how a model’s latency affected a mental health chat service, or how a seemingly small architecture choice ballooned into outsized energy costs.
Ask for accountability that matters: show me the metrics, the trade-offs, the decisions where speed was chosen over care — and the human cost of that choice. Highlight solutions that are neither naïve nor technocratic: projects that combine community knowledge with technical ingenuity and move with urgency because they must.
Final imperative: move fast, but not without a heartbeat
There is a paradox at the heart of our moment: the faster technology can change lives, the more precious become the human systems that make those lives sustainable. We stand at a threshold. If we double down on speed without rethinking our responsibilities, anxiety will calcify into wider social breakdowns: institutions that cannot be trusted, ecosystems that cannot sustain the compute we demand, and a youth generation that inherits technologies that degrade their mental and planetary health.
On the other hand, if we move fast with moral clarity — building tools that explicitly measure psychological and environmental cost, resourcing communities that bear the brunt of technology’s harms, and designing defaults that prioritize human flourishing — we can unlock a future where speed is a privilege used to preserve the things that matter.
I am a Gen Z founder, and I believe urgency should fuel care, not panic. We are not immune to harm just because we grew up on screens. We are the ones who will live longest with these decisions. So to the platforms, builders, funders, and journalists reading this: nobody is moving fast enough. Move with the speed that responsibility demands.
We started this moment by asking technology to help us solve the biggest crises of our time. We can still get it right — but only if speed is harnessed to stewardship.

