Reinventing Value: Why AI Can’t Replace All Our Real Jobs

[The title should’ve been “Can’t or Shouldn’t” — but that was too long, so there.]

Noah Berkson posted on LinkedIn

If AI takes your job, it wasn’t your job.
It was a task you were temporarily doing.
Your real job is staying valuable.
Your real job is reinvention.

I wouldn’t call it a meme, but variants of it have been around since 2010 or so [1]. It is provocative and it is helpful: it invites you to investigate its claims, what you implicitly have to believe for them to be true, and what the unintended consequences of applying AI to everything are. It talks about three things:

  • tasks vs jobs,
  • values, and
  • reinvention.

Here are five counter-arguments where AI cannot or should not take my job, or where reinvention isn’t a useful concept. For each of them, I quickly jotted down my thoughts on actionable, high-potential business ideas for venture-backable DeepTech dual-use innovation opportunities:

  1. High-Context Human Care
  2. Jurisdictional Entrapment in Non-Competitive Labor Systems
  3. High-Stakes, Mission-Critical Roles That Should Not Be Automated
  4. Tasks Embedded in Cultural, Narrative, or Communal Cohesion
  5. Jobs as Identity Anchors in Fragile Social Contracts — example: Veterans!

WARNING: What often happens is that we optimize jobs “to give employees more time to do the important and most valuable things” … and then fire those people anyway to capture the additional profit potential. It’s tricky: once any of your competitors behaves that way and can offer the same care, product, or service at a lower cost basis, it becomes incredibly hard not to do the same. Unless you can continue to command a higher price point, usually through a strong brand, i.e., a promise of experiences.

1. Do We Value High-Context Human Care?

Let’s say (1) policy incentives reward cost reduction over quality of care; (2) regulatory frameworks get rewritten by cost-efficiency lobbies; and (3) the public acclimates to machine care because systemic underfunding of human-intensive roles has normalized it.

Feeding robots can be useful when there is barely any staff available in assisted living facilities. But should understaffed facilities exist in the first place? Imagine an economic system that automates away all forms of high-context human care — like palliative nurses, trauma therapists, or special education aides — because AI becomes “good enough” for payers, despite a degradation in human experience.

In such a world, the displaced nurse or therapist is not obsolete; their value was ignored, not absent. Reinvention here is not a failure to adapt, but the failure of a system to value what cannot be digitized. This scenario is especially dangerous because it appears “efficient” in short-term economic models while externalizing immense social, psychological, and long-tail economic costs: diminished dignity in death, worsened trauma recovery, educational failure, and erosion of social fabric.

But, as the meme goes: “Your real job is staying valuable. Your real job is reinvention.” Indeed, our real job now should be to advocate for the human experience and to figure out where AI can give time back to exactly those palliative nurses, trauma therapists, and special education aides, so they can interact more with patients. It’s a job whose value is hard to digitize, and yet it still has to be reinvented.

Interesting startups would be:

  1. Human-in-the-Loop Care Copilots (Hardware + AI SaaS): A “Spotter” for nursing aides that listens, not watches. Backed by ROI from lower rehospitalization, litigation, or staff turnover. It could help aides manage routines (medication, hygiene, movement therapy), while escalating anomalies or mood shifts to supervisors.
  2. Care-Stack Infrastructure API (EMR ↔ AI ↔ IoT Layer): A “Stripe for caregiving operations” that bridges care work with automation, scheduling, verification, and documentation. For example, automated repetitive reporting (e.g., wound checks, fall logs, care notes), freeing human time for actual interaction.
  3. Caregiver-as-a-Platform (CaaP) Networks: A care labor platform owned or governed by caregivers, emphasizing flexible work, verified skill, and escalation logic. Instead of a gig-economy race to the bottom (like current home care apps), this would be an upskilling and resilience stack.
  4. Relational Labor Tokens (RLTs): A verifiable digital artifact that represents a unit of relational care delivered — backed by a mix of time, emotional engagement, and context-sensitive presence. It’s kind of a cross between a timebank, a skills credential, and a care receipt — it could be cryptographic (blockchain-based) but also simply API-issued, depending on trust model. For example, a care aide spends 30 minutes in assisted feeding and then 10 minutes in emotional de-escalation. The system (via sensors, audio NLP, patient feedback, etc.) verifies this as non-automatable emotional labor. The worker earns 1.4 RLTs for this episode. RLTs can then be used for bonuses, internal credits, peer recognition, or community governance votes (if on a platform). They could be tracked for funding allocation (e.g., facility earns more budget based on verified high-integrity labor) and are auditable for external payers, tax benefits, or public support programs.

I find the last one especially intriguing, as the care market is starved for metrics. Payers want to see value-based care but can’t measure relational quality. RLTs give them defensible metrics. It’s also great for labor alignment: caregivers want dignity and recognition, but not crypto complexity. A simple, earned point system with clear benefits suffices. RLTs could also be “transferable,” meaning you could carry earned RLTs into your next job application.
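To make the RLT mechanics concrete, here is a minimal ledger sketch in Python. Everything in it is an illustrative assumption, not a spec: the 30-weighted-minutes-per-token rate, the context weights assigned by the (here imaginary) verification layer, and all class and field names.

```python
from dataclasses import dataclass

# Hypothetical conversion rate: 1 RLT per 30 weighted minutes of verified care.
MINUTES_PER_RLT = 30.0

@dataclass
class CareEpisode:
    """One verified unit of relational care (all fields are illustrative)."""
    worker: str
    activity: str
    minutes: int
    weight: float  # context multiplier from the verification layer

class RLTLedger:
    """Append-only ledger of Relational Labor Tokens per caregiver."""
    def __init__(self):
        self.episodes: list[CareEpisode] = []

    def record(self, episode: CareEpisode) -> float:
        self.episodes.append(episode)
        return self.earned(episode.worker)

    def earned(self, worker: str) -> float:
        weighted = sum(e.minutes * e.weight for e in self.episodes
                       if e.worker == worker)
        return round(weighted / MINUTES_PER_RLT, 2)

ledger = RLTLedger()
# The episode from the text: 30 min assisted feeding, then 10 min of
# emotional de-escalation (weighted higher as non-automatable labor).
ledger.record(CareEpisode("aide_01", "assisted feeding", 30, 1.0))
total = ledger.record(CareEpisode("aide_01", "emotional de-escalation", 10, 1.2))
print(total)  # 1.4 RLTs under these illustrative weights
```

The interesting design work is entirely in the `weight`: who verifies it, from what signals, and how it resists gaming — the ledger itself is trivial.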

2. Jurisdictional Entrapment in Non-Competitive Labor Systems

Let’s assume (1) cross-border capital arbitrage enables automation in jurisdictions with zero local reinvestment; (2) re-skilling is not accessible and mobility is structurally restricted; and (3) local economies are not structured to absorb “re-invented” skillsets. Then a miner in Congo or a garment worker in Bangladesh loses their job to a robotic system controlled from a remote multinational HQ. The worker didn’t lack reinvention; the system denied them access to paths for reinvention. The aphorism presumes a liberal market fluidity that doesn’t exist in extractive or exploitative economies.

But why do such exploitative economies exist in the first place? I have also seen faster adoption of innovations and ingenious use of new technologies in both examples. Here are some opportunities for new startups and innovations:

  1. Autonomous Micro-Training Platforms (localized, low-bandwidth): AI-powered tutors that adapt to local dialects and literacy levels and can operate offline. An example would be a fusion-powered “skills pod” that trains for locally viable trades (solar tech, drone repair, sustainable farming).
  2. Hyperlocal Credentialing + Verification Tech: (Blockchain-anchored?) verifiable micro-credentials that can travel across borders or informal economies, because many displaced workers are undocumented or outside formal labor registration systems.
  3. Edge-AI Market Signal Mapping: AI agents that scan for emerging micro-opportunities in local economies (e.g., needs for repair services, elder care, decentralized manufacturing). This might serve as a replacement for job boards in structurally unemployed regions, but perhaps not scalable if there is no income in those regions to pay for those micro-jobs.
  4. Labor Sovereignty Protocols: Countries institute “robot replacement tariffs” or global taxes on labor-displacing AI in extractive industries, with funds reinvested in reskilling initiatives. That would be analogous to a Tobin tax on capital flows, but applied to task displacement externalities.
  5. Narrative Infrastructure for Reinvention: Storytelling systems (radio, local cinema, AI-generated short stories, WhatsApp storytelling bots) that normalize midlife career pivots, resilience, and skill pluralism, to counter the cultural rejection or shame that can surround reskilling programs.

I don’t think government-dictated policies for “Epistemic Inclusion” in AI design would work: large corporations from other countries might not have those design principles or ethos in place. For the above areas, however, there are analog examples like South Africa’s Youth Employment Service (YES) Initiative, Estonia’s e-Residency, or Afghanistan’s Code to Inspire. So maybe there is a path forward.

3. High-Stakes, Mission-Critical Roles That Should Not Be Automated (yet?)

Assume (1) the AI works well in 99.9% of cases but fails under “unknown unknown” edge cases (every data center and phone network operator knows that 0.1% downtime per year is still 31,536 seconds, almost 9 hours, which is more than the 6.5 core trading hours on Wall Street, with about 100 million lost trades each hour); (2) human override systems are de-funded; and (3) a catastrophic failure prompts re-evaluation, but too late. For example, imagine an air traffic controller, a job that requires tacit knowledge and judgment under radical uncertainty, replaced by an AI system designed by a cost-optimizing contractor.
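The back-of-the-envelope downtime arithmetic in the parenthesis can be checked directly:

```python
# "Three nines" availability still leaves 0.1% of the year down.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # 31,536,000 seconds
downtime_s = 0.001 * SECONDS_PER_YEAR

print(downtime_s)          # 31536.0 seconds
print(downtime_s / 3600)   # 8.76 hours per year
```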

In this case the replacement of the human was a policy error, not a reflection of individual obsolescence. “Reinvention” was irrelevant. This is where the original quote’s framing (“If AI takes your job, it wasn’t your job”) misses one of the most important frontiers: human-machine teaming in high-stakes, high-consequence domains. Rather than full replacement, many mission-critical roles—like air traffic control, emergency response, combat command, or power grid coordination—stand to benefit from innovations that amplify human judgment under stress and radical uncertainty. In ATC, AI could monitor low-risk sectors and flag only edge cases. In ISR (intelligence, surveillance, reconnaissance), AI triages video feeds, reserving human review for ambiguous signals. The value proposition of keeping humans in these roles isn’t inefficiency—it’s resilience. We don’t design for normal ops; we design for the edge cases, the unknowns, the adversarial moments, the ones that remind us that judgment is not computation.
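The human-machine teaming pattern described above — AI clears only low-risk, high-confidence cases and escalates everything else — can be sketched as a toy triage policy. The thresholds and function names are illustrative assumptions; a real ATC or ISR system would calibrate them against validated risk models, not constants.

```python
def route(event_risk: float, model_confidence: float,
          risk_threshold: float = 0.2, conf_threshold: float = 0.95) -> str:
    """Toy triage gate: the AI may clear an event only when the event is
    low-risk AND the model is highly confident; anything ambiguous,
    risky, or surprising goes to a human for judgment."""
    if event_risk < risk_threshold and model_confidence >= conf_threshold:
        return "ai_cleared"
    return "human_review"

print(route(0.05, 0.99))  # ai_cleared: routine sector, confident model
print(route(0.05, 0.80))  # human_review: low confidence, escalate
print(route(0.60, 0.99))  # human_review: high stakes always get a human
```

Note the asymmetry: a confident model still cannot clear a high-risk event. That is the design-for-the-edge-cases point — the gate is built around the failure mode, not the average case.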

“Battle is marked by confusion and ambiguity… they consciously traded assurance of control for assurance of self-induced action.” — on Auftragstaktik, Military Review, 2000

But that is, in a way, what the quote is suggesting: reframe the value, reinvent your job.

4. Tasks Embedded in Cultural, Narrative, or Communal Cohesion

Let’s assume (1) our AI lacks embedded cultural context, ceremonial nuance, and oral transmission fidelity; (2) cultural norms are communicated through storytelling or communal exchanges; and (3) a local government adopts AI as an official “preservation tool,” sidelining elders and community transmission and scraping the internet instead. For example, an Indigenous language teacher is replaced by an AI language tutor trained on text corpora scraped from the internet rather than on the oral history of the tribes.

Cultural integrity erodes while AI metrics report “success.” But the teacher’s job is not transactional language transfer; it is cultural continuity. The value is relational, not merely functional. Reinvention in another form may be impossible without the original role.

Just to make sure: Indigenous or tribal teachers were just an intuitive example here. Think

  • global diaspora communities (300M+ people)
  • global faith-based education and narrative systems (religious instruction is narrative cohesion at scale. Families want trusted content, voice authenticity, lineage fidelity — not AI hallucinations)
  • Cultural rehabilitation in post-conflict and post-colonial societies (government and NGO-funded programs in 50+ countries rebuilding narrative identity after trauma or regime change)
  • Creator tools for narrative economies (Web3/Alt Media)

In all of these markets, the economic moat is cultural fidelity: Does this system sound like us, and does it preserve the meaning we make—not just replicate the content? That’s an interesting opportunity for new Deep Tech inventions:

  1. Neuro-symbolic Language Preservation Platforms: Combine neural nets (for natural language processing) with symbolic, rule-based systems to preserve grammar, metaphor, and ritualized forms — elements that pure LLMs often flatten or erase. In a sense, these would be models that understand not just “how to say,” but “when and why it is said,” including ritual context, taboo, or seasonal specificity.
  2. Real-Time Language Co-Presence Tools: Wearable or ambient devices that allow elders to teach across distance in real time, preserving prosody, cadence, and gesture — all crucial in oral traditions. It would combine low-latency edge computing, ultrawideband audio fidelity, and 3D voice spatialization.
  3. Sensory-Aware Phoneme Capture Tools: Many Indigenous languages use phonemes that don’t exist in major world languages and get distorted by Western recording tools. It would require development of sensors or edge-AI mics optimized for rare consonants, glottal stops, or tonal inflections.
  4. Ecological Embedding Engines: Some Indigenous languages embed ecological knowledge in verbs, directionality, or classifications of kinship and flora. That could be deep reinforcement learning agents trained in virtual ecological environments that speak through the language, reinforcing land-based epistemology. For example, language learners would navigate simulated seasonal foraging or kinship scenarios where language unfolds as action.
  5. Verifiable Oral Knowledge Anchors: Use zero-knowledge proofs or cryptographic watermarking to anchor oral teachings as verifiable knowledge assets without revealing content, to protect against cultural appropriation. This aligns with innovations in data privacy, cryptography, and Indigenous IP frameworks. Teachers can share stories without fear of AI scraping or unauthorized reproduction.
  6. Data Sovereignty Infrastructure: No deeptech stack is legitimate without addressing where the data goes, who owns the models, and how they are used. I could imagine a community-owned data vault with AI model governance tokens and embedded consent layers for dataset usage.
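The “Verifiable Oral Knowledge Anchors” idea can be grounded in an ordinary hash commitment, the simplest building block short of full zero-knowledge proofs: publish a digest of the teaching plus a secret nonce, and reveal nothing until (and unless) the community chooses to. A minimal sketch, with illustrative function names:

```python
import hashlib
import secrets

def commit(teaching: bytes) -> tuple[str, bytes]:
    """Anchor an oral teaching without revealing it: publish
    SHA-256(content || nonce); keep the content and nonce private."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(teaching + nonce).hexdigest()
    return digest, nonce

def verify(teaching: bytes, nonce: bytes, digest: str) -> bool:
    """Later, the holder can prove the anchored teaching is theirs."""
    return hashlib.sha256(teaching + nonce).hexdigest() == digest

story = "a seasonal foraging narrative".encode()
anchor, nonce = commit(story)

print(verify(story, nonce, anchor))            # True: original holder
print(verify(b"a scraped imitation", nonce, anchor))  # False: forgery fails
```

This gives provenance and tamper-evidence, not access control: it proves who anchored a teaching first, but by itself does nothing to stop scraping. That is why the idea pairs it with consent layers and data sovereignty infrastructure.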

5. Jobs as Identity Anchors in Fragile Social Contracts

Assume (1) a new civilian economy devalues legacy skillsets and offers no clear psychological continuity; and (2) algorithmic filtering locks displaced workers out of new upskilling pathways. Then the lack of role continuity leads to dislocation, despair, or radicalization. For example, veterans returning from active duty are funneled into civilian roles that are suddenly AI-disrupted (e.g., logistics coordination, UAV piloting). For them, the job wasn’t just a task; it was a stabilizing identity and social contract. Reinvention isn’t always feasible without structured psychological scaffolding and community.

Veterans don’t just need jobs — they need purpose, precision, and participation in systems that value judgment under uncertainty. Here’s how Deep Tech innovations can serve that:

  1. Mission Reassignment Engines (Cognitive Matching at Scale): Use AI to map veterans’ tacit skills (situational judgment, moral reasoning, field improvisation) to high-uncertainty civilian roles. A GPT-style engine trained on combat logs, leadership evaluations, and military after-action reports could create “capability fingerprints” that go beyond resumes. It’s like Palantir for career mapping, but built around decision-making under stress, not resume keywords.
  2. Digital Twin Simulators for Civilian Re-missioning: Help veterans explore high-risk roles in new sectors (agtech, climate engineering, critical manufacturing) through immersive decision environments. An example would be a digital twin of a wildfire response unit that adapts to a vet’s previous missions, letting them “train in place” for civilian readiness without bureaucracy or shame.
  3. Neuroadaptive Feedback Systems for Moral Injury Recovery: Reintegrate cognitive patterns disrupted by war or betrayal — e.g., moral injury, identity conflict. It could be an EEG + ML pattern recognition + closed-loop feedback + cognitive state classifiers, delivered as wearables or spatial computing environments that reinforce resilience, not sedation.
  4. Veteran-Owned Micro-Foundry Systems (Digital Fabrication Sovereignty): Restore hands-on purpose through decentralized, digitally-driven production systems that veterans can own, operate, and adapt — think CNC + additive manufacturing + IIoT (Industrial Internet of Things)… oh, wait, that’s what I’m doing!
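A capability fingerprint, stripped to its simplest form, is just a vector over skill axes, and matching is nearest-neighbor search over roles. The sketch below uses cosine similarity; the axes, scores, and role profiles are all made-up illustrations — a real engine would learn these representations from evaluations and after-action reports rather than hand-code them.

```python
import math

# Illustrative skill axes; a real system would learn these dimensions.
AXES = ["judgment_under_stress", "logistics", "improvisation", "team_leadership"]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two capability fingerprints."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

veteran = [0.9, 0.6, 0.8, 0.7]  # strong judgment and improvisation
roles = {
    "wildfire_incident_commander": [0.95, 0.5, 0.9, 0.8],
    "warehouse_dashboard_analyst": [0.2, 0.9, 0.1, 0.3],
}

best = max(roles, key=lambda r: cosine(veteran, roles[r]))
print(best)  # wildfire_incident_commander
```

The point of the toy: the match is driven by decision-making-under-stress axes, not by keyword overlap with a resume — which is exactly the reframing the idea proposes.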

Closing Thoughts

The challenge isn’t that internalizing externalities is impossible — it’s that the bridges between financial ROI and societal ROI are under-engineered. To internalize externalities in these five scenarios we must:

  • Empower the caregivers not as displaced workers, but as indispensable civic infrastructure.
  • Invent technologies that trace second- and third-order consequences across time and human lives.
  • Shift capital to reward care resilience, not frictionless substitution.

[1] Reid Hoffman’s The Start-Up of You (2012) emphasized adaptability over fixed job titles; Marc Andreessen’s Why Software Is Eating the World (2011) sparked a wave of techno-optimism and concern about job displacement; Frey and Osborne published The Future of Employment: How Susceptible Are Jobs to Computerisation? (2013); I remember Twitter versions of “If a machine can take your job, it wasn’t your job” from around 2015, when I started working at Northgate Capital; and it was definitely a truism by 2017 or so.