Stop the “Innovation Theater”! A Case For Infrastructure Memory And Manufacturing Depth

“Innovation Theater” refers to a pattern common in the U.S. tech-industrial ecosystem:

  • Short-term pilots, hackathons, incubators, and flashy demos receive attention and funding…
  • …but there is no institutional commitment to long-term capability, manufacturability, or sovereign control.

Symptoms include:

  • Endless SBIR Phase I awards with no path to deployment
  • Defense primes acquiring promising startups, then mothballing them
  • Pilot programs with no scaling budget or sustainment plan
  • Startups funded for “agile software” that can’t be integrated into hardened systems
  • Celebration of innovation without infrastructure, logistics, or lifecycle thinking

<cynicism> Oh, the irony of that last point: after endless LinkedIn posts about how this or that startup is disrupting the industry, or how different this or that technology is, there inevitably appears the one sudden unicorn that no one saw coming and that does everything differently… including all the things that were hailed as so innovative, different, necessary, groundbreaking, and awesome before. And then comes a slew of new posts about how this or that early-stage startup is just like that new unicorn, and very likely the next unicorn. The joys of venture capital and its success stories based on a series of one.</cynicism>

U.S. agencies fund activity, not trajectory. There’s no infrastructure memory, only demos.

Infrastructure Memory

Infrastructure memory refers to the institutional, technical, and cultural capacity to design, produce, evolve, and sustain complex systems over time, even across generations, crises, or wars.

Examples:

  • Japan’s automotive factories that can retool within weeks for new vehicle classes
  • Israel’s Rafael carrying forward generations of directed-energy expertise from core teams
  • France’s nuclear industry retaining end-to-end enrichment, fuel fabrication, and disposal systems

Infrastructure memory preserves flexibility by having:

  • Deep tacit knowledge (operators, machinists, systems engineers)
  • Tooling and testbeds that don’t need to be rebuilt from scratch
  • Supplier networks that are not just “available,” but integrated and trusted

Manufacturing Depth

Manufacturing depth is not just “making stuff”: it means the ability to produce high-consequence technologies at scale, with reliability, traceability, and agility under adverse conditions.

It includes:

  • Process control: yield, tolerance, QA/QC
  • Material science integration: from raw materials to finished systems
  • Scale readiness: not just 10 prototypes, but 10,000 units/year
  • Sustainability: ability to maintain and upgrade systems across decades

Building a hypersonic glide body is “innovation.” Building a foundry and test loop that can iterate, verify, and scale that body at yield is depth.

Startups: Two Real-Life Examples

Case A: A startup develops a next-gen sensor using novel MEMS + AI. It wins a DIU pilot and builds 10 units. But no U.S. fab can manufacture the sensor at scale. Packaging relies on foreign supply chains. Radiation hardening isn’t verified. The DoD loves the idea but can’t field it. Within 18 months, the startup is acquired and shuttered. This is innovation theater: not ill-intended, but structurally demanded by traditional venture capital metrics and by board members and managers with quarterly KPIs.

Case B: A nation invests in a national MEMS + rad-hard foundry. It supports open-access design and cross-service application. The startup works with this foundry from day one. Within 24 months, 10k units are in the field. This is infrastructure memory + manufacturing depth.

Tension: Necessary Intellectual Discourse

These two scenarios raise a fundamental question: how do we preserve the disruptive edge of scientific innovation while also embedding it into resilient, infrastructure-aware systems?

My concerns reflect the classic challenge of path dependency: once infrastructure is built (e.g., a national MEMS + rad-hard foundry), there’s a high probability (>95%) that startups will begin designing to fit the infrastructure, not to push it.

  Resilience (Infrastructure Memory)      | Disruption (Scientific Frontier)
  ----------------------------------------|-------------------------------------------
  Standardizes interfaces and materials   | Explores new physics, substrates
  Enables scale, maintenance, logistics   | Requires rethinking tools and process
  Optimizes for known production          | Prioritizes exploration and non-linearity
  Reduces risk, but also optionality      | Increases failure, but also breakthroughs

COUNTERFACTUAL SCENARIOS

Let’s construct some thoughtful counterfactuals where more brittle, less “ready” innovation turns out to be strategically decisive, despite violating the logic of “design for infrastructure.”

Counterfactual: Bell Labs Rejects Foundry Fit

Let’s assume that in 1947, the transistor design team at Bell Labs is required to conform to RCA’s vacuum tube production line constraints. As a result, the junction transistor never gets prototyped in its pure form. The team iterates toward “compatible” upgrades of the tube, not the quantum leap of solid-state electronics. And integrated circuits are delayed by a decade.

Lesson: Infrastructure fit can kill upstream revolution if there is no “off-ladder” escape path for physics-first innovation.

Counterfactual: Cold Atom Accelerometers Fail Procurement Reviews

Let’s imagine a startup building cold-atom-based inertial navigation systems is forced to fit DoD optical and thermal packaging standards from day one. Quantum performance is compromised to meet volume and packaging constraints. The startup iterates on ruggedization instead of discovering coherence gains. And China races ahead with cold-atom INS for GPS-denied navigation.

Lesson: Imposing MRL too early strangles precision-frontier technologies that need unconstrained lab-to-field transition time.

Counterfactual: Software-Defined Radar Is Rejected for Hardware Complexity

Let’s assume that in 2002, a small team at MIT Lincoln Lab proposes software-defined radar, but cannot get a foundry that supports rapid RF path-switching on chip. The idea is shelved until 2018. China’s CETC uses GaN-based open-loop designs to scale adaptive radar on the battlefield. And the U.S. needs to spend $10B retrofitting legacy phased arrays.

Lesson: Overfitting to existing foundry capabilities can delay paradigm shifts in sensing and electromagnetic warfare.

DESIGN PRINCIPLE: Bimodal Infrastructure Strategy

Ironically, we already have a framework for doing better, drawn from non-defense, non-resilience innovation and business: thinking fast and slow. Resilience in national manufacturing infrastructure should not come at the cost of discovery. Standardized foundries, while critical for deployment, can unintentionally stifle breakthrough science if all startups must design to fit existing capabilities. Instead, the U.S. must implement a dual-track model: resilient production lines coexisting with “disobedient foundries” that challenge today’s assumptions.

Scientific sovereignty demands infrastructure that both scales and surprises: A one-track model leads to path dependency. A two-track model creates feedback loops between discovery and deployment.

1. Mainstream Foundries

  • Standardized, resilient, interoperable, high-volume.
  • Used by 70–80% of dual-use startups for reliable scaling.

2. Pathfinding Labs (e.g. “Disobedient Foundries”)

  • Subscale but unconstrained: allow new materials, devices, or processes.
  • Tolerate >80% failure rates.
  • Feed “breakthroughs” into future infrastructure blueprints.
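
The portfolio logic of this split can be sketched as a toy Monte Carlo model. The payoff values, success rates, and 75/25 allocation below are illustrative assumptions (loosely echoing the 70–80% mainstream share and >80% pathfinding failure tolerance above), not empirical data:

```python
import random

random.seed(42)  # reproducible toy simulation

def mainstream_payoff():
    # Assumed: 90% chance of a reliable, incremental 1.2x return
    return 1.2 if random.random() < 0.9 else 0.0

def pathfinding_payoff():
    # Assumed: 85% failure, 15% chance of a 20x breakthrough
    return 20.0 if random.random() < 0.15 else 0.0

def portfolio_mean(n_projects, pathfinding_share, trials=10_000):
    """Average total return of a portfolio over many simulated runs."""
    n_path = int(n_projects * pathfinding_share)
    n_main = n_projects - n_path
    totals = []
    for _ in range(trials):
        total = sum(mainstream_payoff() for _ in range(n_main))
        total += sum(pathfinding_payoff() for _ in range(n_path))
        totals.append(total)
    return sum(totals) / trials

one_track = portfolio_mean(100, 0.0)    # everything on mainstream foundries
two_track = portfolio_mean(100, 0.25)   # 25% routed to pathfinding labs
print(f"one-track mean return: {one_track:.1f}")
print(f"two-track mean return: {two_track:.1f}")
```

Under these assumed fat-tailed payoffs, the mixed portfolio outperforms the pure mainstream one in expectation. The specific numbers are arbitrary; the point is that a modest pathfinding allocation buys optionality that a single-track model forgoes.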

Japan’s MITI (now METI) used this approach in its “Sunrise Industries” program (1970s), where radical ideas were tested outside main supply chains before integration. DARPA’s ERI (Electronics Resurgence Initiative) and JUMP centers echo this dual-mode logic.


Policy Implications

We are already pretty good at innovation, moving fast, and breaking things. Perhaps we actually have a few too many “Disobedient Foundries”. To shift from theater to depth, the U.S. must:

  • Incentivize infrastructure alongside innovation (e.g., fab capacity, test ranges, logistics backplanes)
  • Retain technical staff through competitive long-term roles (e.g., within FFRDCs or special industrial corps)
  • Mandate manufacturability and scale-readiness in early-stage programs (beyond TRL)
  • Fund reference architectures (like Israel’s MOD “reference programs”) for long-term iteration
  • Internalize total system cost over time, not just prototype cost today

However, any such policy must include:

  • Rotating experimentation zones (DARPA-style “safe-to-fail” budgets)
  • Exceptions for radical science (moonshot pipelines without infrastructure constraint)
  • Dual-track procurement (e.g., “field-ready” vs. “frontier-challenging”)
  • Outcome-based review (actual mission impact, not just process conformity)

The reason is second-level risks: unintended consequences of the wrong incentives and of human nature. Here is a quick first overview:

1. Incentivize infrastructure alongside innovation: Second-Level Risks

  • Path dependency: Once capital-intensive infrastructure is built, future projects are steered toward legacy compatibility, limiting the emergence of fundamentally new methods (e.g., quantum photonics vs. CMOS).
  • Infrastructure inertia: Investments in fabrication or test facilities often become politically protected assets, even after their technical obsolescence (e.g., DOE labs repurposed indefinitely).
  • Perverse incentives: Innovators may optimize grant applications for alignment with infrastructure, not for problem-solution fit.

E.g. the highway system helped logistics but hollowed out rail. Infrastructure creates lock-in by default.

2. Retain technical staff through long-term roles: Second-Level Risks

  • Talent sequestration: Pulling top engineers into government-backed long-term roles may reduce the availability of founders, startup CTOs, or industrial trainers.
  • Institutional stasis: Over time, long-horizon roles can disincentivize risk-taking or exploratory thinking, especially when not coupled with rotational or sabbatical policies.
  • Opportunity hoarding: Legacy staff may dominate influence in allocation decisions, creating barriers for newcomers, outsiders, or unconventional thinkers.

E.g. DARPA rotates PMs every 3–5 years to avoid “empire-building.” A fixed industrial corps risks the opposite.

3. Mandate manufacturability and scale-readiness early: Second-Level Risks

  • Premature convergence: Forcing early-stage science to conform to manufacturing constraints suppresses radical, pathbreaking approaches.
  • Over-standardization: Startups design to be acceptable, not exceptional, hampering novel substrates, materials, or post-silicon logic.
  • Penalizing frontier science: High-impact but non-obvious innovations (e.g., topological qubits, neuromorphic chips) may not appear “scale-ready” but could become strategically decisive if nurtured.

E.g. if DARPA had required manufacturability from the start, the internet protocol stack might never have emerged.

4. Fund reference architectures for long-term iteration: Second-Level Risks

  • Architectural monoculture: Over-reliance on reference systems can suppress architectural diversity, leading to brittle systems vulnerable to adversary exploitation.
  • Procurement lock-in: Once primes align to reference specs, alternatives become politically or bureaucratically unviable, even if technically superior.
  • Innovation suppression: Upstream startups may become implementers of the reference model, not challengers of its assumptions.

E.g. the JSF (F-35) attempted a “unified architecture” and ended up as the costliest defense program ever, due in part to universal-fit constraints.

5. Internalize total system cost over time: Second-Level Risks

  • Budget sclerosis: Multi-decade cost modeling may penalize risky early projects that look expensive over time, even if their upside is asymmetric.
  • Innovation disincentive: Investors may avoid hardware or defense startups entirely if the bar for “lifecycle cost proof” becomes too bureaucratic or slow.
  • Reduced iteration speed: Fear of long-term cost overhang may cause program officers to favor incremental upgrades over clean-sheet designs.