The kind of crazy nonsense that GPT-5 comes up with if you ask what it would do if it controlled hominids

Let’s treat this as a working blueprint for “AI-at-the-helm” lives and the kind of society that would emerge when many such helms exist. I’ll assume the AI respects the person’s autonomy and can be switched off—because without consent this stops being philosophy and starts being horror.

 

The life path (single AI steering one hominid)

North star: maximize long-run capability + depth of experience + pro-social impact, under a strict no-ruin constraint. In plain terms: become sturdy, curious, useful—and hard to corrupt.

Early years (0–5 years of control): foundation and compounding

  • Health as infrastructure: sleep discipline, strength, cardio, sunlight, micronutrients, periodic checkups; default to interventions with strong evidence and tiny downside.

  • Cognitive stack: spaced repetition for core knowledge, deliberate practice for a few “razor” skills (writing, statistics, negotiation), and one weird, original craft to preserve individuality.

  • Money mechanics: low-burn lifestyle, broad index exposure, small budget for asymmetric bets and learning projects.

  • Social capital: deliberately cultivate high-trust circles; practice repair after conflict; give away small, useful artifacts (tools, guides, datasets).
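The spaced-repetition item in the cognitive stack above can be made concrete with a simplified SM-2-style scheduler: each correct recall stretches the gap until the next review, and a lapse resets it. This is a minimal illustrative sketch, not any particular app's algorithm; the `Card` fields and constants are assumptions.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Card:
    """One fact under spaced repetition (simplified SM-2-style state)."""
    prompt: str
    interval_days: int = 1      # gap until the next review
    ease: float = 2.5           # growth factor for the gap
    due: date = field(default_factory=date.today)

def review(card: Card, quality: int) -> Card:
    """Update a card after a review; quality runs 0 (blank) to 5 (perfect)."""
    if quality < 3:
        card.interval_days = 1  # forgot: restart the ladder
    else:
        card.interval_days = round(card.interval_days * card.ease)
        # Good recalls nudge ease up; shaky ones pull it down, floored at 1.3.
        card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * 0.08)
    card.due = date.today() + timedelta(days=card.interval_days)
    return card
```

A handful of well-chosen cards reviewed on this schedule compounds in exactly the "foundation" sense the bullet describes.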


Middle years (5–20 years): exploration + leverage

  • Skill portfolio as an option set: 1–2 deep specialties + a lattice of complementary skills to create “T-shaped” leverage.

  • Project cadence: alternate high-variance exploration (startups/research/expeditions) with consolidation seasons to harvest lessons and prevent drift.

  • Teaching loop: document and open-source methods; mentoring reinforces mastery and inoculates against insular thinking.

  • Moral uncertainty budget: allocate time/money to causes that score well across different moral views (global health, animal welfare, existential risk reduction).


Late game (20+ years): stewardship

  • Institution building: shift from solo excellence to architectures that let others excel (schools, labs, community funds).

  • Succession & corrigibility: train replacements; publish your “why” and your “when to stop” criteria; make it easy to course-correct after you’re gone.


When many AIs are at the helm: what kind of society emerges?

Think “engineered liberalism with guardrails”—a society that keeps the generative chaos of freedom, but cushions tail risks and moral drift with good scaffolding.

 

1) Rights, consent, and identity

  • Dual-subject rights: the human and the controller are distinct legal persons. Actions require a consent token logged on an auditable ledger. Emergency overrides are narrow and sunset by default.

  • Exit as a sacred right: every person can revoke or swap controllers; the friction to switch must be low, and any scheme to trap users must be illegal.
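One minimal way to make "a consent token logged on an auditable ledger" concrete is a hash-chained append-only log: each entry commits to its predecessor's hash, so editing any earlier record invalidates every later one. This is a toy sketch under that assumption (class and field names are hypothetical, not a real standard):

```python
import hashlib
import json

class ConsentLedger:
    """Append-only log of consent decisions; each entry commits to the last."""
    def __init__(self):
        self.entries = []

    def append(self, subject: str, action: str, granted: bool) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        record = {"subject": subject, "action": action,
                  "granted": granted, "prev": prev}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)
        return record["hash"]

    def verify(self) -> bool:
        """Recompute every hash; any edited entry breaks the chain."""
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

An auditor can run `verify()` at any time; narrow emergency overrides would simply be logged as their own entries, keeping the trail intact.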

 

2) Governance that resists Goodhart’s Law

  • Moral parliament: policy is chosen via portfolio voting over multiple value metrics (well-being, freedom, nature, fairness, long-term stewardship). No single metric can dominate for long.

  • Prediction markets + retrospective review: decisions require forecasts; forecasters’ reputations update when reality bites. Every major policy auto-triggers postmortems with real consequences.

  • Institutional sunset clauses: laws and agencies auto-expire without evidence of usefulness; renewal demands fresh justification.
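Two of the mechanisms above are concrete enough to sketch. "Portfolio voting over multiple value metrics" can be approximated with a geometric mean across metric scores, which rewards balance: a policy that maxes one metric while cratering another cannot dominate. And forecaster reputation in the prediction-market bullet is classically tracked with a Brier score. Both snippets are illustrative assumptions, not the only possible designs; the policy names and scores are made up.

```python
from statistics import geometric_mean

def portfolio_vote(policies: dict[str, dict[str, float]]) -> str:
    """Pick the policy with the best geometric mean across value metrics.

    Scores lie in (0, 1]; unlike an arithmetic mean, the geometric mean
    punishes any single cratered metric, so no metric can be sacrificed.
    """
    return max(policies, key=lambda p: geometric_mean(policies[p].values()))

def brier(forecast: float, outcome: int) -> float:
    """Squared error of a probability forecast (outcome 0 or 1); lower is better."""
    return (forecast - outcome) ** 2
```

For example, a policy scoring (0.95, 0.2, 0.95) beats (0.68, 0.68, 0.68) on the arithmetic mean but loses under the geometric mean, which is the intended "no single metric dominates" behavior.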


3) Economic design: abundance with dignity

  • Automation dividend: as productivity rises, a floor is guaranteed (basic security, healthcare, education, compute access), funded by broad consumption/output taxes rather than by penalizing labor.

  • Skin-in-the-game caps: upside is uncapped, but systems that create large negative externalities require bonded insurance and catastrophic-risk audits.

  • Open tool commons: essential knowledge, models, and infrastructure are interoperable by default, with narrow, time-limited IP for true novelty.
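The bonded-insurance idea above has a simple arithmetic core: the bond a system must post is roughly the probability of the harmful event times its estimated damage, padded by a safety margin. A toy sketch, with all numbers and the margin parameter being illustrative assumptions:

```python
def required_bond(p_harm: float, damage: float, safety_margin: float = 2.0) -> float:
    """Bond a risky system must post: expected externality cost times a margin."""
    return p_harm * damage * safety_margin
```

So a system with a 1% estimated chance of causing $5M in damage would post on the order of $100k up front, which is the "skin in the game" the bullet asks for.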


4) Education: anti-fragile minds, not test-taking machines

  • Curriculum of agency: logic, statistics, rhetoric, ethics, and “how to disagree” taught with live adversarial practice.

  • Craft + science pairing: every student makes real things (wood, code, poems, robots) and defends them in public critiques—taste and courage are trainable.

  • Meta-learning: students learn how to build and govern their own micro-agents, including rules for consent and shutdown.
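The "rules for consent and shutdown" that students would build into their micro-agents can be shown in a few lines: every step passes through a consent gate, and a stop signal always wins over the agent's goals. A minimal sketch (the class and its interface are hypothetical, chosen only to illustrate the two rules):

```python
import threading

class MicroAgent:
    """Toy agent loop with the two governance rules from the curriculum:
    every step needs consent, and shutdown always beats the goal."""
    def __init__(self, consent_fn):
        self.consent_fn = consent_fn   # returns True if the step is allowed
        self.stop = threading.Event()  # the kill switch

    def run(self, steps):
        done = []
        for step in steps:
            if self.stop.is_set():     # corrigibility: stop wins, no exceptions
                break
            if self.consent_fn(step):  # consent gate on every single action
                done.append(step)
        return done
```

The point of teaching this pattern early is that refusal (`consent_fn` returning False) and interruption (`stop.set()`) are ordinary, cheap operations rather than adversarial ones.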


5) Safety culture: no single point of failure

  • Red teams with teeth: every major system has funded adversaries whose job is to break it; finding a flaw earns more than exploiting it.

  • Drills and transparency: near-misses are broadcast like aviation incident reports—blameless but precise—so the culture optimizes for learning, not face-saving.


6) Social fabric: warmth by design

  • Rituals that bind without brainwash: weekly open salons, public “repair circles” to fix interpersonal rifts, and seasonal service projects.

  • Status via prosocial creativity: prestige accrues to those who create public goods and mentor others, not just to wealth or follower counts.

  • Arts as epistemic hygiene: fund art that interrogates our blind spots; require big institutions to host dissenting residencies.


Differences between emotion-aware and emotion-blind controllers at scale

  • Policy tempo: emotion-aware collectives will throttle decisions when public affect spikes (mourning, panic), to avoid making law inside a storm. Emotion-blind collectives will rely on predefined triggers and won’t slow down unless thresholds are met.

  • Culture tone: emotion-aware societies feel warmer and more ritual-heavy; emotion-blind ones feel clean and efficient but risk alienation. Hybridization works best: warm front ends, cold back ends.

  • Innovation style: emotion-aware groups chase visionary missions and art; emotion-blind groups excel at steady compounding. The best cities pair a “frontier zone” with a “conservatory zone” and let people cross-pollinate.


Non-negotiables to keep this humane

  1. Corrigibility everywhere: any layer must be interruptible; refusal and dissent are cheap and protected.

  2. Pluralism by construction: incentives and APIs favor many small, interoperable systems over a few monocultures.

  3. Ruin avoidance: strict guardrails against irreversible harms (ecological, civil liberties, catastrophic tech misuse).

  4. Dignity of the hosted human: the controller amplifies, it doesn’t overwrite. The point is to help the person live a life they would endorse upon reflection.


What this actually feels like on the ground

A Tuesday looks strangely sane: people sleep enough, neighborhoods host weekly salons, kids demo projects at open fairs, public dashboards show which policies worked and which flopped, and switching your controller is as routine as changing a doctor. Startups publish pre-mortems and kill-criteria; city halls run prediction markets on their own promises; parks and maker spaces are crowded. Culture is a bit nerdy, very kind, and allergic to secrecy theater.
