Here is the argument in its starkest form: decisions are the most scalable output a human can produce, and AI is about to make them dramatically better — or dramatically worse.
A single CEO decision redirects billions in capital. A professor’s choice of research direction shapes a field for decades. A state leader’s policy moves millions of lives. These outputs have always been high-leverage — but they’ve been bottlenecked by the cognitive bandwidth of the humans making them. The decision-maker could only read so many reports, talk to so many advisors, process so much information.
AI removes that bandwidth constraint. Models can now read every filing, synthesize every expert call, simulate every scenario.1 The cost of processing a million tokens of context dropped from $36 in March 2023 to under $0.40 today — a 99% decline in under three years.2 Intelligence, as measured by raw analytical horsepower, is becoming a commodity. DeepSeek matched GPT-4 performance at 90% cost reduction.3 Anthropic hit $14B ARR not by being smarter, but by being more useful.4
This creates a paradox. If everyone has access to the same cognitive horsepower, what determines who makes better decisions?
Decision Quality = f(IQ, Taste, Problem Context, Available Data)
The thesis: as IQ commoditizes, the remaining three components — taste, context, and data — become infinitely valuable because they multiply through the leverage of decisions. A 1% improvement in a $10B capital allocation decision is worth $100M. That’s not metaphorical. Organizations with high decision excellence show a $10 billion TSR difference between top and bottom performers among the largest 1,000 companies.6
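The multiplication can be made concrete with a toy model. Everything below is illustrative: the function and the two stake sizes are assumptions for demonstration, not figures from the cited sources.

```python
def decision_value(stake_usd: float, quality_gain: float) -> float:
    """Toy model: the dollar value of a marginal improvement in
    decision quality scales linearly with the capital at stake."""
    return stake_usd * quality_gain

# The same 1% quality improvement at two very different leverage levels:
ceo_decision = decision_value(10_000_000_000, 0.01)  # $10B capital allocation
small_decision = decision_value(50_000, 0.01)        # a routine tooling choice

print(f"CEO decision:   ${ceo_decision:,.0f}")    # $100,000,000
print(f"Small decision: ${small_decision:,.0f}")  # $500
```

The point of the sketch is that `quality_gain` is identical in both calls; only the leverage differs, and it differs by five orders of magnitude.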
This isn’t a technology thesis. It’s an economics thesis about what becomes scarce when intelligence becomes abundant.
Intelligence is the first component to commoditize, and it’s happening faster than anyone predicted. GPT-4’s launch in March 2023 priced frontier intelligence at $36 per million tokens. By August 2024, GPT-4o offered equivalent capability at $4/M tokens. By late 2025, open-weight models like Llama 3.2 matched GPT-3’s benchmark scores at $0.06/M tokens — roughly 1,000× cheaper than GPT-3’s original $60/M launch price.2 a16z calls this “LLMflation” — a deflationary force declining at 10× annually, faster than Moore’s Law ever did.11
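A 10×-per-year decline is a simple exponential, worth writing down explicitly. The function below is a sketch under a16z's stated rate, not a price forecast; the starting price is the document's own $36/M figure.

```python
def projected_price(p0: float, years: float, annual_decline: float = 10.0) -> float:
    """Price per million tokens after `years`, assuming a constant
    `annual_decline`-fold drop per year (the 'LLMflation' rate)."""
    return p0 / (annual_decline ** years)

# Starting from GPT-4's March 2023 launch price of $36/M tokens:
for y in range(4):
    print(f"year {y}: ${projected_price(36.0, y):.4f}/M tokens")
```

At this rate the curve passes under $0.40/M between year two and year three, consistent with the pricing trajectory cited above.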
The implication is simple: raw analytical horsepower is no longer a differentiator. Every startup, every corporation, every government can access the same IQ for pennies. @esrtweet, a legendary programmer, captured the identity crisis this creates: “Turns out I was always a system designer.”12 When the machine can code as well as you can, you discover your real value was never the code — it was the judgment about what to code. 2,592 people liked that tweet. It struck a nerve because it’s true.
Taste is the anti-commodity. It cannot be scraped, crowdsourced, or fine-tuned into a model. It is the product of decades of accumulated experience, mistakes, cultural exposure, and values — inputs that resist clean data capture.5 Where intelligence expands the possibility space (more options, more analysis, more speed), taste narrows it: the ability to say “no” faster, to distinguish the merely interesting from the genuinely valuable, to know what “good” means before the system takes action.13
The zeitgeist has arrived at this conclusion independently. @QwQiao, writing about raising children in a post-AGI world, put it plainly: “Agency and taste are the only things that matter in the post-AGI era.”14 781 people agreed. This wasn’t a tech thought leader; it was a parent making real decisions about their kids’ future and concluding that the only durable asset is the ability to decide well.
@Madisonkanna, an engineer whose post drew 9,296 likes, mourned the identity loss: “Who am I now?” after AI eclipsed her craft.15 The answer is in the question. You are your taste, your judgment, your accumulated context about what matters. Everything else is now a commodity. Top performers don’t maximize AI output — they filter aggressively, discarding most drafts and gaining their edge by saying “no” faster.5 Editorial authority is becoming strategic leverage.
Context is the information that makes a decision yours. It includes your relationships, your constraints, your history, the politics of your organization, the promises you’ve made, the trust you’ve built. No amount of Perplexity queries or Deep Research runs can surface this. It’s not on the internet. It’s not in any database. It’s in your head, your calendar, your DMs, your gut.
This is why personal agents become the decisive infrastructure. An AI copilot that has access to your full private context — your CRM, your meeting transcripts, your financial situation, your relationship dynamics — can provide decision support that a generic model cannot. The Personal Agent thesis argued that personal agents are identity technology, not tool technology, heading toward universal adoption by 2033–2037.17 This thesis explains why: because context is the missing ingredient in every decision, and the personal agent is the container for context.
99% of the world’s data is “dark data” trapped in proprietary systems, never searchable.18 The context that matters most for decisions is the darkest of all: private, interpersonal, political, emotional. The agent that captures this context creates an asymmetric decision advantage that compounds over time.
Of the four components, data is the only one you can actively increase. IQ is converging. Taste is built slowly over decades. Context is inherently fixed to your situation. But data — market intelligence, expert perspectives, behavioral signals, industry knowledge — can be expanded by accessing more of it, faster, from more sources.
This is the vector the Decision Data Marketplace thesis explored on the supply side: expert networks ($3.8B), decision intelligence ($18B), data brokerage ($303B) are all markets that exist to improve decisions by expanding available data.9819 What changes now is the medium: instead of scheduling a $500/hour GLG call with a former CFO, your agent queries a network of other agents and synthesizes 50 perspectives in 30 seconds for $2.
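The pattern just described, fanning one question out to many agents and synthesizing the replies, can be sketched in a few lines. The peer agents below are local stubs standing in for remote services, and `synthesize` is a placeholder for a model call; both names are assumptions for illustration.

```python
import concurrent.futures

# Hypothetical peer agents; a real network would reach remote services.
PEERS = [lambda q, i=i: f"expert {i}: view on {q!r}" for i in range(50)]

def query_peers(question: str) -> list[str]:
    """Fan the question out to every peer agent in parallel."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
        return list(pool.map(lambda agent: agent(question), PEERS))

def synthesize(perspectives: list[str]) -> str:
    """Stub synthesis step; a real system would hand these to a model."""
    return f"synthesis of {len(perspectives)} perspectives"

print(synthesize(query_peers("What should a first-time CFO watch for?")))
```

The economics in the paragraph above come from exactly this shape: the marginal cost of one more perspective is a parallel query, not another scheduled phone call.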
@BetterCallMedhi described this pattern live, from Shenzhen: “The intelligence is distributed across RELATIONSHIPS” — not nodes, not databases. An injection mold guy proactively modified tooling after seeing 100 founders iterate similar thermal designs. A 30× learning rate per dollar, because the knowledge network operates as a collective intelligence organism.20 2,140 people liked this because it describes how real decisions get made: not by reading more reports, but by accessing more people.
As models improve and costs drop, AI agents graduate through a predictable hierarchy. Each rung unlocks a higher leverage multiplier — and demands richer inputs.121
| Rung | Agent Role | Human Analog | Leverage | Key Input | Real Example |
|---|---|---|---|---|---|
| 1 | Task executor | Intern / VA | 10× | Instructions | Schedule meetings, draft emails, data entry |
| 2 | Knowledge worker | Junior analyst | 50× | Domain knowledge | Cursor ($1B ARR) — junior dev productivity4 |
| 3 | Process manager | Manager | 200× | Org context + judgment | Sierra ($100M ARR) — customer service orchestration22 |
| 4 | Strategic advisor | Consultant / Advisor | 1,000× | Industry intel + relationships | Palantir AIP ($4.5B rev) — enterprise decision platform10 |
| 5 | Decision partner | Cofounder / Mentor | 10,000× | All above + private context + taste alignment | No product has achieved this yet |
The key insight: at rungs 1–2, public knowledge suffices. At rungs 4–5, the bottleneck is information the internet doesn’t have. The jump from rung 3 to rung 4 is where AI stops being a productivity tool and starts being a decision tool. And the economics change entirely.
At rung 2, a coding agent saves a developer 2 hours. Value: ~$200. At rung 4, a strategic advisory agent helps a CEO avoid a bad acquisition. Value: potentially billions. Organizations with high decision excellence spend 43% less time in unproductive meetings, are 25% more focused on crucial matters, and 28% more likely to make data-informed decisions.6
Today, most commercial AI products sit at rungs 1–3. Agentic AI adoption has hit 35% in just two years, with another 44% planning deployment.21 But organizations are adopting faster than they’re developing strategic frameworks to manage these systems.21 The race to rung 4–5 is where the real value creation happens — and where the incumbents (McKinsey, BCG, GLG) have the most to lose.
Naval identified four forms of leverage: labor, capital, media, and code.23 AI agents are a fifth: cognitive leverage — amplifying the quality and scale of decisions rather than the speed of execution. In an age of infinite leverage, “judgment becomes the decisive skill.”24 Warren Buffett exemplifies this: his demonstrated judgment and credibility command infinite resources because his decisions compound across massive assets. The person with leverage + judgment wins non-linearly. AI gives everyone infinite leverage. Judgment becomes the only variable left.
72% of CEOs now serve as their organization’s primary AI decision maker — double the share from the year prior.25 Half believe their jobs depend on getting AI right.25 Companies plan to double AI spending in 2026, rising from 0.8% to ~1.7% of revenues.25
| Dimension | Current State | AI-Augmented State |
|---|---|---|
| Decision support infrastructure | McKinsey ($16B rev), board advisors, internal strategy teams, expert networks ($500–1,350/hr per call)26 | AI copilot with full context: financials, board dynamics, competitive intel, synthesized in real-time |
| Leverage multiplier | One CEO decision moves $1B–100B in market cap6 | Same leverage, but 10× more informed decisions per unit time |
| What AI replaces | 80% of slide-making, research synthesis, competitive analysis, scenario modeling | — |
| What AI can’t replace | Board dynamics, stakeholder trust, organizational culture, taste for timing | — |
| Market disrupted | Strategy consulting ($100B+ for MBB + Big 4 advisory), expert networks ($3.8B)79 | — |
For CEOs, decision speed matters enormously. The “70% Rule”: decide with 70% information, then course-correct — because delayed decisions lose momentum, confuse teams, and create competitive vulnerability.27 An AI copilot that gets a CEO from 40% to 70% information quality in minutes instead of weeks is worth more than any McKinsey engagement. Alex Karp (Palantir CEO) calls this “commodity cognition” — scaling operational leverage through AI.10
Entrepreneurs make more decisions per day than any other persona, with less support infrastructure. No strategy team. No advisory board. Often no co-founder. Every decision — which market, which customer, which feature, which hire — is made with incomplete information under time pressure.
| Dimension | Current State | AI-Augmented State |
|---|---|---|
| Decision support | Founder network, Y Combinator office hours, Twitter/X discourse, gut instinct | Personal agent with full business context: metrics, customer conversations, market signals, competitor moves |
| Leverage multiplier | Every decision shapes company trajectory; early decisions compound exponentially | Faster iteration = more decisions per cycle = faster convergence on product-market fit |
| What AI replaces | Market research, competitive analysis, financial modeling, customer segmentation | — |
| What AI can’t replace | Vision, founder-market fit, conviction under uncertainty, the ability to inspire people | — |
| Market disrupted | Accelerators, fractional CFO/COO, early-stage advisory ($5.4B policy advisory)28 | — |
This is the persona where the personal agent thesis lands first. 10–50K people already run personal agents (Donna, OpenClaw runners, Claude-powered workflows).17 Entrepreneurs are the natural early adopters because they have the highest decision density and the thinnest support infrastructure. The research engine powering this very report is the prototype: synthesizing available information to improve a founder’s decision quality.
A professor’s key decisions — which research direction to pursue, which students to mentor, which grants to chase, which papers to publish — shape entire fields for decades. The leverage multiplier operates on a longer timescale but is potentially unbounded: one research direction decision created the entire field of deep learning (Hinton, 2006), the entire CRISPR revolution (Doudna, 2012), the entire transformer architecture (Google Brain, 2017).
| Dimension | Current State | AI-Augmented State |
|---|---|---|
| Decision support | Literature review (months), peer review, conference conversations, postdoc researchers | AI research assistants (IRIS, ResearStudio, TIB AIssistant)29 for hypothesis generation, literature synthesis, experiment design |
| Leverage multiplier | One research direction shapes a field; one mentorship shapes a career | 10× more hypotheses tested per year; faster identification of blind spots |
| What AI replaces | Literature survey (90%), routine data analysis, manuscript drafting, citation management | — |
| What AI can’t replace | Research taste (what questions are interesting), mentorship judgment, intellectual courage, cross-disciplinary intuition | — |
| Market disrupted | Academic publishing ($26B), research assistants, literature review services | — |
The academic decision function is uniquely taste-heavy. In a world of AI-generated hypotheses and automated experiments, the professor’s role shifts entirely to curation: which questions are worth asking, which results are actually surprising, which directions have paradigm-shifting potential. IRIS already uses Monte Carlo Tree Search for hypothesis generation with human-in-the-loop validation.29 ResearStudio achieved state-of-the-art results on the GAIA benchmark, surpassing OpenAI’s Deep Research.29 The tools are arriving. The taste to wield them is not.
A state leader’s decisions affect millions of lives. Immigration policy, healthcare allocation, defense posture, economic regulation — each decision multiplied across an entire population. The decision quality function here is the same, but the context variable is massive: geopolitical relationships, public sentiment, institutional constraints, political capital.
| Dimension | Current State | AI-Augmented State |
|---|---|---|
| Decision support | Think tanks ($5.4B globally), RAND ($1.4B from US govt alone), policy advisors, intelligence briefings2830 | AI policy simulation, real-time public sentiment synthesis, scenario modeling at population scale |
| Leverage multiplier | One policy affects millions; one budget allocation redirects billions | Faster policy iteration, more scenarios tested, better anticipation of second-order effects |
| What AI replaces | Policy research, constituent analysis, briefing preparation, regulatory impact modeling | — |
| What AI can’t replace | Political judgment, moral reasoning, democratic legitimacy, the ability to build coalitions | — |
| Market disrupted | Think tanks, policy advisory, government consulting | — |
The US, Canada, and Australia all issued major AI policy frameworks in 2025 directing government agencies to accelerate AI adoption.313233 Canada is already using AI to triage 7+ million immigration applications.32 But Palantir’s AIP is the real proof point: $1.86B in US government revenue (55% Y/Y growth), a potential $10B Army contract, and a “Rule of 40” score of 127%.10 Government decision support is already a multi-billion-dollar market. AI makes it 10× bigger by making it 10× cheaper.
Every one of these personas currently pays incumbents enormous sums for worse decision support than AI can provide. The question is what gets disrupted vs. what gets amplified.
The global management consulting market hit $492B in 2025, growing at 5.6% CAGR.7 McKinsey alone generated $16B in revenue.34 But the Wall Street Journal ran the headline that matters: “AI Is Coming for the Consultants. Inside McKinsey, ‘This Is Existential.’”35
The existential part: AI can now do in minutes what junior consultants spend weeks on — and 40% of McKinsey’s revenue comes from advising on AI and technology.35 McKinsey launched “Lilli,” an internal AI chatbot synthesizing 100,000+ documents of the firm’s intellectual property. Over 70% of the firm’s 45,000 employees use it ~17 times weekly.36 BCG built “Deckster” to automate PowerPoint creation.36
But here’s the nuance: consulting won’t die. The layer that gets disrupted is the analytical layer. The layer that gets amplified is the relationship and judgment layer.
GLG claims 1M+ experts, charging $500–1,350/hour for phone consultations.2637 AlphaSense acquired Tegus for $930M to build an AI-powered knowledge layer over 240,000+ proprietary expert transcripts.38 Enquire AI matches experts in 35 minutes using AI (vs. days for GLG).39
The disruption here is on price and speed, not on the fundamental proposition. Executives will always want outside perspectives. But paying $1,000/hour for a 45-minute call when an AI can synthesize the equivalent insight from transcripts, public filings, and industry data for $5 is not a sustainable model. The expert network market reprices from premium service to commodity infrastructure.
The think tank sector is already under pressure. Only one in three organizations expect sector growth in the next 12 months.40 Funding is concentrated: $1.4B of $1.49B in US government think tank funding goes to RAND alone.30 The policy advisory market ($5.4B, 10.2% CAGR) is the one most likely to be transformed rather than disrupted — because government decision-making demands democratic accountability and institutional legitimacy that AI cannot provide.28
If Decision Quality = f(IQ, Taste, Context, Data), then each component creates its own market dynamics as IQ commoditizes and the other three appreciate.
| Component | Commodity / Scarce | Market Dynamic | Who Captures Value | Scale |
|---|---|---|---|---|
| IQ | Commodity | Token prices falling 10×/yr.11 Race to zero. Open-weight models destroy pricing power. | Infrastructure layer (cloud, chips). Not the model labs long-term. | $14B (Anthropic ARR) but compressing |
| Taste | Scarce | Cannot be automated, trained, or scaled. Built over decades. Increases in value as AI output increases. | Individuals — the CEO, the editor, the professor, the designer. This is the human moat. | Priceless (embedded in salaries, equity, reputation) |
| Context | Scarce | Private, interpersonal, political. 99% is “dark data.”18 The personal agent is the container. | Personal agent platforms — whoever builds the context layer (Donna, future Anthropic/OpenAI agents). | Embedded in $5B+ enterprise AI agent market41 |
| Data | Expandable | The only variable that can be actively increased. Expert networks, prediction markets, decision intelligence platforms. | Aggregators — Palantir ($4.5B rev), AlphaSense ($4B val), Perplexity ($20B val)103842 | $303B (data brokerage) + $18B (decision intelligence)198 |
What makes this thesis different from generic “data is the new oil” claims: the value of each component is multiplied by the leverage of the decision it feeds into. A 5% improvement in data quality for a barista’s latte choice is worth nothing. A 5% improvement in data quality for a CEO’s acquisition decision is worth tens of millions. The same component has wildly different value depending on the leverage multiplier of the decision it supports. This is why Palantir trades at 70× revenue — the market is pricing the leverage, not the software.10
This doesn’t exist yet, but it will. When AI can generate 100 competent marketing campaigns, 50 viable product strategies, 30 plausible research directions — the person who can rank them by quality, filter out the mediocre, and identify the one that’s actually brilliant has created massive value. The early signals visible so far are all supply-side plays. The demand side is every decision-maker in the world. The gap: nobody has figured out how to price and deliver taste at scale. The person who does builds the most valuable company of the AI age.
| Risk | Probability | Impact | Mitigation |
|---|---|---|---|
| AI develops genuine taste/judgment within 5 years | 15–25% | Thesis-killing | Monitor frontier model benchmarks on subjective evaluation tasks. If models consistently outperform human curators, the game changes. |
| Consulting firms successfully integrate AI (McKinsey Lilli model) | 60–70% | Reduces disruption magnitude | The analytical layer still gets commoditized even if firms survive. New entrants capture the repriced market. |
| Personal agent adoption stalls below critical mass | 30–40% | Delays context/data marketplace | The decision leverage thesis holds independently of personal agent adoption. Palantir doesn’t need personal agents to grow. |
| Regulation limits AI in high-stakes decision-making | 40–50% | Slows but doesn’t stop | EU AI Act already classifies some AI decision support as high-risk. Creates compliance market, not a death sentence. |
The thesis rests on one load-bearing assumption: that taste and judgment are fundamentally different from intelligence, not just a more complex form of it. If it turns out that taste is just IQ applied to aesthetic domains — pattern matching all the way down — then taste commoditizes too, just on a delayed curve. We believe this is wrong (taste involves values, identity, and agency that transcend computation), but intellectual honesty requires naming the assumption.
This thesis isn’t detached from practice. It’s the macro frame that unifies everything Eric is already building.
| Eric’s Asset | Thesis Component | Role in the Stack |
|---|---|---|
| Donna / PCRM | Context (private layer) | The personal agent that captures private context — relationships, decisions, meeting transcripts, financial state. This IS the context container for the decision quality function. |
| Research Engine (this report) | Data (expandable) | The prototype for AI-augmented decision support. Every /deepmarketresearch run is the thesis in action: expanding available data to improve a decision. |
| Personal Agent Thesis | Context (infrastructure) | The adoption curve argument: personal agents become identity infrastructure, creating the container for private context at scale. ~2027–2029 Hotmail moment.17 |
| Decision Data Marketplace | Data (marketplace) | The supply-side argument: how knowledge flows between agents to expand available data. Right thesis, wrong timing — build as protocol feature, not standalone marketplace.17 |
| Agentic Backend Thesis | IQ (infrastructure) | The architecture that makes all of this work: while-loop + Claude API + tools. $0.05–0.30/turn. The plumbing. |
| claw.degree | Taste (evaluation) | The quality layer: grading agents on decision-relevant dimensions (Instructions, Safety, Consistency, Honesty, Memory, Autonomy). Could evolve into decision quality scoring. |
| @ericsanio | Taste (distribution) | The public voice for the taste/agency/decision thesis. Writing is the taste proof-of-work. Every tweet demonstrates the judgment the thesis argues is scarce. |
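The “while-loop + Claude API + tools” backend named in the table reduces to one control structure. The sketch below stubs everything out so it runs standalone: `call_model` stands in for a real Claude API call, and the single `lookup_metric` tool is hypothetical. Only the loop shape, call the model, run the tool it requests, feed the result back, stop on a final answer, is the pattern being described.

```python
from typing import Callable

# Hypothetical tool registry; a real backend would expose many tools.
TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_metric": lambda arg: f"value of {arg}: 42",
}

def call_model(transcript: list[str]) -> dict:
    """Stub standing in for a Claude API call. A real implementation
    would send the transcript to the model and parse its response."""
    if not any(t.startswith("tool:") for t in transcript):
        return {"type": "tool_use", "name": "lookup_metric", "arg": "ARR"}
    return {"type": "final", "text": "Answer based on " + transcript[-1]}

def agent_loop(user_msg: str, max_turns: int = 5) -> str:
    """The while-loop: alternate model calls and tool executions
    until the model produces a final answer or turns run out."""
    transcript = [f"user: {user_msg}"]
    for _ in range(max_turns):
        reply = call_model(transcript)
        if reply["type"] == "final":
            return reply["text"]
        result = TOOLS[reply["name"]](reply["arg"])
        transcript.append(f"tool: {result}")
    return "max turns exceeded"

print(agent_loop("What is our ARR?"))
```

The per-turn cost range in the table ($0.05–0.30) is the cost of each `call_model` iteration in a real deployment; the loop itself is the cheap part.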
The thesis is correct. Decisions are the highest-leverage output humans produce, and AI is about to make the market for improving them unrecognizably large.
The economics are clear. Intelligence is commoditizing at 10× per year — from $36/M tokens to $0.40 in under three years.2 When everyone has the same IQ, the remaining decision inputs — taste, context, and data — capture all the marginal value. And that value is multiplied by the leverage of the decision: a CEO moves billions, a professor shapes a field, a state leader affects millions.6
The $500B+ market for decision support (consulting, expert networks, think tanks, decision intelligence) is already being repriced. McKinsey calls it existential.35 Palantir is growing 56% annually selling “commodity cognition” to governments and enterprises.10 72% of CEOs now own AI decision-making directly — they’re not delegating this to IT.25
The zeitgeist confirms it. Engineers are mourning their identity as execution gets commoditized.15 Founders are discovering that code is no longer the bottleneck — judgment is.16 Parents are concluding that agency and taste are the only durable assets for their children.14 These aren’t coordinated talking points. They’re independent signals from a culture that’s arriving at the same conclusion: in a world of infinite intelligence, the scarce resources are what you decide, why you decide it, and what private knowledge you bring to the decision.
The strongest counter-argument — that AI eventually develops genuine taste and judgment, not just more sophisticated pattern matching — is real but unproven and likely 5–10 years out. In the meantime, the window is open. The person who builds the infrastructure for decision quality — context containers (personal agents), data expansion (knowledge exchanges), taste measurement (evaluation layers) — captures the defining market of the AI age.
Eric is positioned on the right side of this. Donna is the context container. The research engine is the data expander. claw.degree is the taste measurer. @ericsanio is the proof-of-work. The pieces connect. The thesis is not a prediction — it’s a description of what’s already happening.
One-sentence version: “As intelligence becomes free, the only scarce inputs to the most leveraged human output — decisions — are taste, private context, and expanded data; whoever controls those inputs controls the value.”