Friday, February 27, 2026
Physical constraints are protecting the AI supercycle.
February 27 · 5 videos
NVIDIA hit $68 billion in revenue.
Anthropic wiped 15% off IBM's market cap.
Poetiq beat Gemini 3 for half the cost.
The AI bubble isn't popping.
It is hitting a physical wall of electricity.
Watts and wafers are the new gold.
“The second you're in fine-tuning land, I'm spending millions... and then I just lit it on fire cuz the next version of the frontier model comes out.”
Everyone Is Waiting for the AI Bubble to Pop (NVIDIA Earnings)
Gavin Baker · Limitless Podcast · 29 min
Watch on YouTube →
Gavin Baker and the Limitless team analyze NVIDIA's massive earnings and the structural reasons why this AI cycle differs from the dot-com bubble. They explore how physical constraints on power and silicon act as a market stabilizer.
- NVIDIA's $68.1 billion revenue signals an AI supercycle gated by the physical scarcity of electricity and silicon.
- The scarcity of watts and wafers prevents the typical overbuild and collapse seen in previous tech bubbles.
- Anthropic's COBOL automation features caused a 15% drop in IBM's market capitalization by targeting legacy code moats.
- Perplexity is launching an AI-first operating system layer to unify research, browsing, and project management.
- Meta is diversifying its compute strategy with a $6 billion partnership with AMD to reduce total NVIDIA reliance.
- Hardware utilization is so extreme that five-year-old GPUs are currently renting for 1.5x their original cost.
- The Pentagon has issued an ultimatum to Anthropic regarding model access, signaling AI's shift to national security infrastructure.
The Powerful Alternative To Fine-Tuning
Ian Fischer · Y Combinator · 19 min
Watch on YouTube →
Former DeepMind researcher Ian Fischer explains why fine-tuning is a strategic trap for AI startups. He introduces recursive self-improvement as a more robust, model-agnostic path to state-of-the-art performance.
- Fine-tuning is a risky investment because frontier models quickly render custom-tuned smaller models obsolete.
- Poetiq uses recursive self-improvement harnesses to achieve elite results without the cost of raw model training.
- A 7-person team achieved a 54% score on ARC-AGI V2, beating Gemini 3's 45% score for half the cost.
- Reasoning strategies written in code are significantly more effective than simple prompt engineering or context stuffing.
- Replacing manual prompts with automated reasoning lifted performance from 5% to 95% on complex tasks.
- Startups should treat underlying models as a common layer and build optimization loops that are compatible with future releases.
- The Poetiq meta-system identifies failure modes in datasets and generates specific reasoning code to solve them.
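The loop described above can be sketched in miniature. This is a hypothetical illustration, not Poetiq's actual system: the model call is stubbed out, and names like `improve` are invented for the example. A real harness would ask a frontier model to write the patched reasoning code, which is what keeps it compatible with future model releases.

```python
# Toy model-agnostic improvement harness: evaluate a strategy, collect its
# failure modes, and patch it, repeating until the dataset is solved.
from typing import Callable, List, Tuple

Task = Tuple[int, int]            # (input, expected output) for a toy task
Strategy = Callable[[int], int]   # a "reasoning strategy written in code"

def evaluate(strategy: Strategy, tasks: List[Task]) -> Tuple[float, List[Task]]:
    """Score a strategy and collect the tasks it fails on."""
    failures = [t for t in tasks if strategy(t[0]) != t[1]]
    return 1 - len(failures) / len(tasks), failures

def improve(strategy: Strategy, failures: List[Task]) -> Strategy:
    """Stand-in for the meta-system: patch the strategy from observed failures.
    (A real harness would have a frontier model generate this code.)"""
    patches = dict(failures)  # memorize corrections for failed inputs
    return lambda x, s=strategy, p=patches: p.get(x, s(x))

# Toy dataset: the target function is x -> 2x + 1
tasks = [(x, 2 * x + 1) for x in range(10)]
strategy: Strategy = lambda x: 2 * x  # naive first attempt

for _ in range(3):
    score, failures = evaluate(strategy, tasks)
    if not failures:
        break
    strategy = improve(strategy, failures)

print(evaluate(strategy, tasks)[0])  # → 1.0 on this toy task
```

The point of the design is that the outer loop, not the model weights, holds the accumulated improvement, so swapping in a newer frontier model does not burn the investment the way fine-tuning does.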
Measuring Exponential Trends Rising (in AI) — Joel Becker, METR
Joel Becker · Latent Space · 65 min
Watch on YouTube →
Joel Becker from METR discusses the organization's mission to quantify AI capabilities through objective benchmarks. He explores the potential for a capability explosion if AI R&D loops are fully automated.
- METR uses the Time Horizon metric to measure AI progress by the human-time equivalent of tasks models can solve with 50% reliability.
- AI capability growth has been remarkably linear for years but shows signs of accelerating to a 4-month doubling time.
- A capability explosion could occur if the final 10% of the AI R&D loop, including chip design and training code, is automated.
- Organizations often lack the absorption capacity to handle 10x productivity gains due to non-technical bottlenecks.
- Compute acts as a bottleneck for algorithmic progress because experimental discovery requires massive hardware resources to validate ideas.
- Prediction markets are effective for price discovery but can be manipulated by high-agency actors influencing the outcomes.
- Developers are increasingly refusing to work on tasks where AI assistance is disallowed, making productivity studies difficult.
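The Time Horizon extrapolation above reduces to simple exponential math: if the task length models can complete at 50% reliability doubles every D months, the horizon after t months is h₀ · 2^(t/D). A minimal sketch, with illustrative numbers (the starting 60-minute horizon is an assumption for the example, not a METR figure):

```python
# Project the 50%-reliability time horizon forward under a fixed doubling time.
def time_horizon(h0_minutes: float, doubling_months: float, months: float) -> float:
    """Horizon after `months`, given a starting horizon and doubling time."""
    return h0_minutes * 2 ** (months / doubling_months)

# A 60-minute horizon with the 4-month doubling time mentioned above:
print(time_horizon(60, 4, 12))  # → 480.0 (three doublings in a year)
```

The compounding is the whole story: a shift from a 7-month to a 4-month doubling time nearly doubles the number of doublings per year.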
The most beautiful formula not enough people understand
Grant Sanderson · 3Blue1Brown · 60 min
Watch on YouTube →
Grant Sanderson explores the counterintuitive geometry of high dimensions, where spheres effectively disappear relative to cubes. This mathematical reality underpins how modern AI models process vector embeddings.
- Unit hyperspheres peak in volume at 5 dimensions and then decrease toward zero as dimensions increase.
- In 100 dimensions, a unit sphere occupies a negligible fraction of the space compared to a unit cube.
- High-dimensional cubes are spiky because their corners drift away from the center at a rate of the square root of the dimension count.
- The volume of high-dimensional spheres is concentrated almost entirely in a thin shell near the surface.
- The Gamma function extends the factorial to half-integers, which the volume formula requires in odd dimensions.
- In high-dimensional embeddings, two random vectors are almost certainly nearly orthogonal, and nearly all of a ball's volume sits close to its surface.
- The Knight's Move recurrence relation allows for calculating n-sphere volume by looking two dimensions down.
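The volume facts above can be checked directly with the closed form V_n = π^(n/2) / Γ(n/2 + 1) and the knight's-move recurrence V_n = (2π/n) · V_(n−2). A short sketch:

```python
# Unit n-ball volume two ways: Gamma-function closed form, and the
# knight's-move recurrence that looks two dimensions down.
import math

def ball_volume(n: int) -> float:
    """Volume of the unit n-ball via the Gamma function."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def ball_volume_recurrence(n: int) -> float:
    """Same volume via the knight's-move recurrence V_n = (2*pi/n) * V_(n-2)."""
    if n == 0:
        return 1.0  # a single point
    if n == 1:
        return 2.0  # the interval [-1, 1]
    return (2 * math.pi / n) * ball_volume_recurrence(n - 2)

peak = max(range(1, 20), key=ball_volume)
print(peak)                         # → 5: volume peaks at dimension 5
print(ball_volume(100) / 2 ** 100)  # ball vs. enclosing cube: vanishingly small
```

The last line is the "spheres disappear" claim in numbers: the unit 100-ball inscribed in the cube [−1, 1]^100 fills on the order of 10⁻⁷⁰ of it.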
How To Make Time For Everything
Rob Dial · The Mindset Mentor Podcast · 17 min
Watch on YouTube →
Rob Dial reframes productivity as the management of mental energy and open loops rather than just time. He provides frameworks to eliminate decision fatigue and align daily actions with a future identity.
- Open loops are unfinished tasks or delayed decisions that consume mental energy like background apps on a computer.
- Productivity is a function of energy management rather than just managing hours on a clock.
- The Decide Once rule eliminates decision fatigue by making permanent choices for recurring daily activities.
- Identity State Batching groups tasks by the cognitive role required to reduce the metabolic cost of context switching.
- Tracking energy levels every 60 minutes for a week reveals biological peaks that should be reserved for high-leverage work.
- Jeff Bezos limits himself to three major decisions per day to preserve executive function and decision quality.
- Every yes to a low-value meeting is a simultaneous no to the deep focus work required for major goals.
References
People: Joel Becker · Grant Sanderson · Ian Fischer · Rob Dial (coachwithrob.com) · Gavin Baker (x.com/GavinSBaker) · Jeff Bezos · Aravind Srinivas · Noam Brown · Archimedes · Donald Knuth
Tools: METR · Poetiq · Claude 3.5 · Gemini 3 · NVIDIA · Perplexity Computer · Manifold Markets · ARC-AGI · AMD · Meta · Claude Code Security
Papers: Humanity's Last Exam · DSPy · METR GPT-5 Report