Thursday, February 5, 2026
From Orbital Compute to Formal Verification: Solving the Physical and Logical Bottlenecks of AI Scaling
The Big Picture
- Orbital compute in 36 months — Elon Musk predicts AI data centers will shift to space to bypass terrestrial power constraints and leverage 5x solar efficiency via Starship's high-cadence reusability.
- Interpretability as model design — Myra Deng and Mark Bissell argue that surgical edits to internal representations outperform the low-bandwidth feedback of RLHF, enabling real-time steering of 1-trillion-parameter models.
- Formal verification for hardware — Carina Hong demonstrates that AI reasoning engines using the Lean language can solve Putnam-level math and reduce 3-year hardware verification cycles to automated processes.
- 4D simulation vs. video — Dave Baszucki posits that 3D spatial reality is the ground truth for AI training, and traditional video calls are merely downsampled versions of immersive communication.
- Security is a consent game — Fireship warns that tools like Metasploit and Hashcat make weak hashing (MD5) a catastrophic liability, requiring a security-first mindset for all developers.
- Cockroach Consistency — Rob Dial defines success as the speed of recovery after failure, using micro-wins to rebuild self-trust and identity through the compound interest of effort.
The Deeper Picture
The current AI trajectory is hitting a dual wall: the physical limits of Earth's power grid and the logical limits of probabilistic reasoning. In Elon Musk – "In 36 months, the cheapest place to put AI will be space", we see a pivot toward orbital data centers as a regulatory and energy play, where Starship enables the deployment of gigawatts of compute powered by 24/7 space-based solar. This physical scaling is mirrored by Dave Baszucki in Introducing 4D Creation Open Beta and the Future of Gaming with Roblox CEO Dave Baszucki, who argues that the next generation of AI must be trained on 3D vector data rather than 2D video to achieve true physical intuition and agency.
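The "24/7 space-based solar" advantage can be sanity-checked with a back-of-envelope calculation. The figures below (solar constant, capacity factors) are rough public numbers chosen for illustration, not values from Musk's talk:

```python
# Back-of-envelope: average power per m^2 of solar panel, orbit vs. ground.
# Inputs are rough public figures, not numbers quoted in the video.

SOLAR_CONSTANT = 1361   # W/m^2 above the atmosphere
GROUND_PEAK = 1000      # W/m^2 at the surface, clear sky, noon
ORBIT_DUTY = 0.99       # sunlit fraction for a well-chosen orbit (no night)
GROUND_DUTY = 0.20      # typical capacity factor for terrestrial solar

orbital_avg = SOLAR_CONSTANT * ORBIT_DUTY   # ~1347 W/m^2 time-averaged
ground_avg = GROUND_PEAK * GROUND_DUTY      # ~200 W/m^2 time-averaged

advantage = orbital_avg / ground_avg
print(f"Orbital advantage: ~{advantage:.1f}x")  # Orbital advantage: ~6.7x
```

Depending on the assumed ground capacity factor, the ratio lands in the 5–7x range, which is the ballpark of the "5x solar efficiency" claim.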
Simultaneously, the industry is moving from black-box training to intentional design. Goodfire AI’s Bet: Interpretability as the Next Frontier of Model Design introduces Activation Steering as a surgical alternative to the 'slurping supervision through a straw' inefficiency of RLHF. This shift toward deterministic control is echoed in This 24-Year-Old Built an AI That Can Pass the Hardest Math Tests, where Carina Hong utilizes the Lean formal language to ground LLM outputs, solving the 'handwaving' problem in complex proofs and hardware verification.
On the human level, the focus shifts to the neurobiology of performance and the psychology of persistence. Ido Portal in Essentials: The Science & Practice of Movement connects physical 'wordlessness' and Panoramic Vision to cognitive flexibility, while Rob Dial frames consistency as the speed of recovery after failure rather than an unbroken streak. These human frameworks become essential as the technical landscape grows more volatile, requiring developers to adopt the security-first mindset Fireship details to protect increasingly powerful systems.
Where Videos Converge
The Verification Bottleneck
This 24-Year-Old Built an AI That Can Pass the Hardest Math Tests · Goodfire AI’s Bet: Interpretability as the Next Frontier of Model Design · Elon Musk – "In 36 months, the cheapest place to put AI will be space"
All three videos identify verification—whether of mathematical proofs, internal model safety, or hardware design—as the primary hurdle to scaling. Axiom uses formal languages, Goodfire uses mechanistic interpretability, and Musk uses physics-based 'truth-seeking' to move beyond the limitations of human-labeled RLHF.
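The formal-verification approach can be illustrated with a toy Lean 4 proof. This is just the flavor of machine-checked reasoning, not Axiom's actual code: the kernel rejects any "handwaving," so every step must reduce to axioms or previously proven lemmas.

```lean
-- A toy machine-checked proof in Lean 4. `Nat.add_comm` is a lemma from
-- Lean's standard library; the kernel verifies the proof term mechanically.
theorem sum_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```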
3D/Vector Data as Ground Truth
Introducing 4D Creation Open Beta and the Future of Gaming with Roblox CEO Dave Baszucki · Elon Musk – "In 36 months, the cheapest place to put AI will be space"
Roblox and Tesla/xAI are both pivoting away from 2D video training toward 3D spatial data. Baszucki views video as 'downsampled' reality, while Musk emphasizes that humanoid robots (Optimus) require real-world 'self-play' data to master physical interactions.
Key Tensions
AI Alignment Strategy
Elon Musk
Alignment should be achieved through 'truth-seeking' and physics-based verification to avoid 'programmed lying' or political correctness.
Myra Deng
Alignment requires mechanistic interpretability and activation steering to surgically remove internal biases and detect hallucinations via uncertainty signals.
Resolution: Musk focuses on the 'moral constitution' of the model's training objective, while Goodfire focuses on the 'engineering visibility' of the model's internal state. Both agree that current RLHF is insufficient.
Video Breakdowns
7 videos analyzed
Goodfire AI’s Bet: Interpretability as the Next Frontier of Model Design
Latent Space · Myra Deng, Mark Bissell · 68 min
Goodfire AI is moving interpretability from post-hoc research to a core engineering workflow. By using Sparse Autoencoders to identify and steer internal features, they enable surgical edits to model behavior, bypassing the inefficiencies of standard RLHF.
Logical Flow
- Problem: RLHF is 'slurping supervision through a straw'
- Solution: Mechanistic interpretability as intentional design
- Technical enabler: Sparse Autoencoders (SAEs) for feature decomposition
- Production case: Rakuten PII detection and multilingual steering
- Scaling: Real-time steering of 1-trillion-parameter Kimi K2
- Future: Interpretability for scientific discovery in genomics
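The steering step above can be sketched in a few lines: add a scaled "feature direction" to a hidden activation at inference time. The vectors here are made up for illustration; in Goodfire's actual pipeline the directions come from trained Sparse Autoencoder features on production models.

```python
# Toy activation steering: nudge one token's hidden-state vector along a
# "feature direction" (e.g. a hypothetical SAE politeness feature) at
# inference time, without any retraining. All vectors are illustrative.

def steer(activation, direction, coefficient):
    """Return activation + coefficient * direction, element-wise."""
    return [a + coefficient * d for a, d in zip(activation, direction)]

hidden_state = [0.5, -1.2, 0.3, 0.9]      # one residual-stream slice (made up)
politeness_dir = [0.1, 0.0, -0.2, 0.4]    # hypothetical SAE feature direction

steered = steer(hidden_state, politeness_dir, coefficient=2.0)
print([round(x, 6) for x in steered])  # [0.7, -1.2, -0.1, 1.7]
```

The key design point is that this is an edit to "the water flowing through the pipes" (the activations), not "the pipes" (the weights), so it can be applied and removed per-request.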
Key Quotes
"Post-training and RLHF... it's like slurping supervision through a straw."
"Are you modifying the pipes or are you modifying the water flowing through the pipes?"
"Models internally have some awareness of uncertainty that leads to hallucinatory behavior."
Contrarian Corner
From: Introducing 4D Creation Open Beta and the Future of Gaming with Roblox CEO Dave Baszucki
The Insight
Video conferencing is a 'downsampled' version of reality that will be superseded by 4D spatial simulation.
Why Counterintuitive
Most people view Zoom and video calls as the pinnacle of remote communication, but Baszucki argues they are 'legacy analog modes' that lose critical spatial and acoustic data.
So What
When building remote collaboration tools or training AI, prioritize 3D vector data and spatial acoustics over 2D video streams to achieve higher fidelity and human-like intuition.
Action Items
Implement BCrypt and Salt for all password storage
Fireship demonstrates that dictionary attacks can crack weak MD5 hashes in seconds using the 14M password 'rockyou' list.
First step: Audit your current database schema and replace any MD5 or plain-text password fields with BCrypt hashing.
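Fireship's point is easy to demonstrate: an unsalted MD5 hash falls to a wordlist lookup instantly, while a salted, deliberately slow key-derivation function does not. This sketch uses only Python's standard library, with PBKDF2 standing in for BCrypt (which is a third-party package); the tiny wordlist is a stand-in for rockyou.txt:

```python
import hashlib
import hmac
import os

# 1) The attack: an unsalted MD5 hash falls to a simple wordlist lookup.
wordlist = ["123456", "password", "iloveyou"]  # stand-in for rockyou.txt
stolen_hash = hashlib.md5(b"iloveyou").hexdigest()

cracked = next(
    (w for w in wordlist if hashlib.md5(w.encode()).hexdigest() == stolen_hash),
    None,
)
print(cracked)  # iloveyou

# 2) The fix: a random per-user salt plus a deliberately slow KDF.
#    (Stdlib PBKDF2 stands in for BCrypt here; the principle is the same.)
def hash_password(password: str) -> bytes:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt + digest  # store salt alongside the digest

def verify_password(password: str, stored: bytes) -> bool:
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

Because each user gets a fresh salt, precomputed wordlist hashes are useless, and the 600,000 iterations make each guess expensive for the attacker.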
Adopt the 'Never Miss Two Days' rule for key habits
Rob Dial explains that consistency is about the speed of recovery, and preventing a total lapse is more important than a perfect streak.
First step: Pick one 'laughably small' habit (e.g., 60 seconds of meditation) and commit to never missing two days in a row for the next 30 days.
Practice Panoramic Vision during high-stakes tasks
Ido Portal and Andrew Huberman show that reaction times are 4x faster in peripheral mode via the magnocellular pathway.
First step: While working or moving, consciously soften your gaze to take in the entire room's periphery to lower systemic anxiety and increase alertness.
Explore Sparse Autoencoders for model steering
Goodfire AI shows that SAEs allow for surgical edits to model behavior (like removing PII or bias) without full retraining.
First step: Visit Neuronpedia to visualize SAE features and experiment with activation steering on open-source models like Llama 3.
Final Thought
The future of AI and human performance is shifting from 'brute force' to 'surgical precision.' Whether it is steering 1-trillion-parameter models, verifying hardware with formal logic, or mastering the neurobiology of vision, the theme of 2026 is the removal of the 'black box.' By grounding digital systems in physical reality and formal logic, we move toward a world where technology is not just powerful, but provably safe and intentionally designed.