2 min read

Everything is Computable

More than 15 years ago, Adam brought a copy of The Singularity Is Near to math class. It's been incredible to watch the industry sprint toward Kurzweil's 2029 AGI prediction ever since.

The last decade has seen historic milestones like AlphaGo, ChatGPT, and Waymo. We might point to Google's acquisition of DeepMind as AI's crossing of the Rubicon from papers to products. Regardless, this is an unprecedented time for founders to build, with new platforms and endless possibilities at our fingertips.

On my road trip to SF last month, I listened to Lex interview Terence Tao and then Demis Hassabis. Despite my MIT degree in theoretical computer science (and familiarity with the Church-Turing thesis, etc.), these interviews somehow crystallized for me the power of matrix multiplication.

Within my natural lifespan, it's conceivable that a computer will be able to do everything I can do today, even better. Consensus now puts that within the next 30 years or so. Does this include love?

I'm autistic enough to see the world mathematically. Maybe not quite on Terry's or Demis's level, but at least enough to view things fairly functionally. Yet I'm still human enough to empathize with emotional creatures too.

I believe in AI achieving all superhuman capabilities. I believe this now more firmly than ever because everything good I contribute to the world is essentially reducible to certain ideas. Many of these are described, or yet to be described, on this very blog (or Seth's).

We express ideas through language. And we use money to coordinate people around them.

Ricky Gervais has a great bit about scientific facts being immune to book-burning, whereas the Bible may not have the same verbatim quality. But if you buy Girard's Mimetic Theory, perhaps human nature spawns truthful biblical stories and atheists are quibbling over semantics.

Doesn't matter. Ideas exist outside of us. Even if we die, ideas live on.

When I give Phu advice to improve his sales pitch or copy for SaynSave, he can test it out versus his original wording and see the results for himself. But he usually just nods and says "I understand" after I explain something in sufficient detail. He'll get to the same place by listening to other founders or investors, perhaps even faster! He is exceedingly truth-seeking and low-ego.

Good ideas have predictive power. They are shortcuts.

Although human systems are complex and not always easily compressed into words, many of the most profound and complicated ideas are easily conveyed, such as "Make something people want." I believe something like this may become a sort of rallying cry, a tent pole for AI alignment.

We are so very finite. But if we can learn in 30 years to be a friend, have a family, run a business, and learn from our mistakes... constantly moving towards better and more complete ideas...

a friend's model

For humans, becoming a positive-sum adult is simply a matter of experience and exposure. We hard-code certain heuristics after running into enough walls head-first.

Why not a pattern-absorbing machine that devours libraries in an instant and grows ~100x in brainpower every 4 years?