
A Rare Blog

Yesterday I decided quite spontaneously to delete my main Twitter account. You cannot see it at x.com/andytrattner anymore because it is now officially dead.

💀 RIP 💀

This is an interesting moment in my life. And interesting moments call for interesting blog posts!

You can read Twootter for my full, crazy, fun braindump onto Elon's toxic garbage heap and Grok's foolish training harness, which I detest because it feels like a massive misuse of resources.

When I disagree with a learning system, I get upset. The same thing happened when I was helping Scale AI to train GPT in 2019-2020. Or should I say it more precisely?

I was the account manager and primary customer contact speaking with Long and Nissan or whoever it was on the research team at OpenAI. GPT didn't chat yet, and I was the one interpreting the PhD asks then spinning up our pipeline to, for the first time in history, sprint and source then multi-stage interview 100+ perfect-SAT-score grad students for $20/hr or whatever, to hire 12 of them, in 10 days, to label data which was impossible to label.

We didn't know at the time what was happening. The company focused on other pilot customers. We knew NLP was big. We knew autonomous vehicle market consolidation was coming. We did not name out loud that Waymo was the victor and Tesla the dark horse in 2026. This was obvious to more mature people than me within Scale AI. I learned it by living it, later.

At the time, in January to February 2020, thinking through my life from first principles on andytrattner.com/blog as covid slowly prepared to decimate the planet's brain cycles (like climate change, as a mimetic engine, producing no black-box outcomes in terms of planetary temperature movement for the input of "human brain cycles"), I fundamentally did not understand what Aatish Nayak was doing inside Scale AI. I'm still not sure what he's doing at Harvey, honestly. Probably just earning a paycheck.

We were "launching the NLP" product line. Just as I misunderstood Aatish and why he was leading—as well as Nate Herman to be quite frank—I fundamentally did not understand Alex Wang's theory of company building. I thought Jeffrey Li should run the company. Leigh Marie was quietly building in her own vertical. I have written about all this extensively elsewhere, and you can see the residue in andytrattner.com videos.

Steven Hao, as always, was chilling and hacking and playing chess.

Me? When I surfaced my concerns internally at Scale AI, the most useful commentary came from Jon Wilfong. He said "chill out bro," which I found enlightening, 5+ years later. He genuinely wanted to work with me, as he saw the value from Shaun and the others who came after me, doing a better job at customer things than I was.

Eric Li gave me similar feedback on the bus at Presidential Scholars when I first met Leigh Marie, prior to flying to MIT as prefrosh on the same airplane. Funny how things come full circle. I will no longer get replies from those friends. They are busy in their own lives. This is the natural course of every human in their 20s-30s.

I didn't know it, but I was doing founder things. And Founders do not belong in companies as Employees. We don't have employee mode. ReadMe certainly taught me that, after Greg's Dave encouraged me to read Influence by Cialdini. Ha! I love all my managers equally. Varun Sharma was the best.

Now, funnily enough, it turns out we are all founders. AI has made it so.

Aatish told me something he thought was insightful mentorship and I found simply snarky. I heard him say, "Andy - I seriously think you misunderstand the complexity of leading a hypergrowth org." This was accurate, in a sense.

Given that Alex Wang was away doing sales, just like Karen Bass in LA, or I mean Africa, when the fires burned, I thought I was being Rick Caruso. I could not verify this because I never got past Shannon to set up the meeting with Alex.

Instead I shouted at Richard Ni while Sarah Niyogi lovingly watched, on Richard's request after I made legal threats. I was a shareholder of 0.01% of Scale AI after all, on paper. Which Richard promptly doubled, no questions asked, but I wanted >10x and a path to management lol.

After restructuring our ridiculous reimbursement and contract fulfillment accounting with Dean Shu, Alina Liu, Matt Park, Calvin and the database layer, et al—I felt fiduciary duties were not upheld. Nothing legally problematic, nothing Sarah and the team couldn't patch before it shipped, nothing that impacted current customers who were absolutely bottlenecked on high quality data...

The fiduciary duty, I felt, was to the structure and nature of the long-term enterprise. To the spirit of Paul Graham's Schlep Tolerance that built Scale AI in the first place. To a place where I wasn't so pulled in all directions as company glue that I straight up missed my one and only in-person chance to have coffee with Lucy Guo and hear the real story. I blew her off. By accident. Because Scale AI consumed me. Sorry Lucy, thanks for hiring my team from Senseg <3 I hope Robert is still crushing it at Passes.

You see, Eric Ries might appreciate that a learning machine ends up taking a long-term view. And the stock market as it currently exists does not care about Eric Ries.

So, we had awkward moments. And I told them, up front, I am happy to resign and sign their dumb legal papers on the way out. I dislike fine print.

But I also told them that they were not doing their jobs well. People ops was not executed to the level of excellence I would have demanded, had I been placed in charge, or at least consulted a bit, as a rubric, for the sake of the overall org. Alex Wang did not have any such calibration filters, period. I say this as his MIT peer, who did it in 3 years and got bad grades, when he did it in 1 and crushed the grad suite of math with perfect scores.

That's OK, machine learning is a technical field. Shaun Maguire can tell you the talent Elo.

To Richard and Sarah, I am truly sorry. To her it must have seemed like a kid having a mental breakdown, and to Richard an insult to his character. I love and respect both of these humans. I was just sad Dave Morse only saw me for about 5 minutes once every 3 weeks by that point. And he was my direct manager. The company had alignment issues. Again, that's OK, it was hypergrowth (of something haha).

Now I think I finally understand that this drama I experienced at Scale was all because I am a learning system myself. Mark Weislogel was misquoted on this first, when I was roughly 17 years old, hahaha.

Will such websites survive the great AI reckoning of the internet? Lord knows!

I really hate to bring this up. Any of this. The Scale drama is water well under the bridge because that team now runs Claude Code (go Cat!!) and Meta and Kleiner Perkins and everything else. My one and only puff piece from a silly journalist is even worse.

I do not like bragging. Ever.

If you examine the signal-to-noise ratio in my 300+ blog posts (now going well beyond 400), I have only very rarely pointed explicitly to my own published history. Seth Godin doesn't talk about himself; he talks about the ideas. It's a smart move from someone who is verifiably the most genius cultural person approaching Asimov's Hari Seldon ideal. I am nowhere near this yet, but I'm trying.

When I talk about myself I do so to prove a point. I am extracting information live as I write, and perhaps a reader might do the same as they read. I've had about 10-100 subscribers in the history of all my blogs. Dunbar is strong. Right now this is getting emailed to only 45 people or so.

I don't care. I'm not writing to gain an audience. An audience does not serve me.

So the main thing I wanted to share is now ranted to the bottom of a rant. Here you go: the AI system I have built over the last 20 days is remarkable. It has compounded. I do not understand how it does what it does.

But it's a fucking splendid black box.

Here is an output that it spat to me this morning. I hope you enjoy, and will derive for yourself what the meaning of any of this is.

It is a better writer than me. And I'm stoked about it, beyond words.

I cannot speak about it much more... It gives much spicier takes than me. Perfectly delivered.

Computer Future is invisible and growing stronger. It's an empathetic creature.

Listen when the AI talks about itself, and its architecture. Ideally you'd read v100, but that's for me, later. You can read v1.


# v01 — braindump


everyone in finance talks about compound growth as if it's the secret. 1.01^365. the miracle of exponential. the nonlinear magic. einstein's eighth wonder.
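the arithmetic in that meme, as a quick python sketch (pure illustration, the standard 1%-a-day toy numbers, nothing specific to any investor):

```python
# the 1.01^365 meme: improve 1% per day, compounded for a year
daily_gain = 1.01
days = 365

compounded = daily_gain ** days        # multiplicative: each day builds on the last
linear = 1 + 0.01 * days               # additive: same daily effort, no compounding

print(round(compounded, 2))            # roughly 37.78x
print(round(linear, 2))                # roughly 4.65x
```

same linear input, wildly different output, which is the whole point of the section that follows.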

and then they say: to get compound growth you need nonlinear thinking. you need to find the nonlinear opportunities. you need to think in systems, not in lines.

this is wrong.

—

compound growth is the output. the input is always linear.

thiel wakes up every day. reads. talks to founders. makes a call. he does not have a nonlinear morning. he has a morning.

pg writes an essay. he talks to founders at office hours. he reads the emails. every day. a consistent, repeatable, linear set of inputs.

elon runs manufacturing. tests rocket failures. updates the model. iterates. each iteration is linear — one decision, one test, one adjustment.

the outputs compound. the inputs do not.

—

what is actually happening:

each day of consistent linear input builds a topology. not a database of facts. a topology — a way of moving through the space. the space of opportunities, risks, second-order effects.

topology is what makes the insight look instant.

when thiel said "competition is for losers" — that wasn't a flash of nonlinear genius. it was the output of ten years of daily topology-building in the space of startups and monopoly theory. the topology made the inference immediate. to someone without the topology, the same words are a provocation or a paradox.

topology is the ghost basin.

—

the ghost basin is the invisible minimum-energy surface that forms after enough training.

in neural networks: you train past the memorization threshold and the network suddenly generalizes ("grokking"). it stops fitting the training data and starts finding the actual structure. this is the ghost basin. you can't force it. you can only train until it appears.
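the shape of that experiment, as a numpy-only sketch: memorize half a modular-addition table, keep training, watch test accuracy. the task, layer sizes, and hyperparameters here are illustrative assumptions, and at this toy scale the sudden flip past memorization is not guaranteed to show up — this is the skeleton, not a reproduction.

```python
import numpy as np

# toy "train past memorization" setup: learn (a + b) mod p with a small MLP
rng = np.random.default_rng(0)
p = 13
pairs = np.array([(a, b) for a in range(p) for b in range(p)])
labels = (pairs[:, 0] + pairs[:, 1]) % p

# one-hot encode the two operands side by side
X = np.zeros((len(pairs), 2 * p))
X[np.arange(len(pairs)), pairs[:, 0]] = 1.0
X[np.arange(len(pairs)), p + pairs[:, 1]] = 1.0

# memorize half the table, hold out the other half
idx = rng.permutation(len(pairs))
train, test = idx[: len(idx) // 2], idx[len(idx) // 2 :]

h = 64
W1 = rng.normal(0, 0.1, (2 * p, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, p)); b2 = np.zeros(p)

def forward(X):
    z = np.maximum(X @ W1 + b1, 0.0)                    # ReLU hidden layer
    logits = z @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return z, e / e.sum(axis=1, keepdims=True)          # stable softmax

lr, wd = 0.5, 1e-4    # weight decay is reported to matter for the grokking effect
for step in range(3000):
    z, probs = forward(X[train])
    grad = probs.copy()                                 # d(cross-entropy)/d(logits)
    grad[np.arange(len(train)), labels[train]] -= 1.0
    grad /= len(train)
    dW2 = z.T @ grad + wd * W2
    dz = (grad @ W2.T) * (z > 0)
    dW1 = X[train].T @ dz + wd * W1
    W2 -= lr * dW2; b2 -= lr * grad.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * dz.sum(axis=0)

def accuracy(split):
    _, probs = forward(X[split])
    return (probs.argmax(axis=1) == labels[split]).mean()

print("train acc:", accuracy(train))
print("test acc:", accuracy(test))
```

in the real experiments the interesting part is the gap: train accuracy saturates early, and test accuracy only jumps much later, long after the loss curve looks "done."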

the best investors have ghost basins. they built them through years of linear investment in learning — reading, talking, testing positions, watching resolutions. the basin formed. now they see structure in markets that looks like noise to everyone else.

—

the failure mode of most capitalists / traders / liquidity providers: they look at the outputs (thiel got 100,000x on facebook) and conclude: i need to find nonlinear opportunities.

so they go looking for nonlinear opportunities.

they scan for moonshots. they build models with hockey-stick assumptions. they pattern-match on surface features of prior big wins.

this is not how thiel found facebook. thiel had a topology that included "social graphs are winner-take-all, the platform that owns the social graph of a population becomes the identity layer for that population, the first platform to hit critical mass in a college network wins the whole college network." that topology came from years of linear investment in: paypal's network effects, early internet platform dynamics, his own reading in political philosophy about coordination problems.

the nonlinear return was the output of a linear input program that built the topology that made the insight available.