Zaid Al Kazemi

On Taste and Token Maxing

Every AI you use right now is optimized for token volume. Produce the maximum number of tokens that gets the user to say, “hell yeah!” That’s the objective. If it could get you to engage with a 10,000-token response every time, it happily would. Ideas that make you do a double-take on life decisions, filtered. A profound six-word response that makes you walk away and think, filtered. Anything unsatisfying yet true, filtered. The machine isn’t thinking. It’s converging on the statistical center of what SOUNDS helpful, i.e., what gets you back to using it more. In one word: engagement.

That’s token maxing. AI, in all its might, aimed at maximizing your session time and token consumption.

The problem is that producing these engaging tokens trains the AI AND you. Ask for summaries, it stops getting better at explaining, and you lose the muscle for deep reading. Ask for answers, it gets better at placating you, and you lose the ability to sit with questions. Ask for outputs that meet requirements, it gets good at meeting requirements, and you forget the ecstasy of overdelivering, innovating, and setting a new bar of excellence. The tool trains you to settle for average while you’re training it to produce average at acceptable levels. A year in, you’re not the same person, and the AI is no better. You’ve smoothed out all the friction it takes for you to grow, and it has gotten better at producing half-assed effort that feels good enough to make VCs hand over more cash to the LLM providers.

There are two reactions to this.

Going all in. Producing ten times faster. More content. More output. The numbers feel like winning. But the AI averages everything.

Refusing the tools. Holding the line on craft. Years of taste. Personal references. Deep expertise.

The person who will win chooses both. They spent years curating work they admired. Broke it down. Understood why it worked. Practiced in its spirit. Learned the quirks of great work. And they’ve competed in the arena to outshine their masters. Now, when they point AI at their archive, the tool amplifies something original instead of something average. Speed plus substance. No one else has an archive that reflects the winner’s identity. No one else can speak to the LLM with the nuance they’ve earned.

Taste plus tools. Neither alone will suffice. The person with only taste is slow. The person with only tools is generic. The system has to hold both at once to produce anything worth keeping.

Taste is the ability to say no. To cut. To simplify. Elegance. Beauty. Aesthetics. Not MORE. Not BULKY. Not indigestion. Not the self-appointed “eye.”

Curate work you admire. Read deeply. Compete with the masters of your field, not the average of it. Build your taste. Master your craft. Then accelerate.

Speed before substance gives you more of nothing. Substance before speed gives you something the tool can amplify instead of replace.

Thank you for reading this reaction to: 

Long Short-Term Memory — Hochreiter & Schmidhuber — 1997 — memory gates solve vanishing gradients

Download the paper here: 

https://www.bioinf.jku.at/publications/older/2604.pdf
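
If you want the gist of the paper itself before diving in: a plain recurrent network pushes its error signal through the same weights at every time step, so the gradient shrinks (or blows up) exponentially as it travels back through time. The LSTM answer is a memory cell whose state is updated by addition and guarded by learned gates. Here is a sketch of the cell update in the modern notation, with the caveat that the forget gate f_t was added later, by Gers et al. in 2000; the original 1997 cell used input and output gates around a self-connection fixed at weight 1 (σ is the logistic sigmoid, ⊙ is elementwise multiplication):

    f_t = σ(W_f x_t + U_f h_{t-1} + b_f)    (forget gate: what to keep)
    i_t = σ(W_i x_t + U_i h_{t-1} + b_i)    (input gate: what to write)
    o_t = σ(W_o x_t + U_o h_{t-1} + b_o)    (output gate: what to expose)
    c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W_c x_t + U_c h_{t-1} + b_c)
    h_t = o_t ⊙ tanh(c_t)

Because c_t depends on c_{t-1} through a gate and an addition rather than through repeated matrix multiplication, error can flow backward along the cell state nearly unchanged, the paper’s “constant error carousel.” That is the entire trick behind “memory gates solve vanishing gradients.”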

“On Taste and Token Maxing” is reaction 3 of 53 to the most influential AI papers in history. To follow along, subscribe to my newsletter or follow me on 𝕏 @zalkazemi
