AI Research Reaction #5: Mixability and Evolution
On being robust solo, combining in multiplicative ways, and being the one every great team wants… Reaction to reading: “Dropout” by Srivastava, Hinton et al.
Guess what happens if you randomly silence half a neural network during training? It gets better! In 2014, Geoffrey Hinton’s team published a paper on a technique called dropout that did exactly this. Force nodes in a neural network to work without their usual partners and the network learns to generalize instead of memorize. Dropout improved vision, speech recognition, and document classification models; it worked better across the board.
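The mechanics are simple enough to sketch in a few lines. This is a minimal NumPy version of the common “inverted” dropout variant (which scales survivors at training time; the original paper instead scaled weights at test time), not the authors’ actual code:

```python
import numpy as np

def dropout_forward(x, p_drop=0.5, training=True, rng=None):
    """Inverted dropout: during training, zero each unit with probability
    p_drop and scale survivors by 1/(1 - p_drop) so the expected
    activation stays the same; inference runs the full network as-is."""
    if not training or p_drop == 0.0:
        return x  # no silencing at inference time
    rng = rng or np.random.default_rng()
    keep = 1.0 - p_drop
    mask = rng.random(x.shape) < keep  # each unit survives with prob keep
    return x * mask / keep

# Toy demo: a layer's activations with roughly half the units silenced.
activations = np.ones((4, 8))
out = dropout_forward(activations, p_drop=0.5, rng=np.random.default_rng(0))
```

On any given forward pass, each unit sees a different random subset of its collaborators, which is exactly the “work without your usual partners” pressure described above.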
What if we applied the same optimization to ourselves? What if we were forced to thrive alone AND with other people as capable as we are? What if we were maximally mixable?
First, we must become self-sufficient enough to stand alone. Years of developing real capacity, independent of any specific team, tool, or context. Second, we learn to combine so that other self-sufficient people thrive with us. Both capacities develop in parallel, but awareness of which one needs more attention at a given stage is the useful part. Dropout trains networks under conditions that force both at once. Each unit has to be useful when its collaborators disappear, and it has to combine cleanly with whatever random subset is present on any given pass. Same logic for humans.
Investing in becoming mixable is expensive upfront but pays forever. Dropout makes training two to three times slower than training without it, but the payoff is a model that keeps performing for every inference after. Similarly, the work to become robust, i.e. mixable, compounds for the rest of your life.
The temptation right now is to skip the investment. AI looks like an escape route. Mediocre writers ship passable writing with it. Non-coders ship apps. Early adopters are producing outsized output because most people haven’t caught up yet. This is arbitrage, and arbitrage windows close. Once everyone has the tools and knows how to use them, the differentiator shifts back to what you bring. Your signal, your taste, your capacity to think, and your ability to mix and combine.
Your environment will change whether you want it to or not. New collaborators show up, old ones leave, markets turn, tools evolve. The person who can only thrive in one specific configuration is betting that the configuration won’t change. It inevitably does. So how do you avoid being the bag-holder? Become the anti-chaotic mixable node: a person whose presence stabilizes any system they’re part of, whether that system is them alone or them with others.
What does peak mixability look like? It looks like Jobs and Ive. McCartney and Lennon. Jordan and the Bulls. Two or more formidable people who chose to keep combining because the output kept compounding. Both parties were already self-sufficient before they met. The partnership didn’t make them capable, it made them amplified. This is the reward of mixability. Deep partnerships between people who don’t need each other but produce their best work together.
This isn’t generalism. It’s self-sufficient expertise that combines well. You remain a specialist. You also become someone whose expertise travels across codebases, teams, tools, collaborators, and time.
Be the best at what you do. Then anticipate every way it can combine to become something greater. Combine and prosper.
Thank you for reading this reaction to:
Dropout — Srivastava, Hinton et al. — 2014 — Random silence prevents network overfit
Download the paper here:
https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
“Mixability and Evolution” is reaction 5 of 53 to the most influential AI papers in history. To follow along subscribe to my newsletter or follow me on 𝕏 @zalkazemi